Dataset columns:
entry_id: string, length 33
published: string, length 14
title: string, length 16 to 215
authors: sequence, length 1 to 584
primary_category: string, 117 classes
categories: sequence, length 1 to 7
text: string, length 7 to 396k
http://arxiv.org/abs/2405.09108v1
20240515054831
A Linear Test for Global Nonlinear Controllability
[ "Karthik Elamvazhuthi" ]
math.OC
[ "math.OC", "cs.SY", "eess.SY" ]
A Linear Test for Global Nonlinear Controllability Karthik Elamvazhuthi ============================================================================== It is known that if a nonlinear control affine system without drift is bracket generating, then its associated sub-Laplacian is invertible under some conditions on the domain. In this note, we investigate the converse. We show how invertibility of the sub-Laplacian operator implies a weaker form of controllability, where the set of states that can reach a neighborhood of a given point has full measure. From a computational point of view, one can then use the spectral gap of the (infinite-dimensional) self-adjoint operator to define a notion of degree of controllability. An essential tool in establishing the converse result is the relation between invertibility of the sub-Laplacian and the controllability of the corresponding continuity equation using possibly non-smooth controls. Then, using Ambrosio-Gigli-Savaré's superposition principle from optimal transport theory, we relate it to controllability properties of the control system. While the proof can be considered of the Perron-Frobenius type, we also provide a second, dual, Koopman point of view.

§ INTRODUCTION

Suppose we have m ≤ d smooth, globally Lipschitz vector fields g_i ∈ C^∞(ℝ^d;ℝ^d), for i =1,...,m. A standard problem in control theory is the following: given x, y ∈ ℝ^d, does there exist a control u(t) such that the system of ordinary differential equations (ODE) γ̇(t) = ∑^m_i=1 u_i(t) g_i(γ(t)) admits a solution satisfying γ(0) = x and γ(1) = y? By now there is an extremely rich body of work on this topic <cit.>. In this brief note, we consider the relationship between control systems of the type (<ref>) and a related class of partial differential operators. For this, we define the associated first order partial differential operators that act on smooth functions h(x) through the operation X_i h = ∑_j=1^d g_i^j(x) ∂/∂ x_j h, and their formal adjoints X_i^*h = -∑_j=1^d ∂/∂ x_j (g_i^j(x) h). Given these two families of operators, we can introduce the sub-Laplacian operator Δ_H h = -∑_i=1^m X_i^*X_i h.

The relationship between this operator and the control system (<ref>) is well understood in the partial differential equation (PDE) community. For example, the system (<ref>) being bracket generating of order r[The system (<ref>) is said to be bracket generating of order r if the span of repeated Lie brackets of the vector fields {g_1,...,g_m}, of up to order r, has full rank at all points.] is a sufficient condition for hypoellipticity <cit.> when the vector fields are smooth, as shown by Hörmander <cit.>. However, hypoellipticity is not a computationally testable hypothesis. Therefore, we are interested in an alternative property of the sub-Laplacian that can be used to infer controllability of the system (<ref>). For example, invertibility of the operator is one such criterion. In this regard, the following is known when the system (<ref>) is considered on an open bounded set Ω.

(Bracket generation implies invertibility of the sub-Laplacian) Suppose the system (<ref>) is bracket generating of order r. Let Ω be an ϵ-δ subdomain of ℝ^d. Then Δ_H is invertible on L^2_⊥(Ω), the subspace of functions in L^2(Ω) that integrate to 0 on Ω.

For the notion of an ϵ-δ domain and examples, we refer the reader to <cit.>. When Ω is a compact manifold without boundary this result is well known, although it is hard to find a specific source, as the arguments follow very closely those for the classical Riemannian Laplacian operator Δ.
Particularly, the local regularity of the solutions due to subellipticity of the operator (<ref>) can be used to conclude global regularity, which implies the domain of the operator Δ_H is compact, which implies discreteness of the spectrum and hence invertibilty. The above result for domains with boundary can be found in <cit.>, stated for the case when vector fields {g_i } are left-invariant on a Lie group, homogeneous with respect to a dilation and the Lie algebra generated is nilpotent. Invertibility also holds for more general vector fields, as shown in <cit.>, owing to the results of <cit.>. This has also be observed in <cit.> with only a partial proof. These results still hold if the system is controllable but not bracket generating as long as the control metric defined by system (<ref>) is complete and continuous with respect to the Euclidean distance. Our goal in this note is to consider a converse version of the above statement. There are two views one can take on why the converse could be true. The Perron-Frobenius view and the Koopman view <cit.>. These approaches have been fruitful in using linear finite-dimensional methods for control of finite dimensional nonlinear systems <cit.>. §.§ Perron-Frobenius view In the Perron-Frobenius view, one looks at how a dynamical system pushes forward an ensemble of particles, represented by probability densities, under it's flow. In the context of control systems, the question translates to how one can use control to manipulate a family of initial conditions simultaneously. The invertibility of the operator Δ_H has been used in <cit.> to adapt Moser's theorem for transporting densities under the action of the control system (<ref>), by looking at the controllability of the corresponding continuity equation ∂ρ/∂ t + ∇_x · (∑_i^m u_i(t,x)g_i(x) ρ) =0, which describes how an ensemble of initial conditions x(0), distributed according to ρ_0, evolve under the action of the controls. While <cit.> starts with the assumption of bracket generating property of (<ref>), this assumption is not required, and one can instead just assume that the operator Δ_H is invertible, as stated in the form below. The proof then works the same way, except one just has to check the (<ref>) is solved in a weak sense. See Section <ref>. One particularly has the following. (<cit.> Invertibility of Δ_H implies controllability of continuity equation) Suppose Δ_H has a bounded inverse on L^2_⊥(Ω). Let ρ_0, ρ_1 ∈ L^2(Ω) be functions such that integrate to 1 over Ω and are uniformly bounded from below on Ω by a positive constant c>0. Then there exists control law u :(0,1) ×→ such that μ_t is a solution to the continuity equation (<ref>) where μ_t is given by μ_t = ρ_0 + t(ρ_1 - ρ_0) t∈ [0,T] Moreover, a choice of controls is the following, u_i(t,x) = X_i f(x)/ρ(t,x) for almost every (t,x)∈ (0,1) ×Ω where f = Δ_H^-1(ρ_1 - ρ_0) The validity of this theorem can be seen by expressing equation (<ref>) in the form ∂ρ/∂ t + ∑_i^m X^*_i(u_i(t,x)ρ) =0. Then the the result can be seen formally by plugging in the control (<ref>) above. In fact, invertibility of Δ_H can be used to not just control between probability densities, but even exactly track trajectories of probability densities that are sufficiently regular <cit.> (see also remark made in <cit.> for the case without boundary). 
The idea is that if ρ(t,x) is a trajectory of probability densities, then the controls u_i(t,x) =X_i (Δ_H^-1∂_tρ(t,x)) /ρ(t,x) achieves exact tracking for solution ρ(t,x) of (<ref>), as long as the system (<ref>) is controllable. This should be very surprising, as controllability at the ODE level (<ref>) from one point to another implies controllability of trajectories of (<ref>) along probability densities, provided the densities are sufficiently regular. Due to scant results on the boundary regularity properties of solutions of Δ_Hu = f for function f, the flow generated by (<ref>) is not necessarily well defined (an incorrect statement made to the contrary in <cit.> that the flow is a diffeomorphism). Therefore, a drawback is that the control constructed this way, even when (<ref>) is known to be controllable, is not guaranteed to be smooth (unless Ω is a compact manifold without boundary). Hence, the corresponding continuity equation (<ref>) and ODE (<ref>) can develop non-unique solutions under the action of this control law, and relating solutions of (<ref>) and (<ref>) doesn't follow from classical results on flows of ODEs with Lipschitz right hand sides. This is especially an issue when trying to prove a result in the converse direction since one cannot even conclude local regularity of the solutions in the interior of the domain from the invertibility of the operator Δ_H. Despite this hurdle of non-smoothness, one can still establish a correspondence between the two, thanks to a superposition principle proved in <cit.>, as we show in this note. The goal is to prove the following result. (Main result: Invertibility implies approximate controllability to balls) Suppose Δ_H is invertible on L^2_⊥(Ω). Then the set of states Reach^-1 (B_R(y)) ⊆Ω that can reach B_R(y), the open ball around y, along trajectories of (<ref>) has full Lebesgue measure in Ω, for every R>0 and every y ∈Ω. The result can be stated in an alternative way using the spectral gap λ of the operator Δ_H. This is attractive from a computational point of view, as one arrives at a (infinite-dimensional) convex test for nonlinear controllability. Here WH^1_Ω is the Horizontal Sobolev space defined in Section <ref>. (Spectral gap as a degree of controllability) Let λ >0. Suppose Δ_H satisfies the Poincaré inequality ∫_Ω |f(x)|^2dx ≤1/λ∫_Ω∑_i=1^m |X_i f (x)|^2dx for all f ∈ WH^1_⊥(Ω)[This is the Horizontal Sobolev space of zero mean square integrable functions that are weakly differentiable along the g_i. See section <ref>]. Then the set of states Reach^-1 (B_R(y)) ⊆Ω that can reach B_R(y) along trajectories of (<ref>) has full Lebesgue measure in Ω, for every R>0 and every y ∈Ω. An advantage of this formulation is that the value λ gives a notion of degree of controllability. The spectrum of the operator satisfies spec(-Δ_H) ⊆{ 0 }∪ [λ, ∞) (the restriction of the operator on L^2_⊥(Ω) has its spectrum contained in [λ, ∞). If ρ_0 and ρ_1 are probability densities are positive everywhere on Ω and essentially bounded on Ω. Let f = ρ_1 - ρ_0. Then from Lax-Milgram theorem <cit.> we have the bounds ∫_Ω∑_i=1^m |X_i u(x)|^2 ≤1/λf^2_2. Therefore, this gives us a bound for the feedback control (<ref>). Smaller λ>0 implies the operator Δ_H is close to being non-invertible and hence resulting in a higher difficulty in controlling the system (<ref>), and therefore approximately controlling (<ref>). One can think of the Δ_H as a nonlinear analogue of the controllability grammian for linear time invariant control systems <cit.>. 
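To make the spectral-gap test concrete, the following is a rough numerical sketch (not taken from this note) of how λ could be estimated in practice: the horizontal derivatives X_i are discretized by finite differences on a grid over an assumed domain Ω = (0,1)^2 for an assumed Grushin-type pair of fields g_1 = (1,0), g_2 = (0,x_1), and λ is read off as the smallest nonzero eigenvalue of the resulting discrete quadratic form restricted to mean-zero grid functions. The fields, the domain, the grid and the boundary treatment are all illustrative assumptions, and the quadrature near the boundary is deliberately crude.

import numpy as np

# Rough finite-difference sketch of the spectral-gap test: discretize
# X_i h = g_i . grad h on a uniform grid over Omega = (0,1)^2, assemble the
# discrete quadratic form sum_i |X_i f|^2, and estimate lambda in the Poincare
# inequality as its smallest eigenvalue on mean-zero functions.
n = 30
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X1, _ = np.meshgrid(x, x, indexing='ij')
N = n * n

def d_matrix(axis):
    """Finite-difference matrix for d/dx_axis (central inside, one-sided at edges)."""
    D = np.zeros((n, n))
    D[0, 0], D[0, 1] = -1.0, 1.0
    D[-1, -2], D[-1, -1] = -1.0, 1.0
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5, 0.5
    D /= h
    I = np.eye(n)
    return np.kron(D, I) if axis == 0 else np.kron(I, D)

Dx1, Dx2 = d_matrix(0), d_matrix(1)
fields = [(np.ones(N), np.zeros(N)),      # assumed g_1 = (1, 0)
          (np.zeros(N), X1.ravel())]      # assumed g_2 = (0, x_1)

A = [np.diag(a) @ Dx1 + np.diag(b) @ Dx2 for a, b in fields]   # A_i f ~ X_i f
L = sum(Ai.T @ Ai for Ai in A)            # discrete analogue of sum_i X_i^* X_i

P = np.eye(N) - np.ones((N, N)) / N       # projection onto mean-zero grid functions
eigs = np.linalg.eigvalsh(P @ L @ P)
print('estimated spectral gap lambda ~', eigs[1])   # eigs[0] ~ 0 (constants)

In line with the remark above, a small estimated λ signals an operator close to non-invertibility, and hence a system that is comparatively hard to control.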
§.§ Koopman View Another view that one can take on this matter is using the dual Koopman point of view by looking at how functions evaluated along the flow evolve. Suppose A is an invariant set such that B:=Ω-A is also invariant along the flows along each of the vector fields g_i, then formally X_i ξ_A = lim_t → 0ξ_A (e^t g_i(x))- ξ_A(x)/t = 0 X_i ξ_B = lim_t → 0ξ_B (e^t g_i(x))- ξ_B(x)/t = 0 for all x ∈Ω, where ξ_A and ξ_B represent the characteristic functions of A, and B, respectively and e^t g_irepresents the flow map along g_i. Therefore, X_i(1/∫_A dxξ_A-(1/∫_B dxξ_B)=0 for each i = 1,..,m. This implies Δ_H is not injective on the set of square integrable functions that integrate to 0. To formalize this argument, one needs to ensure that ξ_A and ξ_B are weakly differentiable, since they are clearly not differentiable in the classical sense, and that the first and second equality in the relation (<ref>) hold true rigorously. This is done in Section <ref>. Lastly, one would hope that this approximate notion of controllability automatically implies exact controllability of (<ref>). However, in general, the reachable sets of (<ref>) could be dense in Ω without the system being (exactly) controllable. For example, the irrational winding on the torus. See <cit.>. However, this can possibly be avoided for bilinear systems, and left invariant systems on Lie groups <cit.>. § NOTATION AND BACKGROUND Here on, we will only require that the vector fields g_i ∈ C^2(Ω̅;). In this section, we define some notation and some results that will be used to prove Theorem <ref>. Let B_R(y) denote the open ball of around a point x ∈Ω. The space L^2(Ω) is the set of square integrable functions. We will say that function f ∈ L^2(Ω) has weak derivative u along the vector field g_i, denoted by X_if ∈ L^2(Ω), if ∫_Ωu(x) ϕ(x)dx = ∫_Ωf(x)X^*_iϕ(x)dx for all ϕ∈ C^∞_0(Ω). We define the space WH^1(Ω) = { u ∈ L^2(Ω); X_i u ∈ L^2(Ω) } equipped with the Sobolev norm f_WH^1(Ω) = f_2 + ∑_i=1^mX_if_2. Additionally, WH^1_⊥ : = L^2_⊥(Ω) ∩ WH^1(Ω). We will need the form σ: WH^1(Ω) × WH^1(Ω ) →ℝ. Associated with this form, we define the operator Δ_H : 𝒟(Δ_H) → L_2(Ω) with domain D(Δ_H): { u ∈𝒟(σ);∃ f ∈ L^2(Ω)  s.t. σ(u,ϕ) = ⟨ f,ϕ⟩_2  ∀ϕ∈ WH^1(Ω)} with the operator defined by Δ_H u = f if σ(u,ϕ) = ⟨ f,ϕ⟩_2   ∀ϕ∈ WH^1(Ω) We will also need the notion of invertibility of an operator. We will say that the operator Δ_H is invertible on L^2_⊥(Ω), if there exists an operator Δ_H^-1 : L^2_⊥ (Ω)→ L^2_⊥(Ω) such that Δ_H Δ_H^-1x = x for all x ∈ L^2_⊥ and Δ^-1_H is a bounded linear operator. Sometimes, instead of invertibility, we will be happy with injectivity of the operator Δ_H. In general, injectivity is weaker than invertibility for operators on infinite dimensional spaces. For self-adjoint operators, this is the case when the range of the operator is dense without being closed. An immediate consequence of these definitions is the following result on basic properties of the operator Δ_H. The space WH^1(Ω) is complete under the norm ·_WH^1(Ω), and σ(u,u) ≥ 0 for all u ∈ WH^1(Ω). Hence, the form σ is closed and semibounded. As a consequence, the operator Δ_H is self-adjoint. The completeness of property of the space WH^1(Ω) can be found in <cit.>. The rest is a classical correspondence between closedness and semiboundedness of a form and self-adjointness of the corresponding operator. We will relate solutions of (<ref>) to (<ref>) through probability measures defined on the space of curves. 
Toward this end, let Γ : = AC^2([0,1];) be the set of absolutely continuous curves x, with |ẋ| ∈ L^2(0,1;). We define the evaluation map e_t:×Γ→ given by, e_t(x,γ) = γ(t) for all t ∈ [0,1]. The map e_t takes any measure η on ×Γ and defines a measure on through the pushforward operation, defined by (e_t)_#η(A) = η(e_t^-1(A)) for all measurable A ⊆. We define the set Γ_ad⊂Γ by Γ_ad={ (x,γ) ∈Γ; ∃ u ∈ L^2(0,1;ℝ^m), st.t. γ̇(t) = ∑_i=1^m u_i(t)g_i(t)   and  γ(0)=x,  γ(t) ∈Ω  ∀ t ∈ [0,1], } Let v:[0,T] ×→ be a vector field. We recall the weak notion of solution of the continuity equation. Let t ↦μ_t be a family of probability measures. Consider the equation ∂_t μ_t + ∇_x · (v(t,x)μ_t) = 0 in  × (0,1) We will say that μ_t is a distributional solution to (<ref>) if μ_t is a solution to (<ref>) if ∫_0^1 ∫_ (∂_t ϕ(x,t) + v(t,x) ·∇_x ϕ(x,t) ) dμ_t(x)dt for all ϕ∈ C^∞_c(× (0,T)) The following result will be useful in the sequel. The controls we construct will be possibly irregular. Thus negating the possibility of relating solutions of (<ref>) and (<ref>) using classical results on constructing solution of (<ref>) using the flow map of the equation (<ref>). (<cit.> Superposition principle) Let t ↦μ_t be a narrowly continuous curve on the space of probability measures on satisfying the continuity equation (<ref>) satisfying ∫_0^1∫_ |v(t,x)|^2dμ_t(x) < ∞ Then there exists a probability measure η in ×Γ such that η is concentrated on the pairs (x,γ) ∈×Γ such that γ is a solution of the differential equation γ̇(t) = v(t,γ(t))  γ(0)=x. and μ_t = (e_t)_#η for all t ∈ [0,1]. In the context of control systems, this result can be specialized in the following trivial way. Let t ↦μ_t be a narrowly continuous curve on the space of probability measures on that is supported on Ω for all t ∈ [0,1]. Additionally, suppose the curve satisfies the continuity equation (<ref>) for v(t,x) = ∑^m_i u_i(t,x)g_i(x) for a feedback control law u : [0,1] ×→^m and ∫_0^1∫_∑_i=1^m|u_i(t,x)|^2dμ_t(x) < ∞ Then there exists a probability measure η in ×Γ_ad such that η is concentrated on the pairs (x,γ) ∈×Γ_ad such that γ is a solution of the differential equation γ̇(t) = ∑_i=1^m u_i(t,γ(t))g_i(γ(t))  γ(0)=x. and μ_t = (e_t)_#η for all t ∈ [0,1]. Lastly, we define some notions of reachability that will be used to prove Theorem <ref>. A set A is reachable from B, if there exists x ∈ B, y ∈ A and a control u ∈ L^2(0,1;^m) such that the solution (<ref>) satisfies γ(0) = x and γ(1) =y, with γ∈Ω. (Forward Reachable set) The forward reachable set of a set B ⊆, is Reach(B) :={ y ∈Ω; ∃γ∈Γ_ad  s.t. γ(0) = x,  γ(1) ∈ B} (Backward reachable set) The backward reachable set of a set B ⊆, is Reach^-1(B) := { x ∈Ω; ∃γ∈Γ_ad  s.t. γ(0) = x,  γ(1) ∈ B } Note that since the system (<ref>) doesn't have any drift, Reach^-1(B) = Reach(B) for all sets B ⊆Ω. § PROOF OF THE MAIN RESULT In this section we prove the main result (Theorem <ref>). §.§ Perron-Frobenius Approach We first make the following observation that the amount of mass flowing into a set, by moving along adissible curves, is bounded by the mass present in its backward reachable set. Suppose η is a probability measure on ×Γ is concentrated on the (closed) set Γ_ad. Suppose additionally, A,B ⊂Ω are measurable sets such that the set B is not reachable from the set A. Then (e_t)_#η(B) ≤ (e_0)_#η(Ω - A) for all t ∈ [0,1]. (e_t)_#η(B) = ∫_1_B(x) d(e_t)_#η(x) = ∫_×Γ 1_B(γ(t))dη(x,γ) ≤∫_Ω - A ×Γ 1_B(γ(t))dη(x,γ) ≤∫_×Γ 1_A - Ω(γ(0))dη(x,γ) = ∫_ 1_A - Ω(x)d(e_0)_#η(x) . 
= (e_0)_#η(Ω - A).

Next we prove Theorem <ref>, establishing that the control constructed in (<ref>) provides a solution of the continuity equation (<ref>) in the distributional sense. This amounts to reproving the result of <cit.> in the weaker setting where we do not assume that the flow of the corresponding differential equation (<ref>) is well defined. Define the L^2 interpolation ρ(t,·) = ρ_0 + t(ρ_1 - ρ_0) for all t ∈ [0,1]. It is easy to see that ∂_t ρ(t,·) = ρ_1-ρ_0 for all t ∈ [0,1], and that ∫_Ω ∂_tρ(t,x)dx = 0. Since Δ_H is invertible on L^2_⊥(Ω), there exists a unique solution f = Δ_H^-1 (ρ_1 - ρ_0). By definition of the operator Δ_H (via the form σ) we know that ∫_Ω ∂_tρ(t,x)ϕ(t,x)dx = ∑_i=1^m ∫_Ω X_i f(x) X_iϕ(t,x) dx for all t ∈ (0,1) and all ϕ∈ C_c^∞((0,1) ×ℝ^d). Since ∑_i=1^m X_i f(x) X_iϕ(t,x) = (∑_i=1^m g_i(x)X_i f(x)) · ∇_x ϕ(t,x), this can be rewritten as ∫_Ω ∂_tρ(t,x)ϕ(t,x)dx = ∫_Ω (∑_i=1^m g_i(x)X_i f(x)/ρ(t,x)) · ∇_x ϕ(t,x) ρ(t,x)dx. Integrating in time over (0,1) and integrating by parts in t (ϕ is compactly supported in time), we conclude that ∫_0^1 ∫_Ω ( ∂_tϕ(t,x) + (∑_i=1^m g_i(x)X_i f(x)/ρ(t,x)) · ∇_x ϕ(t,x) ) ρ(t,x)dx dt = 0 for all ϕ∈ C_c^∞((0,1) ×ℝ^d). By extending ρ trivially by 0 outside the domain Ω, the integral with respect to x can be taken over ℝ^d instead of Ω.

We are now ready to prove our main result. Proof of Theorem <ref>. Suppose the restriction of Δ_H to L^2_⊥(Ω) is invertible. Let K_α(r) = e^-r^2/α for all r ∈ ℝ. We define ρ_0, ρ^α_1 ∈ L^2(Ω) by ρ_0(p) = 1/∫_Ω dz,   ρ^α_1(p) = K_α(p-y)/∫_Ω K_α(z-y)dz for almost every p ∈Ω. The function ρ^α_1 converges weakly, as a probability measure, to the Dirac measure concentrated on y, denoted δ_y, as α→ 0. Let u_i(t,x) be the controls transporting the solution of the continuity equation from ρ_0 to ρ^α_1, as given by Theorem <ref>. Since the operator Δ^-1_H is a bounded inverse on L^2_⊥(Ω), and ρ_0 and ρ^α_1 are uniformly bounded from below, we can conclude that f = Δ^-1_H (ρ^α_1 - ρ_0) is bounded in WH^1(Ω). This implies that the control u(t,x) satisfies ∫_0^1 ∫_Ω ∑_i=1^m|u_i(t,x)|^2ρ(t,x)dx dt <∞, where ρ(t, ·) = ρ_0 + t (ρ^α_1 -ρ_0) is supported in Ω for all t ∈ [0,1]. By extending u(t,x) and ρ(t,x) trivially by 0 outside the domain Ω we have ∫_0^1 ∫_ℝ^d ∑_i=1^m|u_i(t,x)|^2ρ(t,x)dx dt <∞. Since ρ is continuous in L^2(Ω) with respect to time, it is also narrowly continuous (since t ↦∫_ℝ^d ϕ(x) ρ(t,x)dx is continuous for every continuous function ϕ). It then follows from Corollary <ref> that there exists a probability measure η on ℝ^d ×Γ concentrated on pairs (x,γ), with γ∈ AC^2(0,1;ℝ^d) an absolutely continuous curve, that solve the equation γ̇(t) = ∑_i=1^m u_i(t,γ(t))g_i(γ(t)),  γ(0)=x.

Fix ε>0. For every R>0, there exists α >0 such that ∫_B_R(y)ρ^α_1(x)dx > 1-ε. If there existed a set A ⊆Ω of non-zero Lebesgue measure from which B_R(y) is not reachable, then Lemma <ref> would give ∫_B_R(y)ρ^α_1(x)dx = (e_1)_#η(B_R(y)) ≤ (e_0)_#η(Ω - A) = ∫_Ω - A ρ_0(x)dx = 1 - |A|/|Ω|. Since ε>0 above was arbitrary, choosing ε < |A|/|Ω| results in a contradiction. Hence Reach^-1(B_R(y)) has full Lebesgue measure in Ω.

§.§ Koopman Approach

In this section, we develop a dual point of view, proving essentially the same result as Theorem <ref>. In particular, we look at how observables evolve along the flow, instead of measures as we did in the previous section. We prove the following result. Suppose the restriction of the operator Δ_H to L^2_⊥(Ω) is injective. Then Reach(B_R(y)) has full Lebesgue measure in Ω, for every R>0 and every y ∈Ω.
Let A= Reach(B_R(y)) for some R>0, and some y ∈Ω Additionally, let B= Ω - A. Let f = 1/∫_Adxξ_A - 1/∫_Bdxξ_B, where ξ_A and ξ_B denote the characteristic functions of set A and B, respectively. First, we note that f(e^tg_i(x)) = constant for all t small enough, for all x ∈Ω. Thus, we can compute the intrinsic derivative of f along the flow e^t g_i of the vector-field g_i to get, d/dtf(e^tg_i(x))|_t=0 = 0 for all x ∈Ω. Therefore, 0 is a candidate for the weak derivative X_if of f. But since f is not differentiable in the classical sense, it is not clear if X_if exists as a weak derivative, and whether we can conclude X_if (x)=d/dtf(e^tg_i(x))|_t=0. Toward this end, we follow the idea in <cit.> to relate intrinsic derivatives along vector fields and weak derivatives. We write the expression ∫_Ωf(x)X^*_iϕ(x)dx = -∫_Ω f(x)X(x) ϕ (x)dx - ∫_Ω f(x) ϕ(x) ∑_j=1^d ∂_x_jg_i^j(x)dx Let ϕ∈ C^∞_0(Ω). According to the computation in proof of <cit.>, -∫_Ω f(x)Xϕ(x)dx = -lim_t → 0{∫_Ω1/t[f(e^-tg_i(x))- f(x)]ϕ(y)dy - ∫_Ω f(e^-tg_i(x))ϕ(y)J_t(x)dx } where J^i_t denotes the Jacobian of the flow e^tg_i and e^-t g_i denotes the inverse of the flow. The first term is 0 because of the computation in (<ref>). For the second term, we can apply Lebesgue's theorem as in proof of <cit.> to get -∫_Ω f(x)Xϕ(x)dx = ∫_Ω f(x) ϕ(x) ∑_j=1^d ∂_x_jg_i^j(x)dx Substituting this in (<ref>) we get, ∫_Ωf(x)X^*_iϕ(x)dx =0 Comparing this with the definition of the weak derivative (<ref>) we can conclude that the weak-derivative X_if exists and is equal to 0. Therefore X_if = 0 for all i = 1,...,m. This implies that σ(f,f) is 0, and hence Δ_H f = 0. Thus, Δ_H cannot be injective unless either A or B have measure 0. Since A contains B_R(y), this must imply that Ω - A = B has measure 0. This concludes the proof. Note that in Theorem <ref>, we have assumed less on the operator (injectivity) in comparison with the invertibility requirement in <ref>. One can indeed relax this assumption in Theorem <ref>. However, the proof becomes a bit longer, as one has to approximate the functions ρ_0 and ρ^α_1 in the proof using elements in the range of the operator Δ_H. This can be done because injectivity and self-adjointness imply the range of the operator is dense. However, if the operator is injective, but not invertible, then the Poincaré inequality (<ref>) cannot hold anymore. (The case of Ω =) One cannot expect the operator Δ_H to be invertible on L_2(), as invertibility even fails for the usual Laplacian due to the presence of harmonic functions. In this case, one can instead consider the weighted Laplacian operator, Δ^a_H h = -∑_i=1^mX_i(a(x)X_ih), for a suitable weight function a ∈ L^1(Ω) ∩ L^∞(Ω). Then the Poincaré test becomes ∫_Ω |f(x)|^2a(x)dx ≤1/λ∫_Ω∑_i=1^m |X_i f (x)|^2a(x)dx (The case of compact manifolds without boundary) The results of this note can easily be extended to the case of manifold without boundary. A similar superposition principle as in Lemma <ref> can be found in <cit.>, proved for flows on manifolds. § ACKNOWLDEGEMENTS The author thanks Rohit Gupta, Matthias Kawski and Emmanuel Trélat for helpful comments and suggestions. plain
http://arxiv.org/abs/2405.09350v1
20240515140020
Digging into the ultraviolet luminosity functions of galaxies at high redshifts: galaxies evolution, reionization, and cosmological parameters
[ "Yi-Ying Wang", "Lei Lei", "Shao-Peng Tang", "Guan-Wen Yuan", "Yi-Zhong Fan" ]
astro-ph.CO
[ "astro-ph.CO", "astro-ph.GA", "astro-ph.HE" ]
Yi-Zhong Fan yzfan@pmo.ac.cn 0000-0003-1215-6443]Yi-Ying Wang Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210033, People's Republic of China School of Astronomy and Space Science, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China 0000-0003-4631-1915]Lei Lei Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210033, People's Republic of China School of Astronomy and Space Science, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China 0000-0001-9120-7733]Shao-Peng Tang Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210033, People's Republic of China 0000-0002-4538-8526]Guan-Wen Yuan Department of Physics, University of Trento, Via Sommarive 14, 38123 Povo (TN), Italy Trento Institute for Fundamental Physics and Applications (TIFPA)-INFN, Via Sommarive 14, 38123 Povo (TN), Italy 0000-0002-8966-6911]Yi-Zhong Fan Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210033, People's Republic of China School of Astronomy and Space Science, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China Thanks to the successful performance of the James Webb Space Telescope, our understanding of the epoch of reionization of the Universe has been advanced. The ultraviolet luminosity functions (UV LFs) of galaxies span a wide range of redshift, not only revealing the connection between galaxies and dark matter (DM) halos but also providing the information during reionization. In this work, we develop a model connecting galaxy counts and apparent magnitude based on UV LFs, which incorporates redshift-dependent star formation efficiency (SFE) and corrections for dust attenuation. By synthesizing some observations across the redshift range 4≤ z ≤ 10 from various galaxy surveys, we discern the evolving SFE with increasing redshift and DM halo mass through model fitting. Subsequent analyses indicate that the Thomson scattering optical depth was τ_ e = 0.052^+0.003_-0.002 and the epoch of reionization started (ended) at z=20.58^+6.25_-6.75 (z=5.38^+0.65_-0.70) which is insensitive to the choice of the truncated magnitude of the UV LFs. Incorporating additional dataset and some reasonable constraints, the amplitude of matter perturbation is found to be σ_8=0.79±0.05, which is consistent with the standard ΛCDM model. Future galaxy surveys and the dynamical simulations of galaxy evolution will break the degeneracy between SFE and cosmological parameters, improving the accuracy and the precision of the UV LF model further. § INTRODUCTION Following the formation of the first stars, the Universe emerged from its dark ages as the neutral hydrogen gas gradually became ionized under the illumination of the first lights. During the epoch of reionization, there has been debate regarding the primary energy sources responsible for rendering the universe transparent. A recent study <cit.> found that the contribution of quasars was negligible, providing less than 7% of the total required photons. 
Based on the ultra-deep James Webb Space Telescope (JWST) imaging, <cit.> analyzed eight ultra-faint galaxies and demonstrated that the majority of the necessary photons originated from dwarf galaxies, highlighting the significant connection between reionization and galaxies. However, researches on the evolution of the neutral hydrogen fraction (x_ HI) across different redshifts yielded various results <cit.>, accompanied by notable uncertainties and discrepancies. For instance, <cit.> imposed a constraint of x_ HI>0.9 at z=10.17 whereas <cit.> derived x_ HI<0.88 at z=10.6. The UV luminosity functions (UV LFs) of galaxies span a wide range of redshift from cosmic dawn to cosmic noon. Currently, the observations of UV LFs have extended up to z∼15-16 <cit.>, offering opportunities to explore the epoch of reionization. Since galaxies were formed inside the dark matter (DM) halos, their abundance intricately tied to the distribution of DM halos <cit.>. The UV LFs represent the number density of galaxies at specific luminosity, serving as an observable metric that links galaxies with their DM halos. This connection unveils the underlying physical mechanisms governing galaxy formation and evolution, particularly the star formation efficiency (SFE), which quantifies the effectiveness of converting gas into stars. Based on the assumption that the most massive galaxies inhabit the most massive DM halos, the SFE of central galaxies can be derived by the abundance matching between the DM mass function and UV LFs <cit.>. As DM halo masses increase, the profile of SFE reflects the influence from diversified feedback processes, including AGN feedback <cit.>, supernovae feedback <cit.>, dynamical friction effect and others <cit.>. Only in the local universe around z∼0, the evolution of the SFE with increasing DM halo mass has almost been ascertained (see <cit.> for a review). However, it remains unclear whether the SFE evolves with redshift, and if it does, the nature of its evolutionary trajectory is yet to be known. Nevertheless, by establishing connections between DM halo mass and galaxy luminosity (or mass), UV LFs provide a promising avenue to extend SFE investigations to higher redshift ranges <cit.>. Besides exploring the epoch of reionization and the SFE, UV LFs provide additional capabilities for cosmological constraints <cit.>. At high redshift, UV LFs possess the ability to extract the information within Mpc scales. For instance, <cit.> measured the matter power spectrum p(k) at wavenumbers of 0.5 Mpc^-1 < k < 10 Mpc^-1 to roughly 30% precision by UV LFs at the range of 4 ≤ z ≤ 10. Given that the number density of DM halos is sensitive to the amplitude of mass fluctuations σ_8, UV LFs can be used to constrain σ_8 at high redshift, alleviating the σ_8 tension potentially <cit.>. Furthermore, UV LFs provide a way to examine theoretical scenarios beyond the ΛCDM model, such as the warm DM, the ultralight axion DM and the effect of primordial non-Gaussianities <cit.>. In this work, we analyze the UV LFs data with a universal model to depict the evolution of galaxies and the process of reionization. To mitigate the biases arising from differing cosmological framework[Different observations may based on different cosmological assumptions. For instance, the usual choices include H_0=70 km s^-1 Mpc^-1, Ω_m =0.3 and H_0=67.6 km s^-1 Mpc^-1, Ω_m =0.307.], we rescale the observations and the UV LFs model to the galaxy counts and apparent magnitude. 
Differing from our previous research <cit.>, the function of SFE in this study is redshift-dependent and demonstrates a distinct evolution with increasing redshift. In that case, the feedback strength that suppresses the birth of stellar displays a more continuous variation. The derived UV luminosity density is tightly constrained within a narrow range and is well consistent with findings from other studies. Additionally, by incorporating supplementary observations, we enhance constraints on the entire evolution of x_ HI, the beginning redshift of reionization and the Thomson scattering optical depth τ_ e. We also estimate the values of H_0 and Ω_m using an additional dataset, as these two cosmological parameters can not be effectively constrained by the UV LFs observations alone. Furthermore, we constrain σ_8 within a reasonable range based on plausible assumptions. § METHOD Building on previous works <cit.>, we refine the UV LF model Φ(M_ UV) to elucidate the relationship between the distribution of DM halos and the UV LFs observations. Our analysis spans redshifts ranging from 4 to 10, adopting the DM mass function ϕ(M_ h) characterized by the “Sheth-Mo-Tormen" function <cit.>. The calculations of the transfer function within ϕ(M_ h) are conducted using the CAMB software <cit.>, which accommodates all parameter spaces of the Friedmann-Robertson-Walker models. The mathematical representation of the DM mass function is readily available through the Python package HMFcal <cit.>, which supports computations across multiple cosmological frameworks. Subsequently, the DM mass function is transformed into Φ(M_ UV) using Φ(M_ UV) = ϕ(M_ h)| dM_ h/ dM_ UV|, where | dM_ h/ dM_ UV| is the Jacobian determinant. In the AB magnitude system, M_ UV denotes the intrinsic ultraviolet magnitude <cit.>, which is related to the UV luminosity L_ UV according to the equation M_ UV=-2.5log_10( L_ UV/ erg s^-1)+51.63. The UV luminosity can be straightforwardly derived from the star formation rate (SFR) <cit.>. Employing the Salpeter initial mass function (IMF) <cit.> at a wavelength of λ_0=1500Å, the specific UV luminosity is expressed as L_ UV = 1/1.15×10^28×[ erg s^-1 Hz^-1/yr^-1 M_⊙] × SFR. Specifically, the SFR is contingent upon the total baryonic inflow rate Ṁ_ b into the DM halo and the star formation efficiency (SFE) ϵ. Consequently, the SFR can be formulated as SFR=ϵṀ_b, where Ṁ_ b is defined as Ṁ_ h f_ b, with f_ b being the baryonic fraction Ω_b/Ω_m=0.156 <cit.>. Different from our previous work <cit.>, which focused on calculating the DM accretion rate within the ΛCDM framework, this study employs a more comprehensive accretion model to facilitate the analysis of cosmological parameters. We utilize an alternative analytical method, the extended Press-Schechter (EPS) theory <cit.>, to characterize the DM accretion histories. <cit.> conducted a comparison between two analytical models based on the EPS theory and three empirical models derived from cosmological simulations. Their findings confirmed the accuracy of the differential equation method for describing mass growth (Ṁ_ h= dM_ EPS/ dz) <cit.>. Therefore, we adopt their equations (A5-B7) for calculating mass accretion, incorporating adjustable cosmological parameters. In the redshift range of 4≤ z ≤ 10, we have collected observations of UV LFs from various surveys conducted by multiple telescopes. 
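As a compact illustration of the model chain just described (halo mass function, accretion-driven SFR, UV luminosity, and finally the UV LF via the Jacobian), here is a minimal numerical sketch. It is not the analysis code of this work: the halo mass function and the accretion rate are taken as given input arrays (in the actual analysis they come from HMFcalc and the EPS-based growth equations), the SFE is passed as a generic callable (the parametrization actually adopted is introduced further below), and all numbers in the usage lines are placeholders.

import numpy as np

# Sketch of Phi(M_UV) = phi(M_h) |dM_h/dM_UV| with SFR = eps * f_b * Mdot_h.
# phi_h [Mpc^-3 Msun^-1] and Mdot_h [Msun/yr] are given inputs; the Jacobian
# is taken numerically on the halo-mass grid.
F_B = 0.156                                   # baryon fraction Omega_b / Omega_m

def abs_mag_uv(sfr):
    """AB absolute magnitude from the SFR [Msun/yr] via the Salpeter-IMF calibration."""
    L_uv = sfr / 1.15e-28                     # erg s^-1 Hz^-1
    return -2.5 * np.log10(L_uv) + 51.63

def uv_lf(M_h, phi_h, Mdot_h, z, sfe):
    """Return (M_UV, Phi(M_UV)) [mag, Mpc^-3 mag^-1] on the grid implied by M_h."""
    sfr = sfe(M_h, z) * F_B * Mdot_h          # Msun / yr
    M_uv = abs_mag_uv(sfr)
    jac = np.abs(np.gradient(M_h) / np.gradient(M_uv))    # |dM_h / dM_UV|
    return M_uv, phi_h * jac

# Toy usage with placeholder inputs (illustration only):
M_h = np.logspace(9, 13, 200)                              # Msun
phi_h = 1e-2 * (M_h / 1e10) ** -1.9 * np.exp(-M_h / 5e12) / M_h
Mdot_h = 30.0 * (M_h / 1e12) ** 1.1                        # Msun / yr, toy accretion law
M_uv, Phi_uv = uv_lf(M_h, phi_h, Mdot_h, z=6.0, sfe=lambda M, z: 0.2)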
These include JWST (SMACS0723, GLASS, CEERS, COSMOS, NGDEEP, HUDF, CANUCS and JADES; <cit.>), the Hubble Space Telescope (HST) (the Ultra-Deep Field, the GOODS fields, the Hubble Frontier Fields parallel fields, and all five CANDELS fields; <cit.>) , the Subaru/Hyper Suprime-Cam survey and CFHT Large Area U-band Survey <cit.>, the Visible and Infrared Survey Telescope for Astronomy (VISTA) (COSMOS, XMM-LSS, VIDEO, and Extended Chandra Deep Field South (E-CDFS) field; <cit.>), and the UK Infrared Telescope (UKIRT) (UKIDSS UDS field; <cit.>). Prior to analysis, overlaps in magnitudes, fields, and redshift bins were eliminated to prevent duplication. Nevertheless, the UV flux emitted by galaxies is invariably absorbed by interstellar dust, especially in massive galaxies and at lower redshifts. Due to this dust extinction, such observations cannot be directly integrated into the UV LFs model. Moreover, it is necessary to harmonize all observations within a consistent cosmological framework, as differing constructions of the UV LFs rely on varied cosmological parameters, specifically matter density (Ω_ m) and the Hubble constant (H_0). §.§ Calibrating the observations to intrinsic magnitude At first, we address the issue of dust extinction. For a spectrum modeled as f_λ∼λ^β, the UV-continuum slope β and its magnitude are linearly related as ⟨β⟩ = ( dβ/ dM_ UV)[M_ UV-M_0]+β_0 <cit.>. <cit.> refined the A_ UV-β relationship originally proposed by <cit.>, introducing a Gaussian distribution for β at each M_ UV with a standard deviation σ_β = 0.34. Consequently, the average UV extinction ⟨ A_ UV⟩ can be expressed as ⟨ A_ UV⟩ = C_0 + 0.2 C_1^2ln(10)σ_β^2 + C_1⟨β (z,M_ UV)⟩, where σ_β = 0.34, C_0=4.54, and C_1=2.07 <cit.>. Besides, <cit.> described ⟨β⟩ as ⟨β(z,M_ UV)⟩ = {[ (β_M_0(z)-c)exp[ - dβ/ d M_0(z)[M_ UV -M_0]/β_M_0(z)-c] +c, M_ UV≥ M_0; dβ/ dM_0(z)[M_ UV -M_0] + β_M_0(z), M_ UV<M_0, ] . where c=-2.33, M_0=-19.5. The values for β_M_0 and dβ/ dM_0 were sourced from <cit.>. This formulation allows for the transformation of observed UV LFs into an intrinsic framework by adjusting observed magnitudes, M_ UV( obs), using the relation M_ UV( int)=M_ UV( obs)-⟨ A_ UV⟩ (z,M_ UV). §.§ Eliminating the artificial biases Subsequently, we attempt to mitigate the artificial biases present in the observations of the UV LFs. The UV LFs at a specific magnitude is calculated by dividing the number of detected galaxies by the comoving volume of the surveyed area. In our analysis, one objective is to derive various cosmological parameters from the UV LFs. Accordingly, it is essential to rescale the observations and their associated errors, e.g., Φ_ UV( new) =Φ_ UV( old) ×V_ c( old)/V_ c( new), σ_ UV( new) =σ_ UV( old) ×V_ c( old)/V_ c( new), as suggested by <cit.>. However, this adjustment may prove to be inadequate. Considering a standard likelihood function ℒ = ∏^N_i1/√(2π)σ_ UV,iexp[-1/2(Φ_ model(M_ i) - Φ_ UV,i/σ_ UV,i)^2 ], it is evident that a smaller σ_ UV leads to an increased likelihood value. Consequently, the estimation of cosmological constants such as H_0 or Ω_ m could be significantly underestimated in Bayesian analysis due to the variability of σ_ UV. 
To ensure that the “observations" and their associated errors are insensitive to cosmological parameters and remain constant during Bayesian inference, we transform the “observations" from density Φ_ UV and absolute magnitude M_ UV, to the directly observed quantities: the number N of galaxies and their apparent magnitude m_ UV, i.e, N = V_ c( old) Φ( old), m_ UV =M_ UV( old) + 5log_10(1+z)d_ L( old)/ Mpc + 25, where the old values are based on the cosmological parameters that each UV LFs observation assumed. This treatment also mitigates the bias across various galaxy surveys, which arises from differing assumptions in cosmological frameworks. Consequently, the UV LF model referenced in <ref> and <ref> would be modified accordingly. With the above corrections, the complete conversion is given by N( new) =V_ c( old) Φ_ UV( old)|Δ m_ UV( old)/Δ m_ UV( new)|, σ_N( new) =V_ c( old) σ_ UV( old)|Δ m_ UV( old)/Δ m_ UV( new)|, where m_ UV( new) = M_ UV( old) - ⟨ A_ UV⟩(z,M_ UV( old)) +5log_10(1+z) d_ L( old)/ Mpc+25, |Δ m_ UV( old)/Δ m_ UV( new)| = Δ M_ UV ( old)/Δ M_ UV ( old)-⟨ A_ UV⟩( z, M_ UV^+( old))+ ⟨ A_ UV⟩( z, M_ UV^-( old) ), Δ M_ UV represents the bin width, and M_ UV^+/-( old)=M_ UV±Δ M_ UV/2. §.§ Redshift-dependent star formation efficiency SFE is a critical ingredient in UV LF model, as it directly influences SFR which further impacts UV LFs. Previously, it was widely assumed that SFE was either constant or solely dependent on the mass of DM halos, i.e., ϵ=ϵ_0  or 2ϵ_ N/( M_ h/M_1) ^-β + ( M_ h/M_1)^γ, while being independent of redshift. Recent studies by <cit.> and <cit.>, however, have proposed models where SFE varies with redshift, incorporating parameters such as ϵ_ N, M_1, β, and γ that evolve over time. <ref> provides a comparative overview, illustrating how these parameters evolve with redshift according to two different definitions. Notably, due to significant variability observed in the blue lines of the diagram, we adopted definitions of F_2(z) where both γ and β vary with redshift. Consequently, the formulation of these SFE parameters is expressed as par(z)=10^par_i(1+z/1+z_*)^par_s, where z_* is the specific redshift, and par_i, z_* and par_s are the free variables for each SFE parameter. §.§ Bayesian analysis With the comprehensive UV LF model in hand, we aim to investigate the evolution of SFE and cosmological parameters. To accommodate the asymmetric and unequal error bars, we employed an asymmetric Gaussian distribution <cit.> as the likelihood function over the redshift range from 4 to 10, i.e., Likelihood = ∏^N_i AN(f(x_i)-y_i | c_i, d_i), where (x_i, y_i) are the observational data points, f(x_i) represents the predicted values, d_i serves as a scale parameter, and c_i is the asymmetry parameter. The priors for all primary parameters are detailed in <ref>. For an optimal balance between accuracy and efficiency, we implemented the nested sampling method, utilizing Pymultinest for Bayesian parameter estimation. We configured the analysis with 1000 live points[Preliminary tests with 500 live points yielded identical results. Given the recommendation to set the number of live points higher than the dimensionality of the parameter space <cit.>, our configuration ensures adequate convergence.] and an evidence tolerance of 0.5 to terminate the sampling process. § RESULTS In this section, we present the fitting results under two scenarios. Initially, all parameters listed in <ref> are evaluated within ΛCDM framework <cit.>. 
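The conversion just described can be summarized in a few lines; the sketch below is illustrative rather than the pipeline actually used. The dust relation ⟨A_UV⟩(z, M_UV) is passed in as a callable implementing the calibration subsection above, V_c(old) is the survey comoving volume in the originally assumed cosmology, and the default H_0 = 70 km s^-1 Mpc^-1, Ω_m = 0.3 simply echoes one of the common choices quoted earlier; astropy is used only to recover the "old" luminosity distance.

import numpy as np
from astropy.cosmology import FlatLambdaCDM

# Map each published LF point (Phi_old, M_old, sigma_old) in a bin of width dM
# back to a galaxy count N and an apparent magnitude m_UV, using the cosmology
# the original survey assumed, so the data stay fixed while the model varies.
def to_observables(Phi_old, sigma_old, M_old, dM, z, V_c_old, A_uv,
                   H0_old=70.0, Om0_old=0.3):
    d_L = FlatLambdaCDM(H0=H0_old, Om0=Om0_old).luminosity_distance(z).to('Mpc').value

    # |Delta m_old / Delta m_new|: bin-width change induced by the
    # magnitude-dependent dust correction.
    jac = dM / (dM - A_uv(M_old + dM / 2.0) + A_uv(M_old - dM / 2.0))

    N = V_c_old * Phi_old * np.abs(jac)
    sigma_N = V_c_old * sigma_old * np.abs(jac)
    m_uv = M_old - A_uv(M_old) + 5.0 * np.log10((1.0 + z) * d_L) + 25.0
    return N, sigma_N, m_uv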
This allows for an in-depth exploration of various reionization processes through the analysis of the derived UV luminosity density. Then, we assume σ_8 to be free and consider several additional datasets to evaluate the capability of constraining cosmological parameters by UV LFs. §.§ The Evolution of SFE and UV LFs As shown in <ref>, we present the best-fit UV LFs and SFE, alongside the observations. The corresponding posterior distributions are shown in <ref>[It is worth noticing that we have tested the assumption that the SFE do not evolve with redshift (par_s=0). The (logarithmic) Bayes factor of the redshift-evolved SFE scenario compared to the non-evolving scenario is lnℬ= 473.9, which is strongly in favor of the redshift-evolution of the SFE.]. For a better view, we present the UV LFs rather than the numbers of galaxies that are used practically. It is evident that the UV LF model fits well with the observations except for the data points at the bright end, which may be attributed to the contribution from active galactic nuclear <cit.> in addition to the galaxy UV LFs. The corresponding distributions of the evolved SFE are consistent with other researches (such as Fig. 2 in <cit.>). The SFE peaks at M_1∼ 10^11.67M_⊙ for z=4 and 10^13.11 M_⊙ for z=10 (note that the latter is only obtained for extrapolation since the extremely bright end of the UV LFs has not been detected at z≥8). At low halo masses, the profile of SFE is determined by β, which remains almost constant with β∼ 0.50 across wide redshift range. Conversely, at high masses, γ dominates the profile which varies from ∼ 0.44 to ∼ 1.39, resulting in a notable steepening of the SFE. However, at z≥8, due to the absence of UV LFs at the bright end, the SFE profile at M_1≥ 10^12 M_⊙ can not be well constrained. Consequently, it remains inconclusive whether there is a reliable evolution of the SFE at high masses. Based on the fitting results of SFE, the entire UV LFs in the redshift range of 4≤ z ≤ 10 can be described. By integrating over a proper range of the absolute magnitude, the UV luminosity densities ρ_ can be calculated by ρ_ UV =∫^M_ trunc_-∞Φ_ UV(M)M_ UV(M) dM, where M_ trunc denotes the truncation magnitude of the UV LFs. In general, M_ trunc =-17 corresponds to the SFR of 0.3 M_⊙ yr^-1 <cit.>. Additionally, we consider another truncated limit with M_ trunc =-15 following <cit.> and <cit.>. As shown in <ref>, the UV luminosity densities derived from the fitted UV LFs are in accordance with other observations. The empirical equation Eq.(15) in <cit.> fits well with the observations at low redshifts (i.e., z≤ 6.5) but tends to overestimate the UV luminosity density and the SFR density at higher redshifts. §.§ Properties of the Reionization Since the UV luminosity density covers the range of 4≤ z ≤ 10, it can be used to analyze the process of reionization. <cit.> described the evolution of the ionized hydrogen fraction 𝒬_ H_II as 𝒬̇_ H_II =ṅ_ ion/⟨ n_ H⟩ - 𝒬_ H_II/t_ tec, ṅ_ ion ≡⟨ f_ escξ_ ion⟩ρ_ UV, ⟨ n_ H⟩ = X_ pΩ_ bρ_ c/m_ H, t_ rec = 1/C_ H_IIα_ B(T) (1+Y_ p/4X_ p) ⟨ n_ H⟩ (1+z)^3, where ⟨ n_ H⟩ is the mean hydrogen number density, ṅ_ ion is the production rate of ionizing photons, and t_ tec is the average recombination time in the IGM. 
Other parameters are defined as follows: the primordial mass fraction of hydrogen X_ p=0.75, the primordial helium abundance Y_ p=1-X_ p, the critical mass density ρ_c=8.535 × 10^-30 g cm^-3, the mass of hydrogen atom m_ H=1.66 × 10^-24 g, the coefficient for case B recombination α_ B=2.6× 10^-13 cm^3 s^-1 <cit.>, and the clumping factor C_ H_II = 2.9 ×[(1+z)/6]^-1.1. The integral product of the ionizing photon production efficiency ξ_ ion and the escape fraction f_ esc are regarded as variables during model fitting. The evolution of 𝒬_ H_II can be depicted after considering two extra boundary conditions, 𝒬_ H_II (z_0) =1 and 𝒬_ H_II (z_ max) =0, where z_0 and z_ max are the redshift at the beginning and the end of the reionization, respectively. Whereafter, the evolution of 𝒬_ H_II can be constrained by the observations of the neutral hydrogen fraction x_ H_I (x_ H_I=1-𝒬_ H_II). These additional data were obtained through various methods, including analysis of the Lyα and Lyβ forests of quasars, distributions of Lyα equivalent width, adsorptions in Lyα damping wings, fractions of Lyα emitters, the afterglow spectrum of the gamma-ray burst, and the Gunn Peterson troughs. All of the observation results are summarized in <ref>. Besides, the Thomson optical depth to microwave background can be obtained by integrating 𝒬_ H_II, τ_e(z)=∫^z_0c(1+z^')^2/H(z')𝒬_ H_IIσ_ T⟨ n_ H⟩( 1+ηY_ p/4X_ p) dz', where σ_ T is the Thomson cross-section (σ_ T=6.65×10^-29 m^2), η=1 at z>4 and η=2 at z≤ 4. Thus, the observation of τ_ e (τ_ e = 0.054±0.007) can be used to constrain 𝒬_ H_II. The Bayesian analysis follows the same process described in <ref>. For these upper and lower limits, the likelihood functions are assumed to be uniform within the limited range and to be a half-Gaussian beyond the boundary <cit.>. The prior distributions and the posterior results of the free parameters are listed in <ref>. Using the ρ_ UV derived from UV LFs (at z>10, the ρ_ UV is extrapolated), the estimated evolution of x_ H_I is shown in <ref>. Furthermore, we find out that the reionization started at 20.58^+6.25_-6.75 (20.33^+6.20_-6.69) and ended at 5.38^+0.65_-0.70 (5.52^+0.60_-0.79) when M_ trunc is fixed to -17 (-15). Interestingly, such results are insensitive on the choice of M_ trunc. Since <ref> describes the relation between 𝒬_ H_II and τ_ e, we evaluate τ by fitting observations of x_ H_I listed in <ref>. Using ρ_ UV with M_ trunc =-17, we present the extrapolated projections of τ_ e in <ref>. Notably, our estimation is consistent with the result of <cit.>, demonstrating the validity and reliability of our model. §.§ The constraints of the cosmological parameters As discussed in <ref>, the luminosity of galaxies relies on a specific cosmological framework, particularly H_0, Ω_ m and σ_8. However, these parameters degenerate with the parameters of SFE apparently, since SFE dictate SFR, σ_8 governs the relative density of DM halos, and Ω_ m and H_0 influence the density of galaxies through regulating H(z). Therefore, additional constraints or supplementary dataset are required to narrow down the posterior parameter spaces within physical boundaries, especially for H_0 and Ω_ m because of their insensitivity to UV LFs. In light of this, we refer to the previous works of both <cit.> and <cit.>, considering three comparative cases: Case 1: To estimate σ_8, β and γ are assumed to be redshift-independence (i.e., β_s=0 and γ_s=0). Case 2: To estimate σ_8, γ is assumed to be redshift-independence (i.e., γ_s=0). 
Case 3: To estimate σ_8, Ω_ m and H_0. Pantheon+ dataset <cit.> and a prior constraint on Ω_ b <cit.> are incorporated, and γ is assumed to be redshift-independence (i.e., γ_s=0). The prior of the absolute B-band magnitude for the fiducial SN Ia is constrained within [-20,-18] and the prior of Ω_ b follows the Gaussian distribution with μ=0.2233 and σ=0.00036. The priors of H_0 and Ω_m follow a Uniform distribution within [50,90] and [0.05,0.99], respectively. Additionally, the maximum values of SFE are limited in [0.01, 0.5], which well covers the typical value ∼ 0.2-0.3 found in previous analysis <cit.>. Furthermore, the parameter M_1,s is bounded within [-3,3], following the constraints from <cit.>. The posterior distributions of estimated σ_8 are shown in <ref>, which are more consistent with that obtained from early universe measurement. In case 3, the contours of H_0, Ω_ m, and σ_8 are depicted in <ref>, resembling the results of <cit.>. Because we utilize a larger amount of data and align the observations within the same framework, our result of σ_8 = 0.787^+0.048_-0.047 reduces the uncertainty by ∼ 60% compared to the estimation (i.e., σ_8=0.76^+0.12_-0.14) from <cit.>. The complete posterior results of these three cases are shown in <ref>. It is worth noting that all estimated parameters are well converged except for M_1,s, which tends to converge towards a higher value close to the upper limit as <cit.> derived. Furthermore, due to the insensitivity to UV LF, the constraints on H_0 and Ω_ m are not significantly improved in comparison to that derived by sole Pantheon+ dataset. § CONCLUSIONS AND DISCUSSIONS Over the passed several years, numerous/various galaxy surveys accumulated a wealth of observations. Especially, the observed UV LFs span a wide range of redshift, from the local universe to very high-redshift (z∼16), providing the opportunity to study the epochs of cosmic dawn, reionization, and cosmic noon. In this work, we have developed a general UV LF model incorporating a redshift-dependent SFE and an alterable cosmological framework to explore the evolution of SFE, the process of reionization, and several cosmological parameters. By ‘‘correcting“ the observations to eliminate the effect of dust attenuation and reconcile the discrepancies of different cosmological frameworks, our results have higher precision compared with other works. Under the framework of ΛCDM, the UV LFs within the redshift range of z=4-10 can constrain the evolution of SFE stringently. In <ref>, the profile of SFE shows a clear tendency of evolution with redshift, particularly in the low mass range. The corresponding mass of the DM halos (∼ 10^12 M_⊙) at maximum SFE (∼ 20%) presents a shift towards higher values, although further investigation is needed/warranted. Since the variable feedback mechanism and the environmental factors influence the relative strength of SFE, disentangling the contributions of each component solely from UV LFs poses challenges. Fortunately, dynamical simulations provide comparisons for these elements<cit.>, including the IMF, stellar radiation, stellar winds, supernovae, AGN and others. Therefore, once the profile of SFE can be constrained by other methods beyond UV LFs, the cosmological researches relying on UV LFs can yield more precise results, as the intrinsic degeneracy between SFE and cosmological parameters is alleviated. 
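Before summarizing the reionization results, it may help to recap the 𝒬_HII and τ_e machinery of the previous section in executable form. The sketch below is an illustration, not the fitting code: ρ_UV(z) and ⟨f_esc ξ_ion⟩ are placeholders (in the analysis they come from the fitted UV LFs and the Bayesian inference), and the cosmological numbers are assumed values.

import numpy as np
from scipy.integrate import solve_ivp

# Integrate dQ_HII/dt = n_ion_dot/<n_H> - Q_HII/t_rec in redshift and then
# accumulate the Thomson optical depth tau_e by direct quadrature.
X_P, Y_P = 0.75, 0.25
RHO_CRIT, M_H = 8.535e-30, 1.66e-24          # g cm^-3, g
ALPHA_B, SIGMA_T = 2.6e-13, 6.65e-25         # cm^3 s^-1, cm^2
C_CM, MPC_CM = 2.998e10, 3.086e24
OMEGA_B, OMEGA_M, H0 = 0.049, 0.31, 67.4     # assumed cosmology
H0_S = H0 * 1.0e5 / MPC_CM                   # s^-1

n_H = X_P * OMEGA_B * RHO_CRIT / M_H                          # comoving, cm^-3
H = lambda z: H0_S * np.sqrt(OMEGA_M * (1 + z) ** 3 + 1 - OMEGA_M)
t_rec = lambda z: 1.0 / (2.9 * ((1 + z) / 6.0) ** -1.1 * ALPHA_B
                         * (1 + Y_P / (4 * X_P)) * n_H * (1 + z) ** 3)
rho_uv = lambda z: 10.0 ** (26.2 - 0.11 * (z - 6.0))          # placeholder trend
FESC_XI = 10.0 ** 24.5                                        # placeholder <f_esc xi_ion>

def dQ_dz(z, Q):
    n_ion_dot = FESC_XI * rho_uv(z) / MPC_CM ** 3             # photons s^-1 cm^-3
    dQ_dt = n_ion_dot / n_H - Q[0] / t_rec(z)
    if Q[0] >= 1.0 and dQ_dt > 0.0:                           # cap at full ionization
        dQ_dt = 0.0
    return [-dQ_dt / ((1.0 + z) * H(z))]                      # dt/dz = -1/[(1+z)H]

sol = solve_ivp(dQ_dz, (30.0, 0.0), [0.0], dense_output=True, max_step=0.1)
Q = lambda z: np.clip(sol.sol(z)[0], 0.0, 1.0)

zs = np.linspace(0.0, 30.0, 3001)
eta = np.where(zs > 4.0, 1.0, 2.0)
integrand = C_CM * (1 + zs) ** 2 / H(zs) * Q(zs) * SIGMA_T * n_H * (1 + eta * Y_P / (4 * X_P))
tau_e = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zs)))
print('x_HI(z=7) ~', 1.0 - Q(7.0), '   tau_e ~', tau_e)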
Furthermore, based on the derived UV luminosity density and abundant observations of the IGM neutral fraction, the beginning and ending redshifts of the reionization epoch are constrained to 20.58^+6.25_-6.75 (20.33^+6.20_-6.69) and 5.38^+0.65_-0.70 (5.52^+0.60_-0.79) with M_ trunc=-17 and -15, respectively. If considering the observations of UV LFs at z>10, ρ_ UV will be much higher than the extrapolated ones <cit.>, but does not impact <ref> notably. We validate the reliability of our model by comparing the inferred Thomson scattering optical depth τ_ e with the result of <cit.>. Nonetheless, our model is applicable only to the nonlinear regime and not to regimes with z>10, as uncertainties in stellar populations and dust attenuation lead to large uncertainties of SFE <cit.> and other parameters. On the other hand, we attempt to analyze cosmology by UV LF observations. Similar to previous works, we find that only introducing some reasonable and additional information, the cosmological parameters can be constrained within physically meaningful scopes. Within a specific parameter space, we obtain σ_8=0.815^+0.026_-0.030, which is consistent with the ΛCDM framework. Moreover, following <cit.> and employing the same dataset and parameter spaces, the inferred σ_8=0.784^+0.051_-0.048 has a better precision, improved by ∼ 2-3 times compared to the results reported by <cit.>. It is in line with expectations since σ_8 is constrained by UV LFs mainly and the number of UV LF observations we used is over four times greater than they used, thereby reducing the error to ∼ 0.12/√(4). Besides the increase of data points, the expansion of galaxy survey areas decrease the uncertainties of cosmic variance and then improve the accuracy of cosmology analysis. In the foreseeable future, various galaxy survey projects will make remarkable progresses. The Large Synoptic Survey Telescope <cit.>, the Roman Space Telescope <cit.>, Euclid <cit.>, the Extremely Large Telescope <cit.>, and the China Space Station Telescope <cit.> will explore the Universe in deep field, providing extensive observational data with sufficient precision and covering a wide range of redshift. Furthermore, with the improving accuracy of dynamical simulation, the profile of SFE will be understood in depth, thereby breaking the intrinsic degeneracy between the SFE parameters and cosmological parameters. Consequently, through future observations of UV LFs and additional simulated constraints, there will be opportunities to analyze cosmological model, galaxy evolution, and the epoch of reionization with greater precision and to a more complete understanding in the future. § ACKNOWLEDGEMENTS This work is supported in part by NSFC under grants of No. 11921003, No. 12233011 and 12303056; S.-P. T. acknowledges support from the General Fund (No. 2023M733736) of the China Postdoctoral Science Foundation; G.-W. Y. acknowledges support from the University of Trento and the Provincia Autonoma di Trento (PAT, Autonomous Province of Trento) through the UniTrento Internal Call for Research 2023 grant “Searching for Dark Energy off the beaten track” (DARKTRACK, grant agreement no.E63C22000500003, PI: Sunny Vagnozzi). 
Software: Pymultinest (<cit.>, version 2.11, <https://pypi.org/project/pymultinest/>), HMFcalc (<cit.>, <https://github.com/halomod/hmf/>) § THE POSTERIOR DISTRIBUTIONS OF THE PARAMETERS OF THE SFE MODEL Here, we present the full posterior distributions of the SFE parameters mentioned in <ref> and the detailed results for Case 1, Case 2, and Case 3 in <ref>.
http://arxiv.org/abs/2405.09345v1
20240515135314
Comparative Performance of Fluorite-Structured Materials for Nanosupercapacitor Applications
[ "Grégoire Magagnin", "Jordan Bouaziz", "Martine Le Berre", "Sara Gonzalez", "Damien Deleruyelle", "Bertrand Vilquin" ]
physics.app-ph
[ "physics.app-ph", "cond-mat.mtrl-sci" ]
]Comparative Performance of Fluorite-Structured Materials for Nanosupercapacitor Applications Ecole Centrale de Lyon, INSA Lyon, CNRS, Universite Claude Bernard Lyon 1, CPE Lyon, INL, UMR5270, 69130 Ecully, France Ecole Centrale de Lyon, INSA Lyon, CNRS, Universite Claude Bernard Lyon 1, CPE Lyon, INL, UMR5270, 69130 Ecully, France jordan.bouaziz@ec-lyon.fr INSA Lyon, Ecole Centrale de Lyon, CNRS, Universite Claude Bernard Lyon 1, CPE Lyon, INL, UMR5270, 69621 Villeurbanne, France CNRS, INSA Lyon, Ecole Centrale de Lyon, Universite Claude Bernard Lyon 1, CPE Lyon, INL, UMR5270, INSA, Bât. Irène Joliot-Curie, 3 rue Enrico Fermi, F-69621 Villeurbanne Cedex, France INSA Lyon, Ecole Centrale de Lyon, CNRS, Universite Claude Bernard Lyon 1, CPE Lyon, INL, UMR5270, 69621 Villeurbanne, France Ecole Centrale de Lyon, INSA Lyon, CNRS, Universite Claude Bernard Lyon 1, CPE Lyon, INL, UMR5270, 69130 Ecully, France Over the last fifteen years, ferroelectric and antiferroelectric ultra thin films based on fluorite-structured materials have drawn significant attention for a wide variety of applications requiring high integration density. Antiferroelectric ZrO_2, in particular, holds significant promise for nanosupercapacitors, owing to its potential for high energy storage density (ESD) and high efficiency (η). This work assesses the potential of high-performance Hf_1-xZr_xO_2 thin films encapsulated by TiN electrodes that show linear dielectric (LD), ferroelectric (FE), and antiferroelectric (AFE) behavior. Oxides on silicon are grown by magnetron sputtering and plasma-enhanced atomic layer deposition. The electric behavior of the selected samples is enlightened by the corresponding crystalline phases observed in X-ray diffraction. A wake-up (WU) effect is observed for AFE ZrO_2, a phenomenon that has been barely reported for pure zirconium oxide and AFE materials in general. This WU effect is correlated to the disappearance of the pinched hysteresis loop (related to the superposition of a double peak in I-V curves) commonly observed for Zr-doped HfO_2 thin films after WU. ESD and η are compared for FE, AFE, and LD samples at the same electrical field (3.5MV/cm). As expected, ESD is higher for the FE sample (95J/cm^3), but η is ridiculously small (≈ 55), because of the opening of the FE hysteresis curve inducing high loss. Conversely, LD samples exhibit the highest efficiency (nearly 100), at the expense of a lower ESD. AFE ZrO_2 thin film strikes a balance between FE and LD behavior, showing reduced losses compared to the FE sample but an ESD as high as 52J/cm^3 at 3.5MV/cm. This value can be further increased up to 84J/cm^3 at a higher electrical field (4.0MV/cm), with an η of 75, among the highest values reported for fluorite-structured materials, offering promising perspectives for future optimization. § INTRODUCTION Immediate remedies are essential to address the challenges posed by the exponential increase of energy consumption. Specifically, pivotal technologies related to the fourth industrial revolution—such as the Internet of Things and Big Data—are witnessing an exponential surge in energy consumption linked to the storage, processing, and transmission of digital information <cit.>. Harnessing the potential of ferroelectric (FE) and antiferroelectric (AFE) materials compatible with complementary metal–oxide–semiconductor (CMOS) technology is a compelling strategy for the creation of energy-efficient electronic devices more specifically for energy conversion applications. 
The term "Fluorite structure" denotes a prevalent pattern observed in compounds represented by the formula MX_2. In this arrangement, the X ions are situated in the eight tetrahedral interstitial sites, while the M ions occupy the regular sites within a face-centered cubic structure. This structural configuration is commonly observed in various compounds, notably the mineral fluorite (CaF_2), which gave its name to the structure. Typical fluorite-structured ferroelectrics and antiferroelectrics are respectively doped hafnium oxide (HfO_2) and zirconium oxide (ZrO_2). HfO_2 has been used since 2007 by Intel as a high-k (high dielectric constant) material in the gate stack of MOS (metal oxide semiconductor) transistors <cit.>, while ZrO_2 is also widely used to fabricate DRAM cells <cit.>. In 2011, a ferroelectric phase in Si-doped HfO_2 was first reported <cit.>, followed by the discovery of ferroelectricity in a solid solution of Hf_0.5Zr_0.5O_2 (HZO) the same year <cit.>, paving the way for the re-introduction of ferroelectric materials in the existing CMOS technology. At atmospheric pressure, bulk HfO_2 and ZrO_2 have centrosymmetric non-polar crystal structures <cit.>. However, under certain conditions of TiN encapsulation, doping and/or film stress, it is possible to stabilize a metastable orthorhombic phase, which gives rise to ferroelectricity (FE) in HfO_2 and in HfO_2-ZrO_2 thin film solid solutions <cit.>. Contrary to antiferroelectricity originating from anti-parallel dipole arrangements, the origin of the AFE properties in ZrO_2 is attributed to the electric field-induced structural phase transitions between the non-polar tetragonal phase and the polar orthorhombic phase, resulting in a double hysteresis polarization loop (PE loop) <cit.>. FE and AFE nanosupercapacitors can be used for solid-state electrostatic energy storage <cit.>, and high energy storage performance has been reported in ferroelectric HfO_2 or ZrO_2 films <cit.>. The field-induced transitions observed in AFE ZrO_2 hold promise for energy storage applications. However, several physical parameters of the fluorite films can limit the energy storage performance. Ferroelectric thin films exhibit a "wake-up" (WU) effect, which corresponds to an increase of the remnant polarization with cycling. This effect depends on the amplitude and frequency of the cyclic applied voltage stress <cit.>. Another limiting factor is the film thickness scaling. Fluorite films have limited energy storage scaling properties due to the increase of the monoclinic phase proportion at large thicknesses <cit.>. Ferroelectrics (FE) excel in achieving high polarization, leading to high energy storage density (ESD). However, they have rather low efficiency, while linear dielectrics (LD) demonstrate remarkable efficiency but low polarization <cit.>. In the context of prior research, it becomes evident that antiferroelectrics (AFE) offer an optimal balance, combining elevated ESD and higher efficiency. Surprisingly, the exploration of these distinct attributes within the same material, thickness, and capacitor structures has been notably limited. In this context, this work aims to address this gap, employing ZrO_2 and HZO as prototype materials for a comprehensive investigation. This study proposes a performance assessment and comparison between AFE, FE and LD fluorites for nanosupercapacitor applications. 
Fluorite thin films were grown with the same chemical composition but with different deposition techniques and parameters, leading to different nanosupercapacitor electrical properties, from LD to FE and AFE. AFE fluorite thin films exhibit beyond state-of-the-art energy storage capabilities. § MATERIALS AND METHODS Capacitors were fabricated on p-doped Si (001) substrates and follow the stack Pt/TiN/oxide/TiN/Si. Details of the deposition processes for the oxides are summarized in table <ref>. Sputtering is performed in an AC450 magnetron sputtering chamber from Alliance Concept, while Plasma-Enhanced Atomic Layer Deposition (PEALD) is carried out with a Fiji F200 apparatus from Ultratech. The first step of the ZrO_2 PEALD deposition consists of the application of an O_2 plasma on the TiN bottom electrode at 300 °C, before opening the valve of the TDMA-Zr precursor. Alternation of a complex sequence, notably including the TDMA-Zr valve opening and the dioxygen plasma, is then performed to grow the AFE ZrO_2 thin film. Sputtering is performed for TiN and Pt using metallic targets of Ti and Pt. Pt/TiN top electrodes are obtained after a photolithography and lift-off process. Rapid thermal annealing (RTA) was then performed for all samples. Samples were then investigated by means of physical and electrical characterization. Glancing Incidence X-ray Diffraction (GIXRD) was performed on a Smartlab Rigaku diffractometer using a 9kW copper rotating anode, a parabolic multilayer mirror for parallel beam setting, a Ni filter for CuK_α radiation selection, a 0.114° aperture parallel slit analyser, and a 0D scintillating counter. The thickness of all thin films was measured by X-Ray Reflectivity (XRR) with the same instrument. Electrical characterization was carried out on 50 and 20 µm diameter capacitors using a probe station in a Faraday cage and a setup composed of a Keithley 4200SCS equipped with PMU. Endurance tests were performed using a custom program interfaced with the Keithley. The cycling sequence consists of bipolar voltage square pulses (commonly called a set/reset sequence) until breakdown, at 3.5 to 4.5 depending on the film thickness, with 20 pulse duration. Polarization as a function of electric field (P-E) curves are established from a measurement sequence consisting of 3 triangle pulses (DHM: Dynamic Hysteresis Measurement) with a 60 rise time. The first voltage pulse poles the polarization in a given pre-set direction, while the two other pulses measure the current response. The pulse amplitude is set to obtain a 3.5MV/cm electrical field for each sample, and for all figures, except figure <ref>, where a maximum electrical field of 4.0MV/cm is applied in order to maximize the ESD of the AFE ZrO_2. § RESULTS AND DISCUSSION The field-induced electrical properties of the fluorite thin film capacitors grown by sputtering and ALD were first examined. Figure <ref> shows the electrical characteristics of the capacitors, comparing FE and AFE samples with LD ones of the same chemical composition, at the same applied electric field of 3 MV/cm. On Figure <ref>, FE, AFE and LD properties of HZO and ZrO_2 are shown after 10^3 cycles. Polarization versus electrical field (solid curves) and current versus voltage (dashed curves) are systematically shown for each studied sample. For the LD samples on Figure <ref>(c) and (d), as the dielectric capacitor is charging or discharging, the current is different from zero, leading to a non-zero electric displacement field for non-zero electric field. 
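To make the DHM procedure described in the Materials and Methods section concrete, the following minimal Python sketch shows how one branch of a P-E loop can be reconstructed by integrating the measured current over time and normalizing by the electrode area. The pad diameter, film thickness and the synthetic current trace are hypothetical placeholders (not values or code from this work), and in practice the pre-poling pulse and leakage/background subtraction must also be handled.

import numpy as np

# Hypothetical example values (not from this work): a 50 um diameter pad on a
# ~10 nm thick film; replace the synthetic arrays with the measured DHM data.
diameter = 50e-6                      # electrode diameter (m), assumed
area = np.pi * (diameter / 2.0) ** 2  # electrode area (m^2)
thickness = 10e-9                     # film thickness (m), assumed

# t_s: time (s), v_v: applied triangular voltage (V), i_a: measured current (A).
t_s = np.linspace(0.0, 120e-6, 2001)
v_v = 3.5 * (1.0 - np.abs(2.0 * t_s / t_s[-1] - 1.0))   # synthetic triangle pulse
i_a = np.gradient(v_v, t_s) * 1e-12                     # synthetic 1 pF dielectric response

# Polarization change: P(t) = (1/A) * cumulative integral of I dt.
charge = np.concatenate(([0.0], np.cumsum(0.5 * (i_a[1:] + i_a[:-1]) * np.diff(t_s))))
p_uC_cm2 = charge / area * 1e2        # C/m^2 -> uC/cm^2
e_MV_cm = (v_v / thickness) * 1e-8    # V/m -> MV/cm

# (e_MV_cm, p_uC_cm2) traces one branch of a P-E loop like those discussed in the text.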
For such LD samples, a linear relationship between polarization and the applied electric field is therefore expected. Due to leakage currents, the P-E loop is not totally closed, which leads to an energy storage efficiency close to but less than 100% (the theoretical value for a perfect LD). For the FE HZO on Figure <ref>(a), after 10^3 cycles, two peaks can be observed: one at positive voltages and one at negative voltages. These peaks correspond to polarization switching peaks, due to the FE nature of the film, attributed to the displacement of oxygen ions in HZO <cit.>. The remnant polarization of the FE HZO is 23 µC/cm^2 on the positive side and 24 µC/cm^2 on the negative side. The small asymmetry is attributed to the possible oxidation state of the top <cit.> or bottom <cit.> TiN electrode. Finally, for the AFE ZrO_2 sample, a state-of-the-art curve is observed after 10^3 cycles. A threshold field of about 1.0MV/cm can be seen for the transition from LD to FE behavior, corresponding to the field-induced phase transition assumed for ZrO_2 <cit.>. A very sharp linear opening is present below 1.0MV/cm, followed by a narrow hysteresis loop above, with a saturation polarization P_s as high as 20.5 µC/cm^2. Structural characterization measurements are then conducted on each sample to elucidate the origin of the functional properties allowing for energy storage. Figure <ref> shows the XRD scans for all FE, AFE and LD samples. The FE HZO sample exhibits a distinct orthorhombic/tetragonal (o/t) peak, with (111) orientation for the o-phase and (101) orientation for the t-phase, around a 2θ value of 30.5°. The non-centrosymmetric o-phase is typically considered the phase responsible for ferroelectricity. However, it also shows the presence of the monoclinic (m-) phase, which is centrosymmetric. The mixture of o/t phases is attributed to HZO, while only the (101)-oriented t-phase is attributed to ZrO_2 for the peak around 30.5°, considering current literature explanations <cit.>. Properties of HZO thin films (approximately 10 nm thick), synthesized via reactive magnetron sputtering from a Hf/Zr metallic target <cit.> on a TiN layer, were explored. GIXRD measurements revealed that, depending on the deposition working pressure in the chamber – low pressure (LP) (5e-3mbar) or high pressure (HP) (5e-2mbar) – the thin films were either monoclinic or amorphous after deposition. The ZrO_2 films grown by ALD <cit.> are amorphous after deposition. Amorphous samples exhibit the o/t peak after Rapid Thermal Annealing (RTA), while monoclinic samples remain in their monoclinic structure after RTA. For the HfO_2/ZrO_2 ceramic target and the Zr metallic target used in this study (respectively non-reactive and reactive magnetron sputtering), regardless of the pressure, the HZO and ZrO_2 thin films are amorphous after deposition. After RTA, films obtained at both pressures exhibit the o/t peak, but only the HZO HP samples also show some monoclinic peaks, whereas LP samples only show the o/t peak (not shown in this paper). It was demonstrated that HZO LP samples are only tetragonal <cit.>, while HP samples show a mixture of tetragonal and orthorhombic phases (in addition to the monoclinic phase). While all samples present the o/t peak in their GIXRD scans, their electrical behaviors are vastly different. Our observations tend to show that observing a peak around 30.5° is a necessary condition to have FE or AFE properties, but it is not a sufficient condition. 
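As a side check on where the o/t reflection is expected to appear, Bragg's law with Cu Kα radiation can be evaluated for an indicative interplanar spacing of the o(111)/t(101) planes; the d-spacing of about 2.95 Å used below is an assumed, literature-style value chosen for illustration only, not a measurement from this work.

import math

wavelength = 1.5406   # Cu K-alpha1 wavelength (angstrom)
d_spacing = 2.95      # assumed o(111)/t(101) interplanar spacing (angstrom)

# Bragg's law: lambda = 2 d sin(theta), so 2*theta = 2 asin(lambda / (2 d)).
two_theta = 2.0 * math.degrees(math.asin(wavelength / (2.0 * d_spacing)))
print(f"expected o/t peak position: 2theta ~ {two_theta:.1f} deg")   # ~30.3 deg, near the observed 30.5 deg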
Structural measurements alone are generally insufficient to clearly identify the electrical nature of the thin films. Therefore, electrical characterizations in Figure <ref> are needed to conclude about the electric nature of the HZO and ZrO_2 capacitors. Endurance tests were also performed and a wake-up effect was observed. Historically, the first observation of a wake-up effect dates back to Sim et al. <cit.>. However, in Sim et al. article, the increase in P_r seems to be attributable to the increase in leakage current (fatigue phenomena) at increasing cycling counts. There is no evidence that this effect could be similar to that observed for HfO_2. In 2011, Wu et al. <cit.> continued the work of Sim et al. They coined, for the first time, the increase in Pr with the number of cycles as the "wake-up" effect. However, here again, this so-called "wake-up" effect can be attributed to the increase in leaks and phenomena of modification of space charge. In the same article, it is also noted that if the frequency decreases, P_r increases. In 2012, finally, Mueller et al. <cit.> released the first article discussing the "wake-up" effect on Si:HfO_2. They named this effect the wake-up effect in reference to the endurance test procedure carried out by Wu et al. and then spoke of a "wake-up procedure" and not a "wake-up effect." In 2013, Zhou et al. <cit.> were the first to truly study the wake-up (WU) effect and to name it as such. This time, the argument of increased leaks is no longer mentioned, although this is not conclusively proven in this article. Thanks to the electrical characterization PUND technique, it was already observed in previous works that the WU effect does not result from an increase in leakage currents <cit.>. Moreover, Zhou et al. observe that as the measurement frequency increases, Pr decreases, while it would increase at increasing pulse voltage amplitudes. It will be shown later that a WU effect is also present for AFE ZrO_2, a detailed observation of the wake up and endurance properties of the analyzed films is present in the supplemental materials. This WU effect, although almost identical from the perspective of current shifts along cycling in FE HZO and AFE ZrO_2, cannot be defined as an increase of P_r. Figure <ref>a) and b) depict current versus voltage (I-V) curves for FE HZO and AFE ZrO_2, respectively, whereas Figure <ref>c) and d) illustrate P-E loops for the same samples. Dashed lines represent the behavior of the samples in their pristine state, while solid lines indicate their behavior after 10^3 endurance cycles. At the pristine stage, FE HZO exhibits 4 switching current peaks : one pair at positive voltages and another one at negative voltages (dashed lines Figure <ref>(a)). Along cycling, each pair would progressively merge (solid lines Figure <ref>(a)), leading to the hysteresis loop on Figure <ref>c) (solid lines). For simplicity, observed peaks at the pristine stage for the FE HZO will be referred as "double peak" or "double peak phenomenon" from now on. For positive voltages, the left peak shifts towards higher voltage values, whereas the right peak shifts towards lower voltage values, as shown by the red arrows. In terms of FE domains, this implies that certain FE domains are undergoing switching at lower voltage levels, while others are switching at higher values. 
With an increasing number of cycles, low-voltage switching domains transition to higher voltage values, while high-voltage switching domains transition to lower values, eventually resulting in the merging of the two peaks. In FE HZO thin films grown by sputtering, this effect has been already well described <cit.>. Although WU effect is rarely mentioned for AFE, we observed similar current peak displacement for ZrO_2 than for FE HZO as highlighted by the red arrows on figure <ref>(a) and (b). In contrast to FE HZO, the pristine values for AFE ZrO_2 exhibit different signs. It has to be mentioned that P_r doesn't apply for AFE, since around zero volt AFE are showing the same behavior as LD. Nevertheless, on figure <ref>(d), the two hysteresis loops of the AFE PE curves have smaller coercive fields and a higher P_s between pristine and woken states, leading to an increase in the ESD on Figure <ref>(a) as P_s increases and loss decreases. This intriguing similarity between the double peak phenomenon in FE and the AFE behavior has already been discussed <cit.>. The non-uniform distribution of the internal electric field is likely attributed to unevenly distributed charged defects, such as oxygen vacancies, particularly near the electrodes. This asymmetry in oxygen vacancy concentration, often induced by the reduction of the doped HfO_2 layer by metal nitride electrodes, is a potential source of the internal field in the pristine material. The non-uniform distribution of oxygen vacancies near the electrodes may create an asymmetric internal field. In the process of electric field cycling, oxygen vacancies might diffuse into the bulk regions of fluorite-based films, triggering the wake-up process and resulting in the merging of switching current peaks in the case of FE HZO. Subsequent investigations have reported a redistribution of charges associated with oxygen vacancies <cit.>. Another plausible mechanism for the wake-up effect involves field-cycling-induced phase transitions <cit.>. Lomenzo et al. <cit.> initially proposed that the transition from the tetragonal (t-) to the ferroelectric orthorhombic phase (o-phase) underlies the wake-up effect. They observed a decrease in dielectric permittivity and an increase in P_r with an increasing number of electric field cycles, possibly indicating a phase transition from a non-ferroelectric phase with higher permittivity to a ferroelectric phase with lower permittivity. Additionally, Grimley al. employed scanning transmission electron microscopy (STEM) and impedance spectroscopy to observe a phase transition from monoclinic (m-) to o-phase <cit.>. In essence, phase change and the redistribution of defects can induce the pinning of domains, leading to WU in both FE and AFE <cit.>. Despite the fact that some authors have considered that the double peak is not similar to AFE phenomena <cit.>, the double peak on the positive voltage side can be considered as resulting from interactions between some negatively charged regions and positively charged regions or screened regions that define the switching current at the pristine stage. And after cycling, domains tend to homogenize, similarly to the observed phenomena for both FE HZO and AFE ZrO_2. Further investigations on the microstructure and oxygen vacancies re-organization would help to clearly determine if both phenomena have the same origin or not. FE and AFE can reach high polarization values for low applied electric fields compared to dielectrics. 
This material functional property can be tailored for embedded energy storage capacitors of nanometer size that can reach high energy storage density with low losses for low applied electric fields. The electrical storage can be assessed on the thin film capacitor by calculating the energy storage density. By definition, the total energy W stored in a capacitor (expressed in joules) is the total work done in establishing the electric field from an uncharged state <cit.>: W = ∫_0^Q V(q) dq By considering the geometry of our Metal/Insulator/Metal (MIM) capacitors (the thickness of the insulator and the surface of the electrodes), this leads to the following expression for the energy-storage density (ESD): W_ESD = ∫_P_r^P_max E dP (upon discharging) where it is considered that the ESD is equal to W_ESD. As said previously, the total energy stored in equation <ref> is calculated upon charging, starting from an uncharged state, while the ESD is calculated upon discharging, because the definition considers a perfect linear dielectric and therefore does not take the losses (due to leakage current in the case of an LD) into account. Then, the loss can be calculated as: W_loss = ∫_P_r^P_max E dP (upon charging) - W_ESD As a consequence, the efficiency (in percent) of the charge/discharge cycle is given by: η = W_ESD/(W_ESD + W_loss) × 100 These calculations for ESD, loss and η are now standard performance indicators for fluorite-based capacitors <cit.>. Figure <ref> shows the ESD and η as a function of the number of cycles for an applied field of 3.5 MV/cm on the four analyzed samples, and also for ZrO_2 cycled at 4 MV/cm. Contrary to the FE HZO layer, the breakdown field of the AFE ZrO_2 layer is higher, allowing the film properties to be displayed at 4 MV/cm. As expected, on Figure <ref>(a) one can observe the very high energy density of AFE ZrO_2 compared to the other samples. For LD HZO and ZrO_2, the ESD is very low due to the low values of polarization when applying an electric field. However, these samples have better endurance properties, as they do not experience breakdown at 10^7 cycles, contrary to FE HZO and AFE ZrO_2. For FE HZO, at the early stage of cycling, the capacitor is very similar to AFE ZrO_2 because of the double peak phenomenon. As the number of cycles increases, the switching peaks start to merge and the ESD therefore increases, leading to a higher ESD for FE HZO than for AFE ZrO_2 at 10^3 cycles. On Figure <ref>(b), as one could expect, the most efficient samples are the LD ones. Their η is nevertheless not exactly equal to 100% because of leakage currents, and they have the lowest ESD of all samples. As the hysteresis of FE HZO is wide open, it has the highest losses and hence shows the lowest η of the four samples, ≈ 55%. At the same time, the high P_s of the FE sample makes it reach the highest ESD value at 95J/cm^3. Finally, the reason why AFE materials are considered a better option for supercapacitors than simple FE and LD inorganic electrostatic capacitors is that their efficiency falls in between FE and LD, with an η of 75%, while their ESD is almost as high as that of FE samples, reaching 52J/cm^3 at 3.5MV/cm. This value can be further increased up to 84J/cm^3 at a higher electrical field (4.0MV/cm). The current literature for nanosupercapacitors using FE, AFE but also relaxor-ferroelectric (RFE) hafnium- and zirconium-based fluorite materials is compared with our results in Figure <ref>. 
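The energy metrics defined above can be evaluated numerically from the charging and discharging branches of a measured P-E loop. The Python sketch below is one possible implementation based on trapezoidal integration of E dP; the function name and the placeholder linear-dielectric branches are illustrative assumptions, not the processing code used in this work.

import numpy as np

def energy_metrics(e_charge, p_charge, e_discharge, p_discharge):
    """ESD, loss and efficiency from the two branches of a P-E loop.

    Fields in MV/cm and polarizations in uC/cm^2, each branch ordered from low
    to high polarization. Returns (W_ESD, W_loss, eta) with energies in J/cm^3.
    """
    e_c = np.asarray(e_charge) * 1e8       # MV/cm -> V/m
    p_c = np.asarray(p_charge) * 1e-2      # uC/cm^2 -> C/m^2
    e_d = np.asarray(e_discharge) * 1e8
    p_d = np.asarray(p_discharge) * 1e-2

    w_total = np.trapz(e_c, p_c)           # integral of E dP upon charging (J/m^3)
    w_esd = np.trapz(e_d, p_d)             # integral of E dP upon discharging (J/m^3)
    w_loss = w_total - w_esd
    eta = 100.0 * w_esd / (w_esd + w_loss)
    return w_esd * 1e-6, w_loss * 1e-6, eta    # J/m^3 -> J/cm^3

# Placeholder branches for an ideal linear dielectric (identical branches, so
# the loss vanishes and eta -> 100%):
e_axis = np.linspace(0.0, 3.5, 200)        # MV/cm
p_lin = 10.0 * e_axis / 3.5                # uC/cm^2, arbitrary linear response
print(energy_metrics(e_axis, p_lin, e_axis, p_lin))

Applied to an open FE hysteresis loop, the same routine directly yields the larger loss term and hence the lower η discussed above.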
In this comparison, one can observe that only a few papers report higher ESD and η than the present work. Moreover, FE HZO also shows excellent properties for nanosupercapacitor applications compared to what has previously been observed for similar FE materials. A limiting factor to further improve the ESD and efficiency in thin films is the FE and AFE film thickness scaling. Fluorite films have limited energy storage scaling properties due to the increase of the monoclinic phase proportion at large thicknesses <cit.>. However, the ESD achieved for a thin film can be significantly enhanced by transitioning to a multilayered and three-dimensional (3D) structure <cit.>, an aspect that can be explored in future studies. This transition holds the potential to elevate the ESD by several orders of magnitude, promising new avenues for enhanced energy storage capabilities. Investigations into multilayered and 3D architectures thus represent an exciting frontier in the quest for optimizing energy storage efficiency, offering prospects for groundbreaking advancements in the field. § CONCLUSION We investigated the potential of ferroelectric (FE) and antiferroelectric (AFE) fluorite-structured materials, such as hafnium oxide (HfO_2) and zirconium oxide (ZrO_2), for energy-efficient applications, addressing the urgent need to curb the soaring energy demands of the digital age. By integrating these materials into existing CMOS technology, this work demonstrates a forward-looking approach to enhancing electronic device efficiency through advanced energy conversion mechanisms. The research provides a comprehensive analysis of the structural and electrical properties of HZO and ZrO_2 thin films, showcasing their significant potential in solid-state electrostatic energy storage. Moreover, the study compares the energy storage performances of FE, AFE, and LD samples, underlining the superior energy storage density (ESD) and efficiency of AFE materials. This finding is critical, as it highlights the promise of AFE ZrO_2 in energy storage applications, offering a balanced trade-off between high ESD and efficiency. The meticulous methodology, from synthesis to characterization, provides a robust framework for assessing the capabilities of these materials and sets a benchmark for future studies in the field. § ACKNOWLEDGMENT This work was carried out on the NanoLyon technology platform and implemented inside the NanOx4EStor project. We would like to specifically thank Céline Chevalier, Giovanni Alaimo-Galli and Jean-Charles Roux for their involvement in the research project at the NanoLyon platform. The NanOx4EStor project has received funding under the Joint Call 2021 of M-ERA.NET3, an ERA-NET Cofund supported by the European Union's Horizon 2020 research and innovation program under grant agreement No 958174. This work was supported by the Portuguese Foundation for Science and Technology (FCT) in the framework of the M-ERA.NET NanOx4EStor Contract no. M-ERA-NET3/0003/2021, by the Executive Agency for Higher Education, Research, Development and Innovation Funding (UEFISCDI) and by the Agence Nationale de la Recherche (ANR) under the contract ANR-22-MER3-0004-01. 
http://arxiv.org/abs/2405.09201v1
20240515091738
Quantum tomography with $τ$ leptons at the FCC-ee
[ "M. Fabbrichesi", "L. Marzola" ]
hep-ph
[ "hep-ph", "hep-ex", "quant-ph" ]
http://arxiv.org/abs/2405.10019v1
20240516120126
Anomalous radial acceleration of galaxies and clusters supports hyperconical modified gravity
[ "Robert Monjo", "Indranil Banik" ]
astro-ph.CO
[ "astro-ph.CO", "astro-ph.GA" ]
Robert Monjo (ORCID 0000-0003-3100-2394), Department of Algebra, Geometry and Topology, Complutense University of Madrid, Pza. Ciencias 3, E-28040 Madrid, Spain, rmonjo@ucm.es; Indranil Banik (ORCID 0000-0002-4123-7325), Scottish Universities Physics Alliance, University of Saint Andrews, North Haugh, Saint Andrews, Fife, KY16 9SS, UK. General relativity (GR) is the most successful theory of gravity, with great observational support at local scales. However, to keep GR valid over cosmic scales, some phenomena (such as the flat galaxy rotation curves and the cosmic acceleration) require the assumption of exotic dark matter. The radial acceleration relation (RAR) indicates a tight correlation between dynamical mass and baryonic mass in galaxies and galaxy clusters. This suggests that the observations could be better explained by modified gravity theories without exotic matter. Modified Newtonian Dynamics (MOND) is an alternative theory for explaining some cases of flat galaxy rotation curves by using a new fundamental acceleration constant a_0, the so-called Milgromian parameter. However, this non-relativistic model is too rigid (with insufficient parameters) to fit the large diversity of observational phenomena. In contrast, a relativistic MOND-like gravity naturally emerges from the hyperconical model, which derives a fictitious acceleration compatible with observations. This study analyzes the compatibility of the hyperconical model with respect to RAR observations of 10 galaxy clusters obtained from HIFLUGCS and 60 high-quality SPARC galaxy rotation curves. The results show that a general relation can be fitted to most cases with only one or two parameters, with an acceptable chi-square and p-value. These findings suggest a possible way to complete the proposed modification of GR on a cosmic scale. § INTRODUCTION §.§ The dark matter missing gravity problem As is well known, observational tests of General Relativity (GR) show successful results on Solar System scales <cit.>. The good performance of standard gravity seems to be in question only on larger scales <cit.>. It is well known that exotic cold dark matter (CDM) is required to extend GR to cosmic scales. However, the not-yet-discovered CDM particles present strong theoretical challenges, such as explaining the tight empirical relationship between observed gravitational anomalies (attributed to CDM) and the distribution of visible baryonic matter in galaxies <cit.>. This empirical law is known as the mass-discrepancy acceleration relation (MDAR), the mass-luminosity relation <cit.>, the baryonic Tully-Fisher relation (BTFR), or the more general radial acceleration relation (RAR). <cit.> found that the observed RAR in galaxy clusters is consistent with predictions from a semi-analytical model developed in the standard Lambda-CDM (ΛCDM) framework. To explain how the contribution of CDM is determined by that of baryons, some authors suggest that they present a strong coupling that leads to an effective law such as the MDAR/BTFR/RAR <cit.>. However, the lack of direct (or indirect non-gravitational) detection of dark matter suggests a weak or even non-existent coupling between CDM and baryons <cit.>, which is in conflict with these empirical relationships. Moreover, excess rotation occurs only where the Newtonian acceleration a_N induced by the visible matter is lower than a typical scale a_0 ≈ 1.2 × 10^-10m s^-2 (i.e., a_N ≲ a_0), suggesting that it is a space-time problem rather than a matter-type problem. 
This is also consistent with the deficient dark-matter halos that some relic galaxies (above of the a_0 scale) seem to indicate <cit.>. In other cases, the CDM halo hypothesis also predicts a systematically deviating relation from the observations, with densities about half of what is predicted by CDM simulations <cit.>, while the rotation curves appear to be more naturally explained by modified gravity <cit.>. The hypothesis of `dark matter' also presents difficulties in explaining some phenomena such as the absence of the expected Chandrasekhar dynamical friction in cluster collisions, falsified by more than 7 sigmas <cit.>. The lack of dynamical friction on galaxy bars is a strong argument that the central density of CDM in typical disc galaxies has to be a lot smaller than expected in standard CDM simulations <cit.>. Another example is the morphology of dwarf galaxies. According to <cit.>, observed deformations of dwarf galaxies in the Fornax Cluster and the lack of low-surface-brightness dwarfs toward its center are incompatible with ΛCDM predictions. Moreover, the dwarfs analyzed in that study have sufficiently little stellar mass that the observations cannot be explained by baryonic feedback effects, but they are consistent with the Milgromian modified Newtonian dynamics (). Therefore, most observations suggest the need to explore modified gravity as an alternative to the standard model <cit.>. §.§ Beyond the MOND paradigm The MOND paradigm has been deeply explored, from galactic dynamics to the Hubble tension, which is explained by a more efficient (early) formation of large structures such as the local supervoid <cit.>. In fact, RAR has been thoroughly analyzed for galaxy rotation curves collected from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample <cit.>. The results were anticipated over three decades ago by MOND <cit.>, although the form of the transition between the Newtonian and Milgromian regimes must be found empirically. However, the relativistic formulation of MOND was less successful. In particular, Bekenstein proposed a non-cosmological version of Tensor-Vector-Scalar (TeVeS) gravity <cit.> that predicts unstable stars on a scale of a few weeks <cit.>, which is only avoidable with an undetermined number of terms <cit.>. To solve these issues, <cit.> found that, by adding terms analogous to the FLRW action, at least the second-order expansion is free of ghost instabilities. Their model is also capable of obtaining gravitational waves traveling at the speed of light c, which was not the case with the original TeVeS. However, the authors pointed out that it needs to be embedded in a more fundamental theory. Recently, <cit.> proposed a relativistic MOND formulation based on space-time foliation by three-dimensional space-like hypersurfaces labeled by the Khronon scalar field. The idea is very similar to the Arnowitt–Deser–Misner (ADM) treatment in the dynamical embedding of the hyperconical universe <cit.>. Applying perturbation theory to the hyperconical metric, a relativistic theory with MOND phenomenology is obtained, which fits adequately to 123 SPARC galaxy rotation curves <cit.>. The cosmic acceleration derived from it is a_γ 0 := 2γ_0^-1c/t, where t is the age of the universe, c is the speed of light, and γ_0>1 is a projection parameter that translates from the ambient spacetime to the embedded manifold <cit.>. 
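As a rough numerical illustration (a sketch assuming an age of the universe of about 13.8 Gyr, i.e. t ≈ 4.4 × 10^17 s), the size of this hyperconical acceleration scale can be compared with the Milgrom scale a_0 quoted above; the comparison anticipates the equivalence between the two scales that is quantified in the next paragraph.

c = 2.998e8                  # speed of light (m/s)
t_age = 13.8e9 * 3.156e7     # assumed age of the universe (s), ~13.8 Gyr
a0 = 1.2e-10                 # Milgrom acceleration scale (m/s^2)

c_over_t = c / t_age                     # ~6.9e-10 m/s^2
a_gamma0 = 2.0 * c_over_t / 13.0         # a_gamma0 = 2 c / (gamma_0 t) for gamma_0 = 13
gamma0_for_a0 = 2.0 * c_over_t / a0      # gamma_0 that would reproduce a_0 exactly

print(f"c/t = {c_over_t:.2e} m/s^2")
print(f"a_gamma0(gamma_0=13) = {a_gamma0:.2e} m/s^2 vs a_0 = {a0:.1e} m/s^2")
print(f"gamma_0 reproducing a_0 exactly: {gamma0_for_a0:.1f}")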
In contrast to the Milgrom constant a_0, the cosmic acceleration a_γ 0 is a variable that depends on the geometry considered (mainly the ratio between escape speed and Hubble flux). The equivalence between the a_0 and a_γ 0 scales is found for γ_0 ≈ 13 ± 3. In the limit of weak gravitational fields and low velocities, the hyperconical model is also linked to the scalar tensor vector gravity (STVG) theory, popularly known as Moffat's modified gravity (MOG). The MOG/STVG model is a fully covariant Lorentz invariant theory that includes a dynamical massive vector field and scalar fields to modify GR with a dynamical `gravitational constant' G <cit.>. In particular, it leads to an anomalous acceleration of about 2 Gα_G D^2 ≈ 1.1 × 10^-10m s^-2 ≈ 2γ_0^-1c/t for γ_0 ≈ 12, with α_G ≈ 10 and the universal MOG constant D = 6.25 × 10^3 M_⊙^1/2 kpc^-1. However, fixing these parameters by using galaxy rotation curves, MOG fails to account for the observed velocity dispersion profile of Dragonfly 44 at 5.5 sigma confidence, even if one allows plausible variations to its star formation history and thus its stellar mass-to-light ratio. More generally, the number of parameters needed to accommodate most theories to the observations of galaxy clusters is perhaps too large and unnatural. In all cases, the phenomenological parameters (e.g., the CDM distribution profile, the ad-hoc MOND interpolating function μ, and the MOG constant D) need additional theoretical motivation. In contrast, the hyperconical model proposed by Monjo derives a natural modification of GR from minimal dynamical embedding in a (flat) five-dimensional Minkowskian spacetime <cit.>. Therefore, this Letter aims to show how the anomalous RAR of ten galaxy clusters, analyzed by <cit.> and <cit.>, is adequately modeled by the hyperconical modified gravity (HMG) of <cit.>. As <cit.> pointed out, clusters present a larger anomalous acceleration (g ∼ 10^-9m s^-2) than galaxy rotation curves (g ∼ 10^-10m s^-2), reflecting the missing baryon problem that remains a challenge for MOND in galaxy clusters <cit.>. This open issue is addressed here with the following structure: Sect. <ref> summarizes the data used and the HMG model; Sect. <ref> presents the main fitting results and discusses predictions for galaxies and smaller systems; and finally, Sect. <ref> points out the most important findings and concluding remarks. § DATA AND MODEL §.§ Observations used This study uses observational estimates of the radial acceleration relation (RAR; total observed gravity compared to the Newtonian gravity due to baryons) for 10 galaxy clusters (0.0328 < z < 0.0899) that were collected from the HIghest X-ray FLUx Galaxy Cluster Sample (HIFLUGCS). In particular, the galaxy clusters considered are as follows: A0085, A1795, A2029, A2142, A3158, A0262, A2589, A3571, A0576, A0496. Moreover, to compare our results, rotation curves were collected from 60 high-quality SPARC galaxies filtered to well-measured intermediate radii <cit.>. §.§ Radial acceleration from hyperconical modified gravity (HMG) Observations were used to assess whether the empirical RAR is in agreement with HMG as developed by <cit.> and summarized here. Let g be the background metric of the so-called hyperconical universe <cit.>. The metric g is locally approximately given by 
The metric g is locally approximately given by g ≈ dt^2 (1- kr'^2 ) - t^2/t_0^2( dr'^2/1-kr'^2 + r'^2dΣ^2 ) - 2r't/t_0^2dr'dt/√(1-kr'^2) , where k = 1/t_0^2 is the spatial curvature for the current value t_0 ≡ 1 of the age t of the universe, while t/t_0 is a linear scale factor <cit.>, r' ≪ t_0 is the comoving distance, and Σ represents the angular coordinates. The shift and lapse terms of Eq. <ref> lead to an apparent radial spatial inhomogeneity that is assimilated as a fictitious acceleration with adequate stereographic projection coordinates, which is a candidate to explain the Hubble tension <cit.>. On the other hand, any gravitational system of mass M_sys generates a perturbation over the background metric g →ĝ (Eq. <ref>) such that kr'^2→k̂r̂'^2 := kr̂'^2 +2GM_sys/r̂'. Applying local validity of GR (Appendix <ref>), the perturbation term ĥ := ĝ-g is a key of the model (Appendix <ref>). Another key is the stereographic projection of the coordinates r' →r̂' = λ^1/2r' and t →t̂ = λ t, given by a scaling factor λ := 1/(1-γ/γ_0) that is a function of the angular position γ = sin^-1(r'/t_0) and a projection factor γ_0^-1 = γ_sys^-1cosγ_sys, where γ_sys is the characteristic angle of the gravitational system (Appendix <ref>). In an empty universe, γ_0 = γ_U / cosγ_U. We expect γ_U = 1/3π and therefore that γ_0 = 2/3π≈ 2; while the projective factor of maximum causality, γ_0^-1 = 1, arises for γ_U ≈ 0.235π as then γ_U = cosγ_U. When geodesic equations are applied to the projected time component of the perturbation ĥ_tt, a fictitious cosmic acceleration of roughly γ_0^-1c/t emerges in the spatial direction (see Appendix <ref>): |a_Tot - a_N|/c/t≈1/γ_0≈cosγ_sys/γ_sys , where a_N := GM_sys/r^2 is the Newtonian acceleration. However, a time-like component is also found in the acceleration that contributes to the total centrifugal acceleration a_C such that a_C≈√(a_N^2 + 2c/(γ_0 t)), which is useful to model galaxy rotation curves under the HMG framework <cit.>. Alternatively to Eq. <ref>, the cluster RAR is usually expressed as a quotient between total and Newtonian acceleration. That is, a_Tot/a_N≈ 1 - c/a_N γ_0 t , with factor γ_0^-1 = γ_sys^-1cosγ_sys, where the projective angle γ_sys can be estimated from the galaxy cluster approach (Eq. <ref>) or from the general model (Eq. <ref>), respectively, by considering the relative geometry (angle) between the Hubble speed v_H := r/t and the Newtonian circular speed v_C := √(GM_sys/r) or its classical escape velocity v_E := √(2) v_C, as follows: sin^2 γ_cluster(r) ≈ sin^2γ_galaxy - (sin^2γ_galaxy - sin^2 γ_U) v_E^2(r)/ε^2_H v_H^2(r) + v_E^2(r) , sin^2 γ_cluster(r) ≈ sin^2γ_U + (sin^2γ_center - sin^2 γ_U) | v_E^2(r)-ϵ^2_H v_H^2(r)/v_E^2(r) + ϵ^2_H v_H^2(r)| , where the parameter ϵ^2_H is the so-called relative density of the neighborhood (Appendix <ref>), while γ_center = π/2 and γ_U = π/3 or γ_U ≈ 0.235π can be fixed here to set a 1-parameter (ϵ_H) general model from Eq. <ref>. As a second-order approach, this study also assumes that {ϵ_H, γ_galaxy} can be free in our 2-parameter model for clusters (Eq. <ref>). § RESULTS AND DISCUSSION §.§ Fitted values Individually, fitting of Eq. <ref> for the quotient between total and Newtonian acceleration (Eq. <ref>) leads to a square root of the relative density of about ε_H = 38^+29_-11 (90% confidence level; Appendix <ref>, Fig. <ref>). Using the specific model for the clusters (Eq. 
<ref>), all fits provide an acceptable χ^2 (p-value < 0.667) except for the A2029 cluster, which did not pass the χ^2 test for the fixed neighborhood projective angle of γ_0 = 2 (i.e., γ_U = π/3). However, it did for γ_0 = 1 (i.e., γ_U ≈ 0.235π), which implies to use the causality limit for the cosmic acceleration instead of the empty-space limit. Globally, the correlation of RAR values (differences) with respect to the escape-Hubble approach (Eqs. <ref> and <ref>) is slightly higher (R^2 = 0.83) than with respect to the Newtonian acceleration (R^2 = 0.79). The simplest model of fixing γ_U = π/3 and using a single global parameter, ε_H = 40_-6^+8, gives a Pearson coefficient of R^2 = 0.75, while if γ_U = π/3 is replaced by γ_U = 0.235π, we get instead R^2 = 0.83 with ε_H = 60_-8^+20 (90% confidence level). Larger anomalies in acceleration are found for the higher escape speeds (v_E/v_H ∼ε_H) in clusters. However, this is the opposite for galaxies, which experience the maximum anomaly for low escape velocities (v_E/v_H < ε_H), as shown in Fig. <ref>. In clusters, the relative density between the dominant galaxy (BCG) and the neighborhood determines this opposite behavior. The value of v_E/v_H ∼ε_H points to the transition regime between small and large anomalies. §.§ Predictions for galaxy dynamics As discussed in Sect. <ref>, HMG derives a relationship between the Milgrom acceleration a_0 ≈ 1.2 × 10^-10m/s^2 and the cosmic parameter c/t ≈ 6.9 × 10^-10m/s^2, since a_0 ≈ a_γ 0 := 2γ_0^-1c/t for galaxy rotation curves, for an approximately constant γ_0^-1≈ 0.08 <cit.>. However, the geometry of gravitational systems led to a variable value of the projection factor γ_0^-1∈ (0, 1), depending on the ration between escape speed and Hubble flux. According to the general model of projective angles (Eq. <ref>), it is expected that galaxies and galaxy clusters exhibit opposite behaviors, but following the same theoretical curve. Using γ_center=π/2, γ_U=π/3, and ε_H = 56_-12^+22 (obtained from the cluster data), we apply Eqs. <ref> and <ref> to predict the behavior of 60 galaxies, whose data were collected by <cit.>. By directly applying Eq. <ref> to the escape speed of galaxies, a relative anomaly of (a_Tot - a_N)/(c/t) = γ_0^-1 between 0.05 and 0.40 is predicted, close to the observations of γ_0^-1 = 0.07_-0.02^+0.03 in the galaxy rotation curves. The wide range of γ_0^-1 in the clusters depends on the ratio v_e/v_H between the escape speed (v_e) and the Hubble flux (v_H) as well as the central projective angle 0.47π≲γ_center≲ 0.50π and the parameter ε_H ≥ 1. For galaxy rotation curves, this additional dependency is not evident beyond the usual dependence on a_N, according to deep reviews of MOND interpolating functions, showing that γ_0 is almost constant and the actual gravity only depends on the Newtonian acceleration <cit.>. This apparent weakness of the model is easily solved by the fact that an almost constant γ_0^-1 is obtained from the 2-parameter model (Eq. <ref>) with fitting γ_center (Fig. <ref> top left). Moreover, the relation of the rotation curves with v_e/v_H is highly nonlinear, since it is within trigonometric functions. Finally, the correlation between Newtonian acceleration a_N ∝ r^-2 and the flux ratio v_e/v_H ∝ r^-3/2 is very high for galaxies (R ≈ 0.90, p-value < 0.001), so the families of interpolating function f(a_N) remove almost all this nonlinear dependency. In any case, the effective interpolating function of the HMG model is compatible with the best MOND functions (Fig. 
<ref> top right). It is important to note that HMG predicts the form of the interpolating function, which is arbitrary in MOND and must be found from observations. Furthermore, after applying an observational constraint of Eq. <ref> to the galaxy rotation curves with γ_U = π/3, a value of ε_H = 21_-11^+32 is obtained for γ_center = π/2, and ε_H = 18_-10^+28 for γ_center = 0.48π, which are statistically compatible with the cluster-based fitting of ε_H = 56_-12^+22 (Fig. <ref> bottom left). In particular, a value of ϵ_H ≈ 45 is compatible with both datasets but with a wide variability between the different cases. However, the parameter ε_H is not free at all because a significant correlation (R > 0.85, p-value <0.001) of ε_H ∝√(ρ) is found for the galactic mass densities ρ at distances between 50-200 kpc (Appendix <ref>, Fig. <ref>), which gives ε_H ≈√(ρ/ρ_vac) for the vacuum density ρ_vac = 3/(8πGt^2). Therefore, this justifies the name of the parameter ε_H as the square root of the relative density of the neighborhood (Eq. <ref>). Finally, an empirical relationship (R > 0.80, p-value < 0.001) is also found between cosγ_center and log(ε_H) for galaxies, which suggests that γ_center strongly depends on the geometrical features of the gravitational system. §.§ Prediction for small systems For small gravitational systems, the escape velocity v_E is much higher than the Hubble flux v_H, so it is expected that the cosmic effects are negligible (with v_E/v_H ≫ε_H). This is because the ratio between escape speed and Hubble flux is independent of the size of a spherical system with constant density, but smaller systems are usually much denser than larger systems. For example, according to Eq. <ref>, an anomaly of only 6.4_-0.4^+1.0× 10^-17 m/s^2 (90% confidence level) is predicted for the Solar System at a distance of the Pluton orbit (40 AU). The predicted anomaly is even smaller for Saturn at 10 AU, which is well consistent with the null detection of anomalous effects there from Cassini radio tracking data <cit.>. For the Oort cloud, which hypothetically extends between 2 and 200 kAU, the predicted anomaly Δ a := γ_0^-1c/t increases from 2.2_-0.1^+0.4× 10^-14 m/s^2 to 2.3_-0.1^+0.4× 10^-11 m/s^2, respectively. The last one is about 20% of the Milgrom acceleration a_0 ≈ 1.2 × 10^-10 m/s^2 and could therefore be detected in the future. The most aligned finding is that shown by the work of <cit.>, who suggest that Milgromian gravity could explain the observed anomalies of extreme trans-Neptunian objects such as the Oort cloud (2–200 kAU, up to 20% of a_N). <cit.> claimed that the farthest Kuiper Belt objects (∼ 250 AU) also present a MOND signal, but the orbit integrations performed by <cit.> suggest that this interpretation neglects the crucial role of the external field direction rotating as the Sun orbits the Galaxy. Therefore, these findings require further analysis to compare them with the hypothesis of a ninth planet in the trans-Neptunian region <cit.>. In fact, <cit.> exclude the possible effects of MOND on scales up to about 5–10 kAU, which is more consistent with the findings of <cit.>. In the case of wide binaries, the typical escape speed at r ∼ 0.1 pc is about v_E = 500 m/s, while the Hubble flux is v_H = r/t ∼ 7× 10^-3 m/s. Thus, the Newtonian acceleration is a_N = 1/2 v_E^2/r ∼ 4.1 × 10^-11 m/s^2 < a_0≈ 1.2 × 10^-10 m/s^2, which is theoretically within the classical MOND regime of Milgrom theory (a_N < a_0). 
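These wide-binary estimates follow from elementary arithmetic; the short sketch below reproduces them, assuming t ≈ 4.4 × 10^17 s for the age of the universe and taking the quoted v_E ≈ 500 m/s as given rather than deriving it from a particular binary mass.

# Order-of-magnitude check of the wide-binary numbers quoted above.
pc = 3.086e16            # parsec in metres
r = 0.1 * pc             # typical wide-binary separation (m)
t_age = 4.4e17           # assumed age of the universe (s)
v_e = 500.0              # quoted escape speed (m/s)
a0 = 1.2e-10             # Milgrom acceleration scale (m/s^2)

v_h = r / t_age                     # Hubble flux, ~7e-3 m/s
a_n = 0.5 * v_e**2 / r              # Newtonian acceleration, ~4.1e-11 m/s^2
print(f"v_H = {v_h:.1e} m/s")
print(f"a_N = {a_n:.1e} m/s^2 (below a_0 = {a0:.1e} m/s^2)")
print(f"v_E / v_H = {v_e / v_h:.1e}")   # ~7e4, the escape flux used in the next step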
However, the escape flux is v_E / v_H = 7.1 × 10^4, and therefore we expect a very low anomaly (i.e., a large projective angle γ_sys). Assuming that γ_center=π/2, γ_U=π/3 and a global value ε_H = 40_-20^+30 in Eq. <ref>, the projective angle γ_sys = π/2 - 2.20_-0.35^+0.14× 10^-4π, which corresponds to a projection parameter of γ_0^-1 = γ_sys^-1cosγ_sys = 4.4^+0.7_-0.3× 10^-4, so the acceleration anomaly would be Δ a := a_Tot - a_N = γ_0^-1c/t = 3.1^+0.4_-0.2× 10^-13 m/s^2 (90% interval). The prediction corresponds to 2.8 × 10^-13 m/s^2 < Δ a < 4.3 × 10^-13 m/s^2 at the 95% confidence level. Therefore, in any case, the acceleration anomalies expected for rapid-escape systems are less than 1% of the original Milgrom constant a_0 ≈ 1.2× 10^-10 m/s^2. This result is consistent with recent comparisons between standard gravity and MOND with data from Gaia wide-binary systems <cit.>. However, some authors dispute these results by using different data selection criteria (see, for instance, ). § CONCLUDING REMARKS Acceleration is not a geometrical invariant, but depends on the reference system or framework considered. The hyperconical model showed that it is possible to derive local-scale general relativity (that is, HMG) to model gravitational systems with anomalous acceleration similar to that attributed to dark matter or dark energy <cit.>.Other MOND-based relativistic theories also obtained good performance when modeling galaxy rotation curves with a single global parameter based on acceleration. However, parameters other than acceleration are required, given that the classical MOND-based RAR does not extend to clusters and that gravity is mostly Newtonian on scales smaller than about 10 kAU with high precision, even at low acceleration. This Letter presented a generalized applicability of the HMG model for a wide range of acceleration anomalies in gravitational systems. Good agreement was obtained with the data collected from 10 galaxy clusters and 60 high-quality galaxy rotation curves. The technique developed for the perturbed metric follows the geometric definition of the sinus of a characteristic angle γ_sys as a function of the escape speed (v_E) and the Hubble flux (ϵ_H v_H), that is, sinγ_sys^2 - sin^2 γ_U ≈β^2(r)|v_E^2 - ϵ^2_H v_H^2(r)| for γ_U = π/3. The function β(r) does not depend on the speeds, but can be avoided by setting two parameters in γ_sys = γ_sys(γ_center, ε_H): a central projective angle 0.47π≲γ_center≲ 0.50π and a relative density ε_H ≥ 1. From the fitting of the general model of γ_sys (Eq. <ref>) to the cluster RAR data, an anomaly between 0.05 c/t and 0.40 c/t is predicted for the galaxy rotation dynamics, which is statistically compatible with the observations of 0.07_-0.02^+0.03c/t. As for any modified gravity, the challenge was to derive a tight RAR compatible with observations with few free parameters. Classical MOND only has a global free parameter a_0, but does not specify the interpolating function, so MOND actually has a high freedom to fit the observations of galactic dynamics. In contrast, the HMG model derives a unique interpolation function for rotation curves using only two parameters (ε_H, γ_center) that are not totally free, since they are related to the density of matter. For objects of the outer Solar System such as the farthest Kuiper Belt objects or the Oort cloud, anomalies between 10^-14 m/s^2 and 10^-11 m/s^2 are predicted at 1 kAU and at 100 kAU, respectively. 
Similarly, for wide binary systems, anomalies are expected within the range of 2.8 × 10^-13 < Δ a < 4.3 × 10^-13 m/s^2 (95% confidence level). Such small predicted anomalies imply that local wide binaries should be Newtonian to high precision, as is within the observational limits. This work provides a chance to falsify a wide range of predictions of a relativistic MOND-like theory that has previously collected successful results in cosmology <cit.>. In future work, we will address other open challenges, such as the modeling of cosmic structure growth and dynamics as well as the evolution of early stages of the universe. § ACKNOWLEDGEMENTS IB is supported by Science and Technology Facilities Council grant ST/V000861/1. The authors thank Prof. Stacy McGaugh for providing the data for 60 high-quality galaxy rotation curves. The data set corresponding to the 10 galaxy clusters was provided by Prof. Pengfei Li, so we greatly appreciate this kind gesture. § DATA AVAILABILITY In this study, no new data was created or measured. figuresection § PERTURBED VACUUM LAGRANGIAN DENSITY This appendix summarizes the definition of the local Einstein field equations according to the hyperconical model, that is, by assuming that GR is only valid at local scales <cit.>. In particular, the new Lagrangian density of the Einstein-Hilbert action is obtained by extracting the background scalar curvature R_hyp from the total curvature scalar R →Δ R := R - R_hyp as follows: ℒ = 1/16πGΔ R + ℒ_M = 1/16πG(R + 6/t^2) - ρ_M = c^2/16πGR - Δρ , where G is the Newtonian constant of gravitation, R_hyp = -6/t^2 is the curvature scalar of the (empty) hyperconical universe, ℒ_M = -ρ_M is the Lagrangian density of classical matter, and Δρ := ρ_M - ρ_vac is the density perturbation compared to the `vacuum energy' ρ_vac = 3/(8πGt^2) with mass-related event radius r_M := 2GM := 2Gρ_vac 4/3π t^3 = t, where M is a `total mass' linked to ρ_vac. Moreover, the squared escape velocity associated with ρ_vac at r is v_E^2(ρ_vac) = 2Gρ_vac 4/3π r^3 = r^2/t^2 = v_H^2. Therefore, a total density ρ_M leads to a total (classical) squared escape velocity v_E^2(ρ_M) as follows: v_E^2(ρ_M) = 2Gρ_M 4/3π r^3 = 2G(ρ_vac + Δρ) 4/3π r^3 = r^2/t^2 + 2GM/r = v_E^2(ρ_vac) + v_E^2(Δρ) , where we use the definition of M := Δρ 4/3π r^3. Now, let θ_M := M/M≪ 1 be a (small) constant fraction of energy corresponding to the perturbation Δρ, and r_M := 2GM = θ_M t be the radius of the mass-related event horizon. Thus, 2GM/r = θ_M t/r't/t_0 = θ_M t_0/r' =: 2 GM_0/r' . Therefore, the quotient M/r = M_0/r' is as comoving as r/t = r'/t_0. Moreover, the background metric of the universe has a Ricci tensor with components R_00^u = 0 and R_ij^u = 1/3R_u g_ij <cit.>. Since R_hyp = -6/t^2, the Einstein field equations become locally converted to <cit.>: κ P_00 = Δ R_00 - 1/2Δ R g_00 = R_00 - 1/2 R g_00 - 3/t^2g_00 κ P_ij = Δ R_ij - 1/2Δ R g_ij = R_ij - 1/2 R g_ij - 1/t^2 g_ij , where κ = 8πG and P_μν are the stress-energy tensor components. Notice that, for small variations in time Δ t = t - t_0 ≪ t_0 := 1, the last terms (3/t^2 and 1/t^2) are equivalent to consider a `cosmological (almost) constant' or dark energy with equation of state w = -1/3 (varying as a^-2). § HYPERCONICAL MODIFIED GRAVITY (HMG) §.§ Hyperconical universe and its projection This appendix reviews the main features of relativistic MOND-like modified gravity derived from the hyperconical model and referred to here as HMG <cit.>. 
Let H^4 be a (hyperconical) manifold with the following metric: ds^2_hyp≈ dt^2 (1- kr'^2 ) - t^2/t_0^2( dr'^2/1-kr'^2 + r'^2dΣ^2 ) - 2r't/t_0^2dr'dt/√(1-kr'^2) , where k = 1/t_0^2 is the spatial curvature for the current value t_0 ≡ 1 of the age t of the universe, while a/(t) := t/t_0 is a scale factor, r' ≪ t_0 is the comoving , and Σ represents the angular coordinates. Both the (Ricci) curvature scalar and the Friedmann equations derived for k=1 are locally equivalent to those obtained for a spatially flat (K_FLRW = 0) ΛCDM model with linear expansion <cit.>. In particular, the local curvature scalar at every point (r'≡ 0) is equal to <cit.>: R_hyp = -6/t^2 = R_FLRW|_K = 0, a = t/t_0 , as for a three-sphere (of radius t). This is not accidental because, according to <cit.>, the local conservative condition in dynamical systems only ensures internal consistency for k=1. The hyperconical metric (Eq. <ref>) has shift and lapse terms that produce an apparent radial inhomogeneity, which is equivalent to an acceleration. This inhomogeneity can be assimilated as an apparent acceleration by applying some `flattening' or spatial projection. In particular, for small regions, a final intrinsic comoving distance r̂' can be defined by an α-distorting stereographic projection <cit.>, r' ↦ r̂' = r'/(1-γ(r')/γ_0)^α , t ↦ t̂ = t/1-γ(r')/γ_0 , where γ = γ(r') := sin^-1(r'/t_0) is the angular comoving coordinate, γ_0^-1∈ (0, 1) is a projection factor, and α = 1/2 is a distortion parameter, which is fixed according to symplectic symmetries <cit.>. Locally, for empty spacetimes, it is expected that γ_0 ≈ 2; which is compatible with the fitted value of γ_0 = 1.6^+0.4_-0.2 when Type Ia SNe observations are used <cit.>. In summary, the projection factor γ_0 depends on a projective angle γ_sys such that γ_0 = γ_sys/cos(γ_sys) ≥ 1, where γ_0 = 2 corresponds to a total empty projective angle of γ_sys≈π/3, and γ_0 = 1 is the minimum projection angle allowed by the causality relationship of the arc length γ_0 t_0. Therefore, the projective angle for an empty or almost empty neighborhood is approximately γ_neigh = (0.284 ± 0.049)π≲π/3 =: γ_U. §.§ Perturbation by gravitationally bound systems In the case of an (unperturbed) homogeneous universe, the linear expansion of H^4 can be expressed in terms of the vacuum energy density ρ_vac(t) = 3/(8πGt^2), where G is the Newtonian gravitational constant, and thus ρ_vac(t_0) = ρ_crit. That is, one can define an inactive (vacuum) mass or energy ℳ(r) = ρ_vac4/3π r^3 for a distance equal to r with respect to the reference frame origin. Using the relationship between the original coordinates (dt, dr, r dΣ) and the comoving ones (dt, a/(t) dr', a/(t)r'dΣ), the spatial dependence of the metric is now r'^2/t_0^2 = r^2/t^2 = 2Gρ_vac4/3π r^3/r = 2Gℳ(r)/r = v_H^2(r) , where v_H(r) := r/t is the Hubble speed, which coincides with the escape speed of the empty spacetime with vacuum density ρ_vac. A perturbation of the vacuum density ρ_vac→ρ_M(r) := ρ_vac + Δρ, with an effective density Δρ at r > 0, leads to a system mass M_sys := 4/3π r^3 Δρ that is likewise obtained by perturbing the curvature term, r^2/t^2→r^2/t_sys(r)^2 := r^2/t^2 + 2GM_sys/r = v_H^2(r) + v_E^2(r) , with a radius of curvature t_sys(r) ∈ (2GM_sys, t], where v_E(r) := √(2GM_sys/r) is the classical escape speed (Eq. <ref>). An approximation to the Schwarzschild solution can be obtained in a flat five-dimensional ambient space from the hyperconical metric. 
For example, let (t, r⃗, u) := (t, x, y, z, u) ∈ℝ_η^1,4 be Cartesian coordinates, including an extra spatial dimension u in the five-dimensional Minkowski plane. As used in hyperconical embedding, u := t cosγ - t is chosen to mix space and time. Now, it includes a gravity field with system mass M_sys integrated over a distance r̂ such that sin^2 γ := r^2/t^2↦r̂^2/t^2 + 2GM_sys/r̂. Notice that r is a coordinate related to the position considered, in contrast to the observed radial distance r̂ or its comoving distance r̂' := (t_0/t) r̂. With this, first-order components ĝ_μν of the metric perturbed by the mass are: ĝ_tt = 2cosγ - 1 ≈ 1 - r̂^2/t^2 - 2GM_sys/r̂ , ĝ_r'r' = - t^2/t_0^21/cos^2γ = - t^2/t_0^2(1 - r̂^2/t^2 - 2GM_sys/r̂)^-1≈ - t^2/t_0^2(1 + r̂^2/t^2 + 2GM_sys/r̂) , ĝ_r't = t/t_0tanγ = t/t_0r̂/t(1 - r̂^2/t^2 - 2GM_sys/r̂)^-1/2 ≈ t/t_0r̂/t + O(r̂^3/2t^3) , ĝ_θθ = - t^2/t_0^2r̂'^2 , ĝ_φφ = - t^2/t_0^2r̂'^2 sin^2θ , where the hyperconical model is recovered taking M_sys = 0. Therefore, assuming linearized perturbations of the metric ĝ_μν = ĝ_μν^back + ĥ_μν with ĝ_μν^back := ĝ_μν|_M_sys= 0, we can find a local approach to the Schwarzschild metric perturbation h|_Schw as follows <cit.>: ĝ_Schw :≈ [η_μν + (ĝ_μν - ĝ_μν^back)] dx^μ dx^ν≈ ≈ (1- 2GM_sys/r̂) dt̂^2 - t^2/t_0^2[(1 + 2GM_sys/r̂) dr̂'^2 + r̂'^2dΣ^2 ] + shift , which is also obtained for ĝ_μν when r̂/t ≪ 1, that is lim_(r̂/t_0) → 0 [ĝ_μν] ≈ĝ_Schw. The shift term is neglected in comparison to the other terms, especially for geodesics. Our result is aligned to the Schwarzschild-like metric obtained by <cit.> for FLRW metrics, specifically for the case of K=0. In summary, the first-order approach of the 5-dimensionally embedded (4-dimensional) hyperconical metric (Eq. <ref>) differs from the Schwarzschild vacuum solution by the scale factor t^2/t_0^2 and by a negligible shift term. Therefore, the classical Newtonian limit of GR is also recovered in the hyperconical model, because the largest contribution to gravitational dynamics is given by the temporal component of the metric perturbation h_tt. That is, the Schwarzschild geodesics are linearized by d^2x^μ/dτ^2≈1/2η^μν∂/∂ x^ν h_tt(dt/dτ)^2 , where ĥ_tt≈ - 2GM_sys/r̂. § MODELING RADIAL ACCELERATION §.§ Projective angles of the gravitational system The last appendix derived a general expression for the anomalous RAR expected for any gravitational system according to the projective angles (which depend on the quotient between escape speed and Hubble flux) under the hyperconical universe framework. From the analysis of perturbations (Eq. <ref>), it is expected that any gravitational system (Eq. <ref>) results in a characteristic scale r_cs(M_sys(r)) := t_sys(r) sinγ_sys(r) given by a projective angle γ_sys∈ [π/3, π/2) that slightly depends on the radial distance r and on the mass M_sys. Unlike gravitational lensing, a non-null cosmic projection γ_0^-1 = γ_sys^-1cosγ_sys > 0 is expected for non-concentrated gravitational systems. In particular, we assume that the maximum projective angle (γ_sys = γ_center := π/2, minimum cosmic projection) is produced by small, dense, and homogeneous gravitational systems, while the minimum angle (γ_sys = γ_U := π/3, maximum cosmic projection) corresponds to large systems extended towards an (almost) empty universe (Fig. <ref>). 
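To make these two limiting angles concrete, the short sketch below evaluates the cosmic projection γ_0^{-1} = cos(γ_sys)/γ_sys at the quoted values and inverts the relation numerically; the bisection loops are just a convenient illustration, not part of the original analysis.

import math

def inv_projection(gamma_sys):
    """Cosmic projection factor 1/gamma_0 = cos(gamma_sys) / gamma_sys."""
    return math.cos(gamma_sys) / gamma_sys

print(f"gamma_sys = pi/3 -> 1/gamma_0 = {inv_projection(math.pi / 3.0):.3f}")  # ~0.48, i.e. gamma_0 ~ 2
print(f"gamma_sys = pi/2 -> 1/gamma_0 = {inv_projection(math.pi / 2.0):.3f}")  # 0: no cosmic anomaly

# Angle at which gamma_0 = 1 (causality limit), i.e. gamma_sys = cos(gamma_sys):
lo, hi = 0.1, 1.5
for _ in range(60):                       # simple bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if mid - math.cos(mid) > 0.0 else (mid, hi)
print(f"gamma_0 = 1 at gamma_sys = {lo / math.pi:.3f} pi")        # ~0.235 pi

# Angle reproducing the galaxy-fit projection 1/gamma_0 ~ 0.08:
lo, hi = 1.0, math.pi / 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if inv_projection(mid) < 0.08 else (mid, hi)
print(f"1/gamma_0 = 0.08 at gamma_sys = {lo / math.pi:.3f} pi")   # ~0.46 pi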
Since t_sys^2(r) ∈ (4G^2M_sys^2, t^2] and r_cs^2(M) ∈ (4G^2M_sys^2, 3/4t^2], the characteristic scale r_cs^2(M_sys) increases from r_cs^2(M_sys)=t_sys^2(r)=4G^2M_sys^2 ≪ t^2 up to r_cs^2(M_sys)=3/4t_sys^2(r)=3/4t^2 = t^2sin^2γ_U; that is, sinγ_sys^2(r) ∈ [3/4,1]. The relation of γ_sys with respect to the gravitational mass and the scale of speeds can be estimated from the following properties. According to Eq. <ref>, a gravitational system perturbs the cosmological geometry with (squared) escape speed of v_E^2(r) := 2GM_sys/r higher than the Hubble expansion speed v_H^2(r) := r^2/t^2, so the projective angle γ_sys is given by: sin^2 γ_sys(r) = r_cs^2(M_sys)/t_sys^2(r) = r_cs^2(M_sys)/t^2 + 2r_cs^2(M_sys) GM_sys/r^3 = ≈ sin^2 γ_neigh + β^2(r) v_E^2 ∼ sin^2 γ_U - β^2(r)ϵ^2_H v_H^2(r) + β^2(r) v_E^2 , where β^2(r) := r_cs^2/r^2 ≫ 1 is an auxiliary function, γ_neigh := sin^-1(r_cs/t) ∈ (0, γ_U) is a characteristic neighbor angle, and ϵ^2_H := sin^2 γ_U / sin^2 γ_neigh - 5/6 ∝ t^2/r_cs^2 ∝ρ/ρ_vac is a relative density of the neighborhood matter (ρ) with respect to the vacuum density (ρ_vac; Eq. <ref>). So, roughly speaking, it is sin^2 γ_neigh(r) ∼5/6sin^2 γ_neigh(r) = sin^2 γ_U - ϵ^2_H r_cs^2/t^2. On the other hand, the center of the gravitational system presents a higher density, thus the cosmic projection should be minimum due to the maximum projective angle γ_center≈π/2, that is, 1 ≈sin^2 γ_center≈r_cs^2(M_sys)/t_sys^2(r) + 2ϵ_H^2 r_cs^2(M_sys)/t^2∼sin^2 γ_U + β^2(r)ϵ^2_H v_H^2 + β^2(r)v_E^2 . Notice that, for the limit when γ_neigh≈γ_U, it is required that ϵ^2_H ≈1/6 and v_E ≈ 0. The dependency of γ_sys on the auxiliary function β(r) can be removed by taking the quotient of sin^2 γ_sys(r) - sin^2 γ_U (Eq. <ref>) over sin^2γ_center - sin^2 γ_U (Eq. <ref>). Therefore, it is expected that the projective angle γ_sys of every gravitational system presents a general relation similar to sin^2 γ_sys(r) - sin^2 γ_U/sin^2γ_center - sin^2 γ_U∼| v_E^2(r)-ϵ^2_H v_H^2(r)/v_E^2(r) + ϵ^2_H v_H^2(r)| , with two free parameters, ϵ_H ≥1/6 and γ_center≈1/2π. For example, for galaxies and small gravitational systems, the escape speed v_E^2 is strongly related to the Kepler orbital speed v_K^2 ≈ v_E^2/2 ∼ v_E^2, and the projective angle γ_sys(r) ≡γ_galaxy(r) can be estimated by the following galactic relation <cit.>: sin^2 γ_galaxy(r) - sin^2 γ_neigh/sin^2γ_center - sin^2 γ_U∼v_E^2(r)/ϵ^2_H v_H^2(r) + v_E^2(r) , with γ_neigh∼γ_U ≈π/3 and one free parameter, which is ϵ^2_H ≳ 1 if γ_center = π/2 is fixed, or γ_center≲π/2 if ϵ_H = 1 is fixed. Thus, two limiting cases are sinγ_sys≈ 1 ⇒ γ_sys≈π/2 when orbital speed is v_K(r) ∼ v_E(r) ≫ε_H v_H(r), while sinγ_sys≈√(3)/2 ⇒ γ_sys≈π/3 when orbital speed is v_K(r) ∼ v_E(r) ≪ε_H v_H(r), which is the lower limit of the neighborhood projective angle (γ_neigh). On the other hand, radial accelerations (without regular orbits) of large-scale objects such as galaxy clusters are expected to present opposite behavior with respect to Eq. <ref>, since the gravitational center is not a galactic black hole but is close to a dominant galaxy (), and the neighborhood now corresponds to the large-scale environment of the clusters themselves. 
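Before turning to the cluster case, here is a minimal numerical sketch of the general and galactic relations just written down; the fixed angles γ_U = π/3 and γ_center = π/2 and the sample speeds are placeholders chosen only to reproduce the two limiting regimes quoted in the text.

```python
import numpy as np

GAMMA_U = np.pi / 3          # empty-universe projective angle
GAMMA_CENTER = np.pi / 2     # dense, homogeneous centre

def sin2_gamma_general(v_E2, v_H2, eps_H, gamma_center=GAMMA_CENTER):
    """General relation: gamma_sys between the empty (gamma_U) and central
    (gamma_center) limits, set by escape vs. Hubble speeds."""
    s2_U, s2_c = np.sin(GAMMA_U)**2, np.sin(gamma_center)**2
    frac = np.abs(v_E2 - eps_H**2 * v_H2) / (v_E2 + eps_H**2 * v_H2)
    return s2_U + (s2_c - s2_U) * frac

def sin2_gamma_galaxy(v_E2, v_H2, eps_H, gamma_center=GAMMA_CENTER,
                      gamma_neigh=GAMMA_U):
    """Galactic relation: gamma_galaxy interpolates between gamma_neigh ~ gamma_U
    (v_E << eps_H v_H) and gamma_center (v_E >> eps_H v_H)."""
    s2_U, s2_c = np.sin(GAMMA_U)**2, np.sin(gamma_center)**2
    frac = v_E2 / (eps_H**2 * v_H2 + v_E2)
    return np.sin(gamma_neigh)**2 + (s2_c - s2_U) * frac

# Limiting behaviour quoted in the text:
print(sin2_gamma_galaxy(v_E2=1e4, v_H2=1.0, eps_H=1.0))   # -> ~1   (gamma ~ pi/2)
print(sin2_gamma_galaxy(v_E2=1e-4, v_H2=1.0, eps_H=1.0))  # -> ~3/4 (gamma ~ pi/3)
```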
Therefore, the projective angle γ_cluster of the largest structures is approximated by the following cluster relation: sin^2 γ_galaxy - sin^2 γ_cluster(r)/sin^2γ_galaxy - sin^2 γ_U∼v_E^2(r)/ϵ^2_H v_H^2(r) + v_E^2(r) , where v_E^2(r) is the escape speed of the clusters, γ_center∼γ_galaxy≲π/2 is the averaged projective angle for galaxies and, now, we expect that the projective angle for clusters is a variable γ_cluster(r) ∈ [π/3, π/2), but close to the neighborhood value γ_neigh∼γ_U = π/3. However, a perfectly homogeneous distribution of low-density galaxies in a cluster will lead to a balance between the different galaxies that form it, so the cluster radial acceleration will be approximately zero (v_E ∼ 0) and anomalies are not expected, thus the projective angle will be γ_cluster≈π/2 for both Eq. <ref> and Eq. <ref>; that is, no significant geometrical differences are expected between the external and internal parts of the cluster (see the last case of Fig. <ref>). Conversely, for irregular clusters (v_E ∼ϵ_Hv_H with ϵ_H ≫ 1 in Eq. <ref> or v_E^2 ≳ϵ^2_H v_H^2(r) in Eq. <ref>), the radial acceleration will be very similar to the cosmic expansion (with angle γ_cluster = γ_U = π/3). Notice that, for very inhomogeneous systems (v_E ≫ϵ_H v_H), Eq. <ref> recovers the behavior of high-density galaxies (Eq. <ref>) with γ_cluster(r) = γ_galaxy(r). Moreover, for v_E ∈ (0, ϵ_H v_H), Eq. <ref> behaves in a similar way as in Eq. <ref> as expected. §.§ Cosmological projection of the Schwarzschild metric Henceforth, the constant of light speed c≡ 1 will not be omitted from the equations so we can compare with real observations later. Let λ be the scaling factor of an α-distorting stereographic projection (Eq. <ref>) of the coordinates (r', u) = (ctsinγ, ctcosγ) ∈ℝ^2, used to simplify the spatial coordinates (r⃗', u) ∈ℝ^4 due to angular symmetry. For nonempty matter densities, we contend that γ_sys depends on the escape speed of the gravity system considered. However, the first-order projection can be performed by assuming that the dependence on distances is weak (i.e., with γ_0^-1 = γ_sys^-1cosγ_sys being approximately constant for each case). Thus, the stereographic projection is given by the scale factor λ such as (): λ = 1/1-γ/γ_0≈1+r'/γ_0 t_0c , where γ = sin^-1 [r'/(t_0 c)] ≈ r'/(t_0 c) is the angular position of the comoving distance r' = (t_0/t) r. Therefore, the projected coordinates are r̂' = λ^α r' ≈( 1+α r'/γ_0 t_0c) r , t̂ = λ t ≈(1 + r'/γ_0t_0c) t , At a local scale, the value of α = 1/2 is required to guarantee consistency in dynamical systems <cit.>, but the parameter α is not essential in this work, since only the temporal coordinate is used in our approach below. Applying this projection to the perturbed metric (Eq. <ref>) and obtaining the corresponding geodesics, it is easy to find a first-order approach of the cosmic contribution to modify the Newtonian dynamics in the classical limit, as shown below (Sec. <ref>). §.§ First-order perturbed geodesics Assuming that the projection factor γ_0^-1 = γ_sys^-1cosγ_sys is approximately constant, the quadratic form of the projected time coordinate (Eq. <ref>) is as follows: d t̂^2 ≈(1 + 2 r'/γ_0 t_0c + 2tṙ'/γ_0 t_0c) dt^2 + higher-order terms . By using these prescriptions, our Schwarzschild metric (Eq. 
<ref>) is expressed in projected coordinates (t̂, r̂') or in terms of the original ones (t, r'); that is, ĝ_Schw = ĝ_μν dx̂^μ dx̂^ν = g_μν dx^μ dx^ν, with ĝ_Schw ≈ (1- 2GM_sys/c^2 r̂) c^2 d t̂^2 - t̂^2/t_0^2 r̂'^2dΣ^2 ≈ g_ttc^2dt^2 + g_ii(dx^i)^2 , and finally, it is locally expanded up to first-order perturbations in terms of γ_0. Notice that, according to Eq. <ref>, the background terms r'^2/t_0^2 do not produce gravitational effects and thus they can be neglected. Here, one identifies a projected perturbation h_tt of the temporal component of the metric, g_tt = η_tt + h_tt = 1 + h_tt, with η_μν = η^μν= diag(1,-1,-1,-1). Thus, if M_sys is assumed to be mostly concentrated in the central region of the gravitional system, the first-order perturbation of the temporal component of the metric is h_tt≈ - 2GM_sys/rc^2(1-α r/γ_0 t c) + 2/γ_0 c(r/t + t/ t_0ṙ') , where the spatial projection r̂≈ (1+α r / (γ_0 ct)) r is considered (from Eq. <ref>), and the relation between comoving distance r' and spatial coordinate r is also used (r'/t_0 = r/t). Under the Newtonian limit of GR, the largest contribution to gravity dynamics is given by the temporal component of the metric perturbation h_tt. That is, Schwarzschild geodesics (Eq. <ref>) produce both time-like and space-like acceleration components from the metric perturbation h_tt, d^2 ŝ/c^2 dt^2≈1/2∂/∂ x^0 h_tt e_t - 1/2∂/∂ x^i h_tt e_i =: a^t e_t + a^ie_i , where the four-position ŝ := (cΔ t, x^i) = c Δ t e_t + x^i e_i =: c Δ𝐭 + 𝐱∈𝐑^1,3 is assumed, with canonical basis {e_t, e_1, e_2, e_3} and dual basis {e^t, e^1, e^2, e^3}. For a freely falling particle with central-mass reference coordinates 𝐱 = x^ie_i = (r, 0, 0) = 𝐫∈ℝ^3, it experiences an acceleration of about d^2 ŝ/dt^2 = a^te_t + a^re_r ≈ (ṙ'/t_0- r + 1/2α r_M/γ_0t^2) e_t - (GM_sys/r^2+c/γ_0 t)𝐫/r ≈ a_N - c/γ_0 t𝐫/r - r/γ_0t^2 e_t , where r_M := 2GM_sys/c^2 ≪ r is the Schwarzschild radius, which is neglected compared to the spatial position r. That is, an acceleration anomaly is obtained mainly in the spatial direction, about |𝐚 - a_N| ≈γ_0^-1c/t for 𝐚 := a^r e_r. However, the total acceleration also has a time-like component, that is, in the direction e_t. In particular, for a circular orbit with radius r, and taking into account the non-zero temporal contribution to the acceleration in the hyperconical universe with radius ct <cit.>, the total centrifugal acceleration is v^2/c^2e_s = -(ct e_t e^t + x^i e_i e^i) d^2ŝ/c^2dt^2≈ ct( r/γ_0c^2t^2) e_t + (GM_sys/c^2r^2+1/γ_0 ct)x^i x_i/r e_i , where e_s is an effective space-like direction (||e_s||^2 = e_se^s = -1), while the absolute value of the velocity is given by v^4/c^4 = - || v^2/c^2e_s||^2 ≈(GM_sys/rc^2)^2 +2GM_sys/γ_0 t c^3⟹ v ≈√(GM_sys/ r) if GM_sys/r^2 >> 2c/γ_0 t =: a_γ 0 v ≈√(2GM_sysc/γ_0 t) if GM_sys/r^2 << 2c/γ_0 t = a_γ 0 , which satisfies two well-known limits of Newton's dynamics and Milgrom's (Eq. <ref> right), where a_0 is the Milogrom's acceleration parameter and M_sys = M_sys(r) is the total mass within the central sphere of radius r. Finally, the velocity curve v=v(r) can be reworded in terms of the Kepler speed v_K := √(GM_sys(r)/r). Therefore, the predicted mass-discrepancy acceleration relation for rotation curves is (v/ v_K)^2 ≈√(1 + 1/|a_N|2c/γ_0 t) ⟹ a_C/a_N≈√(1 + 1/|a_N|2c/γ_0 t) , where a_C = v^2/r is the total radial acceleration and a_N = GM_sys/r^2 is the Newtonian acceleration. However, the absence of rotation in galaxy clusters leads to a radial acceleration similar to Eq. <ref>. 
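As a worked numerical example of the two limits just derived, the sketch below evaluates a_γ0 = 2c/(γ_0 t) together with the predicted circular speed and total radial acceleration; the adopted age of the universe, γ_0 = 2 and the sample mass and radius are assumptions made purely for illustration.

```python
import numpy as np

C = 299_792_458.0       # speed of light [m/s]
T_AGE = 4.35e17         # assumed present age of the universe [s] (~13.8 Gyr)
G = 6.674e-11           # Newton's constant [m^3 kg^-1 s^-2]

def a_gamma0(gamma_0=2.0, t=T_AGE):
    """Characteristic cosmic acceleration a_gamma0 = 2c/(gamma_0 t)."""
    return 2.0 * C / (gamma_0 * t)

def total_radial_acceleration(a_N, gamma_0=2.0, t=T_AGE):
    """Predicted total acceleration: a_C = a_N * sqrt(1 + a_gamma0/|a_N|)."""
    return a_N * np.sqrt(1.0 + a_gamma0(gamma_0, t) / np.abs(a_N))

def circular_speed(M_sys, r, gamma_0=2.0, t=T_AGE):
    """Rotation speed from v^4 = (G M/r)^2 + 2 G M c / (gamma_0 t)."""
    return ((G * M_sys / r)**2 + 2.0 * G * M_sys * C / (gamma_0 * t))**0.25

print(a_gamma0())                                  # ~6.9e-10 m/s^2
print(circular_speed(1e41, 3.0e20) / 1e3, 'km/s')  # Newtonian plus cosmic term
```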
In any case, the projection factor γ_0^-1 = γ_sys^-1cosγ_sys depends on the projective angle γ_sys, which can be estimated from the galaxy cluster approach (Eq. <ref>) or from the general model (Eq. <ref>), respectively, as follows: sin^2 γ_cluster(r) ≈ sin^2γ_galaxy - (sin^2γ_galaxy - sin^2 γ_U) v_E^2(r)/ε^2_H v_H^2(r) + v_E^2(r) , sin^2 γ_cluster(r) ≈ sin^2γ_U + (sin^2γ_center - sin^2 γ_U) | v_E^2(r)-ϵ^2_H v_H^2(r)/v_E^2(r) + ϵ^2_H v_H^2(r)| . where γ_center can be fixed to γ_center = π/2 to test the 1-parameter (ϵ_H) general model of Eq. <ref>, while this study assumes that {ϵ_H, γ_galaxy} are free in our 2-parameter model for clusters (Eq. <ref>). Finally, the empty projective angle is usually set as γ_U = π/3 <cit.>, which produces a projection factor of γ_U^-1cosγ_U ≈1/2. §.§ Individual fitting Observed data on the RAR of 10 clusters (0.0328 < z < 0.0899) were collected from the study performed by <cit.>. Individually, fitting of Eq. <ref> for the anomaly between the total spatial acceleration and Newtonian acceleration (Eq. <ref>) leads to a square root of the relative density of about ε_H = 38^+29_-11 (90% confidence level, Fig. <ref>). All these results are obtained by fixing the constants γ_U = π/3 and γ_center = π/2. The general model (Eq. <ref>), with only one free parameter (ε_H), gave good results for eight of the ten clusters, showing difficulties in fitting the more available data from the A2029 and A2142 clusters (Table <ref>). If two parameters are considered (ε_H, γ_center), the results considerably improve except for the A2029 cluster, which requires changing γ_U^-1cosγ_U → 1 to be compatibly fitted to the observations. The same 2-parameter (ε_H, γ_center) general model (Eq. <ref> with γ_U = π/3) was also applied to the 60 high-quality galaxy rotation curves, obtaining an acceptable result for all of them. The case of 1 parameter (ε_H free when γ_center = 0.48π is set) showed a slightly larger chi-square statistic and p-value, but these are also acceptable for all of them. Moreover, an empirical relationship is found between the single parameter ϵ_H and the square root of a relative density, which defines an identity ρ(r_typ)/ρ_vac≅ 1 in units of vacuum density ρ_vac := 3/(8πGt^2) for an observed density ρ(r_typ) that is defined at a typical neighborhood distance of approximately four times the maximum radius (r_typ≈ 4× r_max, fitted at R = 0.85, p-value <0.0001, Fig. <ref> left) for each galaxy rotation curve, and equal to the minimum radius (r_typ≈ r_min) for the data of each cluster. This typical distance corresponds to r_typ≈ 50-200kpc. Finally, when the 2-parameter HMG model is considered for galaxies, an additional relationship is found between ϵ_H and γ_center: cos(γ_center) = cos(0.4610^+0.0013_-0.0014π) - (0.020 ± 0.002)ln(ε_H) for 1 ≤ε_H < 400, with a Pearson coefficient of R = 0.80 (p-value <0.0001, Fig. <ref> right). aasjournal
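To make the fitting procedure concrete, the following sketch shows how a one-parameter fit of ε_H could be set up with scipy's curve_fit; the accelerations and speeds are random placeholders standing in for the cluster data of <cit.>, and the model inside model_acc is a simplified reading of the equations above, so the recovered value is only a self-consistency check, not a reproduction of the quoted ε_H = 38.

```python
import numpy as np
from scipy.optimize import curve_fit

S2_U = np.sin(np.pi / 3)**2           # sin^2(gamma_U)

def model_acc(X, eps_H, gamma_center=np.pi / 2, t=4.35e17, c=3e8):
    """Illustrative 1-parameter model: Newtonian acceleration a_N plus the
    cosmic term 2c/(gamma_0 t), with gamma_0 set by the cluster relation."""
    a_N, vE2, vH2 = X
    s2_c = np.sin(gamma_center)**2
    frac = vE2 / (eps_H**2 * vH2 + vE2)
    s2 = s2_c - (s2_c - S2_U) * frac            # cluster relation (inverted trend)
    gamma_sys = np.arcsin(np.sqrt(s2))
    gamma_0 = gamma_sys / np.cos(gamma_sys)
    return a_N * np.sqrt(1.0 + 2.0 * c / (gamma_0 * t) / np.abs(a_N))

# a_N, v_E^2, v_H^2 and the observed accelerations would come from the cluster
# data; random placeholders are used here to exercise the fitting machinery.
rng = np.random.default_rng(0)
a_N = 10.0**rng.uniform(-12, -9, 40)
vE2 = rng.uniform(1e10, 1e12, 40)
vH2 = rng.uniform(1e8, 1e9, 40)
a_obs = model_acc((a_N, vE2, vH2), eps_H=38.0) * rng.normal(1.0, 0.05, 40)

popt, pcov = curve_fit(model_acc, (a_N, vE2, vH2), a_obs, p0=[10.0])
print(popt, np.sqrt(np.diag(pcov)))   # recovered eps_H and its 1-sigma error
```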
http://arxiv.org/abs/2405.09706v1
20240515210914
Canonical transformations applied to the non-free Landau electron
[ "Jorge A. Lizarraga" ]
quant-ph
[ "quant-ph" ]
jorge_lizarraga@icf.unam.mx Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México, 62210, Cuernavaca, México The method previously used to solve the Schrödinger equation by a unitary transformation for an electron under the influence of a constant magnetic field is used to obtain a non-free Landau electron wave function. The physical meaning of this wave function is discussed based on the conserved properties of the transformed Hamiltonian. Canonical transformations applied to the non-free Landau electron Jorge A. Lizarraga May 20, 2024 ================================================================= § INTRODUCTION Previously, Boon and Seligman showed how to recover Landau's solution for a particle under the effect of a constant magnetic field, described by the Landau gauge, using a canonical transformation corresponding to a finite linear symplectic transformation in phase space <cit.>, following the approach studied by Moshinsky and Quesne <cit.>. This result has been called the free Landau electron because the wave function along one axis (depending on the gauge selected) is a plane wave. However, this description turns out to be counterintuitive: classically speaking, when the system is restricted to the x-y plane, a magnetic field is expected to produce circular motion, which cannot be fully described by plane waves. In this work, I apply the same canonical transformation given in reference <cit.> to show that there is a non-free Landau electron wave function for the same system, characterized by oscillations along both axes of the x-y plane. Although at first glance one could think that this wave function is a consequence of the degeneracy of the system, it turns out that the appearance of this second term in the wave function has its origin in the conserved quantities of the Hamiltonian. § CANONICAL TRANSFORMATION The system Hamiltonian is written as Ĥ=1/2m(p̂+e/cA)^2, where e>0 is the electron charge, m is its mass, c the speed of light, p̂=-iħ∇ is the particle momentum and A=(A_x,A_y,A_z) is the vector potential such that B=∇× A=Bk̂. The gauge is selected as A=(-By,0,0), which is known as the Landau gauge. Since the relevant part of the particle dynamics takes place in the x-y plane, the coordinate along the z-axis being unaffected by the magnetic field, and defining ω_c=eB/mc, the Hamiltonian takes the form Ĥ=1/2m(p̂_x-mω_cy)^2+1/2mp̂_y^2. The method of reference <cit.> can be used to calculate the eigenvalues without having to calculate the eigenfunctions (it works even without specifying the gauge used), the spectrum being the Landau levels E_n=ħω_c(n+1/2) with n∈ℕ. Now, we use the same linear transformation as in reference <cit.>, which redefines the canonical (operator) variables (x,y,p̂_x,p̂_y) to the (operator) variables (Q,Q̄,P,P̄), where the overbar distinguishes the second canonical pair: Q=-1/β(p̂_x-β y), Q̄=-1/β(p̂_y-β x), P=p̂_y, and P̄=p̂_x, with β=mω_c. Then the following commutation relations can be calculated, [Q,P]=[Q̄,P̄]=iħ and [Q,Q̄]=[P,P̄]=[Q,P̄]=[Q̄,P]=0, guaranteeing that the new variables are canonical (expression (2.1) in reference <cit.>). The Hamiltonian Eq. (<ref>) is rewritten in these coordinates as Ĥ=1/2mβ^2Q^2+1/2mP^2.
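These commutation relations can be checked symbolically; the sketch below does so with sympy by letting the operators act on a test function, writing the second member of each canonical pair (Q̄, P̄, whose distinguishing accent is easily lost in plain-text renderings) as Qb and Pb.

```python
import sympy as sp

x, y, hbar, beta = sp.symbols('x y hbar beta', real=True)
f = sp.Function('f')(x, y)

# momentum operators and transformed variables acting on a test function
px = lambda g: -sp.I * hbar * sp.diff(g, x)
py = lambda g: -sp.I * hbar * sp.diff(g, y)
Q  = lambda g: -(px(g) - beta * y * g) / beta      # Q    = -(p_x - beta*y)/beta
Qb = lambda g: -(py(g) - beta * x * g) / beta      # Qbar = -(p_y - beta*x)/beta
P  = py                                            # P    = p_y
Pb = px                                            # Pbar = p_x

def comm(A, B, g):
    return sp.simplify(A(B(g)) - B(A(g)))

print(comm(Q, P, f))      # I*hbar*f(x, y)
print(comm(Qb, Pb, f))    # I*hbar*f(x, y)
print(comm(Q, Qb, f))     # 0
print(comm(P, Pb, f))     # 0
```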
To recover Landau's solution <cit.> in terms of the variables x and y, the Moshinsky and Quesne formula is used <cit.>, ψ(x,y)= β/2π∫_-∞^∞∫_-∞^∞exp[iβ/ħ(QQ̄+xy-xQ-yQ̄)] ×ψ(Q,Q̄)dQdQ̄, where ψ(Q,Q̄) is an arbitrary function of the variables Q and Q̄. Due to the independence of the Hamiltonian with respect to the variable Q̄, the solution can be written as ψ(Q,Q̄)=Ψ_n((β/ħ)^1/2Q)ϕ(Q̄), where Ψ_n is the solution of the harmonic oscillator equation and ϕ(Q̄)=exp(ikQ̄/ħ) <cit.>. Under this consideration, the solution obtained is ψ_n,k(x,y)=exp(ikx/ħ)Ψ_n(√(β/ħ)(y-k/β)), which is Landau's solution. However, due to the arbitrariness of the function ψ(Q,Q̄), one can instead define ψ=Ψ_n((β/ħ)^1/2Q)(exp(ikQ̄/ħ)+δ(Q̄-k'/β)), where δ is the Dirac delta function. The first term of this definition gives the expression Eq. (<ref>), while the second term leads to a harmonic oscillator solution along the x-axis times a phase; that is, ψ_n,k,k'(x,y)=exp(ikx/ħ)Ψ_n(√(β/ħ)(y-k/β))+ exp(iβ/ħ(x-k'/β)y)Ψ_n(√(β/ħ)(x-k'/β)). This is the non-free Landau electron wave function, since the particle description along the x-axis is no longer a plane wave alone but also presents oscillations along that axis. § PHYSICAL INTERPRETATION To understand the physical meaning of expression Eq. (<ref>), it is useful to analyze the conserved quantities of the Hamiltonian Eq. (<ref>). In the Heisenberg picture, the evolution of a time-independent operator is given by df̂/dt=1/iħ[f̂,Ĥ]; using Eq. (<ref>) and the identities Eq. (<ref>) and Eq. (<ref>), one can calculate the following variations: dQ/dt=1/mP, dQ̄/dt=0, dP/dt=-1/mβ^2Q, and dP̄/dt=0. The above equalities tell us that this system has two conserved quantities besides the Hamiltonian itself, Eq. (<ref>) and Eq. (<ref>); this is in fact expected, since the Hamiltonian is independent of the quantities (Q̄,P̄). This result tells us that d p̂_x/dt=0 and d/dt(p̂_y-β x)=0, i.e., p̂_x and p̂_y-β x are constants of motion. The first of these conserved operators is the one initially considered by Landau to obtain his solution <cit.>: by solving the eigenvalue equation P̄ψ=kψ and substituting it in the Hamiltonian, one gets the wave function Eq. (<ref>). On the other hand, one can proceed similarly with the conserved operator Q̄, writing down the eigenvalue expression Q̄ψ=-β^-1 k'ψ and substituting the result in the Hamiltonian; the solution one then gets is the second term of expression Eq. (<ref>). It should be mentioned that the operator Eq. (<ref>) has been ignored so far. Insight into the physical meaning of the conserved quantities can be obtained by analyzing the Lorentz force applied to the particle by the magnetic field. This force is F=-e/c v× B; using its coordinate representation in the x-y plane, one obtains the following conserved expressions: d/dt(mdx/dt+β y)=0 and d/dt(mdy/dt-β x)=0. In the Hamiltonian formalism, the Newtonian momentum p=m v does not necessarily coincide with the canonical momentum defined by the Hamiltonian. In fact, using the identities Eq. (<ref>), Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>), it is not difficult to prove that, for this particular gauge selection, p_x=mdx/dt+β y and p_y=mdy/dt. Hence, the conserved expressions Eq. (<ref>) and Eq. (<ref>) automatically become Eq. (<ref>) and Eq. (<ref>). This means that, in order to fully describe the electron dynamics, both of the above conserved quantities are needed.
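As a concrete numerical check of the non-free wave function ψ_n,k,k' constructed above, the following sketch evaluates it on a grid using normalized harmonic-oscillator eigenfunctions; the units (ħ = β = 1) and the values of n, k, k' are arbitrary illustrative choices.

```python
import numpy as np
from scipy.special import eval_hermite, factorial

def ho_eigenfunction(n, xi):
    """Normalized 1D harmonic-oscillator eigenfunction in the dimensionless xi."""
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
    return norm * eval_hermite(n, xi) * np.exp(-xi**2 / 2.0)

def psi_nonfree(x, y, n=1, k=0.5, kp=0.5, hbar=1.0, beta=1.0):
    """Non-free Landau wave function: a plane wave along x times an oscillator
    in y, plus a y-dependent phase times an oscillator in x."""
    s = np.sqrt(beta / hbar)
    term1 = np.exp(1j * k * x / hbar) * ho_eigenfunction(n, s * (y - k / beta))
    term2 = (np.exp(1j * beta * (x - kp / beta) * y / hbar)
             * ho_eigenfunction(n, s * (x - kp / beta)))
    return term1 + term2

# |psi|^2 on a grid shows node structure along both x and y, unlike the free
# Landau solution, which is flat along x.
X, Y = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
density = np.abs(psi_nonfree(X, Y))**2
print(density.shape, density.max())
```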
These results are also valid for the operator representation of the momentum. § DISCUSSION Using the method developed by Moshinsky and Quesne, as applied by Boon and Seligman, it was shown how to obtain a wave function that describes oscillations along both axes of the plane and therefore represents a non-free electron. The authors referenced in <cit.> are aware that this method can produce oscillations in any direction of the plane; however, that observation refers to the possibility of obtaining different wave functions through the selection of different gauges. For instance, choosing the alternative Landau gauge A=(0,Bx,0) leads to a solution with oscillations along the x-axis instead of the y-axis. The situation with the wave function Eq. (<ref>) is different, since it does not depend on the gauge selected. Instead, it is a consequence of the conservation properties of the system and is necessary to describe it fully. In fact, owing to the conservation properties Eq. (<ref>) and Eq. (<ref>), for both the alternative Landau gauge and the symmetric gauge it is possible to find two conserved canonical momenta describing oscillations along each axis of the plane. § ACKNOWLEDGMENT The author is thankful to Thomas Seligman for suggesting a review of his work and for the insightful discussions explaining it.
http://arxiv.org/abs/2405.08711v1
20240514155152
Data-driven Force Observer for Human-Robot Interaction with Series Elastic Actuators using Gaussian Processes
[ "Samuel Tesfazgi", "Markus Keßler", "Emilio Trigili", "Armin Lederer", "Sandra Hirche" ]
cs.RO
[ "cs.RO", "cs.LG", "cs.SY", "eess.SY" ]
Ensuring safety and adapting to the user's behavior are of paramount importance in physical human-robot interaction. Thus, incorporating elastic actuators in the robot's mechanical design has become popular, since it offers intrinsic compliance and additionally provides a coarse estimate of the interaction force by measuring the deformation of the elastic components. While observer-based methods have been shown to improve these estimates, they rely on accurate models of the system, which are challenging to obtain in complex operating environments. In this work, we overcome this issue by learning the unknown dynamics components using Gaussian process (GP) regression. By employing the learned model in a Bayesian filtering framework, we improve the estimation accuracy and additionally obtain an observer that explicitly considers local model uncertainty in the confidence measure of the state estimate. Furthermore, we derive guaranteed estimation error bounds, thus facilitating the use in safety-critical applications. We demonstrate the effectiveness of the proposed approach experimentally in a human-exoskeleton interaction scenario. § INTRODUCTION Robots are increasingly deployed in application scenarios where physical interaction with a human operator is required. In order to achieve safe interaction forces, robots with series elastic actuators (SEAs) are a popular choice and have been employed in safety-critical domains such as physical human-robot interaction <cit.>. Since SEAs decouple motor and load side through an elastic spring, compliance is inherently guaranteed. Moreover, the elongation of the spring is often used as a measure of the interaction force <cit.>, which is particularly useful in rehabilitation robots, for instance to quantify the patient's participation, e.g., in assist-as-needed schemes <cit.>. However, while the spring elongation is readily available in SEAs, it may not reflect an accurate measure of the external force if dynamic effects are acting on the load side <cit.>. Furthermore, it is challenging to isolate specific contributions to the spring interaction torque if multiple forces are acting on the SEA load side, e.g., during human-robot co-manipulation <cit.>. Thus, the use of external force observers has been proposed to mitigate these issues. A widely used approach is the Kalman filter (KF), which accounts for uncertainty in the estimation by adopting a Bayesian framework <cit.>. In <cit.>, for instance, a KF is used to estimate the generalized momentum of an upper-limb exoskeleton to infer human interaction forces. Another work explicitly exploits additional information encoded in the spring torque of an SEA by augmenting the state vector with the external load-side torque and utilizing the KF as a full-state observer for estimation <cit.>.
However, while the KF allows to consider modelling errors through process noise, inaccurate dynamics models still lead to a deteriorating quality of the estimation <cit.>. This is especially prevalent in physical human-robot interaction, as the dynamics are strongly influenced by the human, thus, requiring adaption to the individual user. =-1 A common approach to achieve this adaptation relies on learning a model of the unknown dynamics, which can in turn be used in a learning-augmented, model-based observer. Gaussian process (GP) regression has become popular for learning such models in recent years since it exhibits strong theoretical guarantees such as an inherent variance-bias trade-off, a high data-efficiency and a strong expressivity <cit.>. Moreover, it explicitly provides an uncertainty representation and admits the derivation of prediction error bounds <cit.>, which is particularly useful in safety-critical applications. While GPs have been employed in the context of momentum observers to estimate external forces <cit.>, to the best of the authors' knowledge there is no work using GPs in a KF framework to derive a force observer for robots with series elastic actuators. The general approach of combining GP regression and Bayesian filtering (GP-BayesFilter) is introduced in <cit.>, where the motion model of a robotic blimp is learned using GP regression. However, existing GP-BayesFilters lack theoretical guarantees on the achieved estimation error, which are vital in human-robot interaction. In this work, we propose a novel approach for designing a learning-augmented force observer by combining GP regression with Bayesian filtering to learn residual load-side dynamics of a SEA robot and augmenting the KF prediction model accordingly. Differently to <cit.>, we do not learn a state transition residual directly, but instead propose a data-generation architecture that allows to infer residual dynamics instead, which additionally lends itself for online model inference. Finally, we address the lack of theoretical estimation guarantees of GP-BayesFilters by proposing a novel approach of combining GP prediction error bounds with ellipsoid set bounds of KFs <cit.>. We demonstrate the effectiveness of the proposed approach in human-exoskeleton experiments of a SEA robot with one degree of freedom. § PROBLEM STATEMENT We consider the physical interaction between a human operator and an elastic joint robot, e.g., an exoskeleton with series elastic actuators (SEAs). The dynamics of such a rigid link robot with n∈ℕ elastic joints is described by a set of Euler-Lagrange equations[Notation: Lower/upper case bold symbols denote vectors/matrices, ℝ_+/ℕ_+ all real/integer positive numbers, I_n× n the n× n identity matrix, 0_n× n the n× n matrix with zero entries and ⊕ the Minkowski sum. We denote an ellipsoidal set with E(c,X) with center c and shape matrix X define the set as E(c,X)= {x∈ℝ^n | (x- c)^T X^-1 (x- c) ≤ 1 }. ] <cit.> M(q)q̈ + C(q,q̇)+ N(q,q̇) + τ_s(θ_s,θ̇_s) = τ_ext J(θ_m)θ̈_m + D_m θ̇_m - τ_s(θ_s,θ̇_s) = τ_m, where (<ref>) describes the load-side dynamics and (<ref>) the motor-side dynamics. 
Here, q∈ℝ^n represents the joint angles, θ_m ∈ℝ^n represents the motor angles, M(q)∈ℝ^n× n is the inertia matrix of the rigid link, J(θ_m)∈ℝ^n× n is the inertia matrix of the motors, C(q,q̇)∈ℝ^n× n denotes Coriolis, centrifugal and gravitational terms on the load side, D_m ∈ℝ^n× n denotes the motor damping matrix and N(q,q̇)∈ℝ^n represents other, lumped nonlinear effects on the load-side such as friction. The column vector τ_m describes the torque inputs provided by the motors, while τ_ext corresponds to any external torques acting on the load side, e.g., environmental disturbances or human torques. Both load and motor side are coupled through the elastic transmission torque =-1 τ_s(θ_s,θ̇_s) = K_s(θ_s,θ̇_s) θ_s + D_s(θ_s,θ̇_s) θ̇_s, which is dependent on the spring deformation θ_s=q-θ_m with nonlinear stiffness K_s ∈ℝ^n× n and damping matrix D_s ∈ℝ^n× n. =-1 The goal is to estimate the actively generated torques of the human operator acting on the load-side of the elastic joint robot. To this end, we decompose the external torques τ_ext into passive torques τ_h,pas(q,q̇,q̈) ℝ^n×ℝ^n×ℝ^n →ℝ^n due to inertial, gravitational and viscoelastic torques of the human limb and active torques τ_h,act∈ℝ^n generated by volitional muscle activity, i.e., τ_ext= τ_h,pas+τ_h,act. Since the precise identification of the parameters governing the human passive dynamics is challenging, we merely assume that an approximate model of τ_h,pas is available, e.g., using anthropometric tables <cit.>. Also, we assume that the nonlinear effects N(q,q̇) on the load-side are unknown. This restriction reflects the fact that it is generally difficult to model effects such as friction using parametric approaches. Thus, we can lump all uncertain dynamics components into the unknown function f(z)=N(q,q̇)-τ_h,pas(q,q̇,q̈), where z[q^⊺ q̇^⊺ q̈^⊺]^⊺ is the concatenation of joint angles, velocities and acceleration, such that we can write (<ref>) as M(q)q̈ + C(q,q̇)+ f(z) + τ_s(θ_s,θ̇_s) = τ_h,act, To obtain an accurate estimate of τ_h,act despite the residual dynamics error, we assume access to the following measurements to infer a model of f(·). =-1 The motor torque τ_m, the load-side kinematics {q, q̇, q̈} and the motor kinematics {θ_m, θ̇_m, θ̈_m} are available for model inference. The motor torque τ_m is directly computable from the applied current and motor constant. Moreover, SEAs are typically equipped with encoders on load- and motor-side such that q and θ_m are measurable and angular velocities and accelerations can be obtained through numerical differentiation. Note that we do not assume any force/torque measurements on the load-side, which would require expensive sensors that additionally often suffer from measurement noise. Based on the above, we consider the problem of estimating the active human torque τ_h,act using a model-based observer. Since the estimation of τ_h,act requires an accurate model of the human-robot system, we learn the unknown dynamics component, including the human passive dynamics, and augment the observer with the inferred f(·). Finally, due to the safety-critical application scenario, guarantees for the estimate τ̂_h,act should be obtained. We aim to derive guarantees in the form of probabilistic estimation error bounds Pr(|τ̂_h,act-τ_h,act|≤ρ) ≥ 1-δ, with the upper bound ρ∈ℝ^+ evaluated for a probability level determined by δ∈(0,1). The bound (<ref>) represents the confidence in the estimated torque τ_h,act and ensures reliable use in physical human-exoskeleton interaction. 
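To make the model structure concrete before moving on, here is a minimal one-degree-of-freedom sketch of the coupled load/motor dynamics with a linear spring model; all numerical parameters and the specific gravity and friction terms standing in for C(q,q̇) and N(q,q̇) are illustrative assumptions, not the identified exoskeleton parameters.

```python
import numpy as np

def sea_dynamics(state, tau_m, tau_ext, M=0.05, J=0.02, D_m=0.01,
                 K_s=100.0, D_s=0.5, m_l=1.0, l_c=0.2, g=9.81):
    """Continuous-time 1-DoF SEA model:
        load side : M qdd + C(q,qd) + N(q,qd) + tau_s = tau_ext
        motor side: J thdd + D_m thd - tau_s         = tau_m
    with spring torque tau_s = K_s (q - th_m) + D_s (qd - thd_m)."""
    q, qd, th, thd = state
    tau_s = K_s * (q - th) + D_s * (qd - thd)
    C = m_l * g * l_c * np.sin(q)            # gravity term standing in for C(q, qd)
    N = 0.1 * np.tanh(50.0 * qd)             # smooth friction standing in for N(q, qd)
    qdd = (tau_ext - C - N - tau_s) / M
    thdd = (tau_m + tau_s - D_m * thd) / J
    return np.array([qd, qdd, thd, thdd])

# one explicit Euler step as a usage example
x = np.zeros(4)
x = x + 0.001 * sea_dynamics(x, tau_m=0.5, tau_ext=0.0)
print(x)
```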
Remark: While we consider the estimation of active human torques in this work, the proposed approach is not limited to this application and may also be employed as a general external force observer. In this case, the unknown dynamics are primarily introduced by unmodelled component on the load-side of the robot, e.g., due to nonlinear friction. Thus, no decomposition of the external force τ_ext is required. § PRELIMINARIES To estimate the active human torque, we employ Bayesian filtering together with GP regression. Fundamentals of state estimation using Bayesian filtering are introduced in <Ref>, before explaining GP regression in <Ref>. §.§ State Estimation using Bayesian Filtering A well-studied approach to estimate non-measurable quantities of interest is by means of Kalman filter (KF). While the KF was first developed for linear systems <cit.>, several extensions to nonlinear systems, e.g., the extended KF (EKF) <cit.>, have been proposed since. Due to the nonlinear dynamics of human-exoskeleton system, we adopt the EKF framework in the following. Consider a nonlinear system x_k+1 = f_k(x_k,u_k) + w_k y_k = Hx_k+ v_k at time steps k ∈ℕ^+ with system states x∈ℝ^d, control input u∈ℝ^r and measurement vector y∈ℝ^m as well as nonlinear state transitions fℝ^d×ℝ^r→ℝ^d and linear observation model H∈ℝ^m× d. The process w_k∼N(0,Q_k) and measurement noise v_k∼N(0,R_k) follow a Gaussian distribution with variance Q_k∈ℝ^d× d and R_k∈ℝ^m× m, and are assumed uncorrelated and white, i.e., 𝔼[v_k, w_k] = 0 ∀ k, 𝔼[v_k, v_j] = 0 ∀ k ≠ j and 𝔼[w_k, w_j] = 0 ∀ k ≠ j. Note that in the considered SEA scenario, the measurement model H is typically linear, thus, (<ref>) does not pose a restriction. The EKF provides estimates in the form of a mean state estimate x̂_k together with an uncertainty description through the covariance P_k∈ℝ^d× d. In particular, the EKF follows a Bayesian approach where a prior distribution, i.e., prior mean x̂_k^- and covariance P_k^-, are predicted using the dynamics model (<ref>) and the previous state estimate x̂_k-1, which are then updated by conditioning the prior distribution on measurements y_k to obtain a posterior distribution x̂_k and P _k. The complete procedure is shown in <Ref>. An advantage of the KF framework is that it permits an elegant uncertainty propagation from the prediction to the update step through the Kalman gain K_k (Alg. <ref>, line 8). When process noise Q_k is dominant, i.e., high model uncertainty, the prior covariance P_k^- becomes large, resulting in a high gain K _k and giving more weight to the measurement y_k. Conversely, when measurement noise R_k dominates, the gain K _k is low and less weight is put to the measurement y_k. Thereby, the KF provides a convenient framework to include local model uncertainty. Thus, if the deployed prediction model (<ref>) is augmented by a learned component, it can be beneficial to explicitly include the uncertainty of the inferred model in the estimation. While the process noise covariance Q may be naively set to a constant values determined by a global measure of learning error as in <cit.>, this approach cannot take local model uncertainties into account, which typically arises due to inhomogenous data distributions. On the other hand, learning techniques that quantify local model uncertainty facilitate dynamically updating the process noise covariance Q_k based on the local confidence of the learned model at each step k. 
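For reference, one prediction/update cycle of the EKF described above can be written compactly as follows; this is a generic sketch of the structure of Alg. 1, not the authors' implementation.

```python
import numpy as np

def ekf_step(x_hat, P, u, y, f, F_jac, H, Q, R):
    """One EKF cycle for x_{k+1} = f(x_k, u_k) + w_k,  y_k = H x_k + v_k."""
    # prediction with the nonlinear model and its Jacobian
    x_prior = f(x_hat, u)
    F = F_jac(x_hat, u)
    P_prior = F @ P @ F.T + Q
    # update with the linear observation model
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_post = x_prior + K @ (y - H @ x_prior)
    P_post = (np.eye(len(x_post)) - K @ H) @ P_prior
    return x_post, P_post
```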
A prominent learning method that quantifies uncertainty is Gaussian Process regression introduced in the following. §.§ Gaussian Process Regression Gaussian process (GP) regression is a modern machine learning technique basing on the assumption that any finite number of evaluations {f(x^(1)) … f(x^(N))}, N∈ℕ_+ of an unknown function f:ℝ^ρ→ℝ, ρ∈ℕ, at inputs x∈ℝ^ρ follows a joint Gaussian distribution <cit.>. This distribution is defined using a prior mean function m:ℝ^ρ→ℝ and a covariance function k:ℝ^ρ×ℝ^ρ→ℝ yielding the compact notation GP(m(x),k(x,x')). The prior mean function is commonly used to incorporate prior such as an approximate model and is commonly set to 0, which we also assume in the following. The covariance function encodes more abstract information such as differentiability of the unknown function. Due to its infinite differentiability, the squared exponential kernel k(x,x')=σ_f^2exp(-∑_i=1^ρ(x_i-x_i')^2/2l_i^2), is commonly employed as covariance function, whose shape depends on the signal standard deviation σ_f∈ℝ_+ and the length scales l_i∈ℝ_+. Together with the target noise standard deviation σ_on∈ℝ_+, they form the hyperparameters θ=[ σ_f l_1 … l_ρ σ_on ]^⊺, which are commonly obtained by maximizing the log-likelihood <cit.>. Given the hyperparameters, the posterior distribution conditioned on training data can be calculated under the assumption of training targets perturbed by zero mean Gaussian noise with σ_on^2. It is straightforward to show that this posterior follows a Gaussian distribution with mean and variance μ(x) = y^T(K+σ_on^2I_N )^-1k(x) σ^2(x) =k(x,x)-k^T(x)(K+σ_on^2I_N )^-1k(x), where we define the elements of the kernel vector k(x)∈ℝ^N as k_i(x)=k(x^(i),x). § HUMAN TORQUE ESTIMATION USING GAUSSIAN PROCESSES In order to seamlessly integrate the GP regression into a model-based observer, we propose the filter architecture outlined in <Ref>. It consists of a component responsible for model inference using GPs, which is detailed in <Ref>. The learning component is used to enhance an EKF in <Ref> and guaranteed estimation error bounds for the proposed GP-enhanced observer are derived in <Ref>. §.§ Data Generation and Model Inference In order to account for the unknown dynamics component f(·) in (<ref>), it is necessary to augment the KF by a model inferred from data. While the use of GP models in a Bayes-filter framework by directly learning a state transition residual Δx is discussed in <cit.>, applying this approach to the SEA dynamics leads to a coupling of load-side inertia M(·) and external torques τ_ext. Due to the considered lack of force/torque sensors as stated in <Ref>, this coupling cannot be resolved and the inertia error would persist. To overcome this problem we learn the residual torque f(·) instead. To this end, we rearrange the motor-side dynamics (<ref>) and substitute the spring torque τ_s(·,·) into the load-side (<ref>) yielding the inverse dynamics expression f(z) = τ_h,act + τ_m - (τ_l(z) + J(θ_m)θ̈_m + D_m θ̇_m), where τ_l(z) M(q)q̈ + C(q,q̇). We make following assumption for τ_h,act during the data generation. The human operator is assumed passive during training, i.e., the actively generated torque τ_h,act=0. Similar assumptions are common in learning-augmented observer design <cit.> and are necessary to decompose discrepancies due to modelling errors and external disturbances, which is an inherently ill-posed problem <cit.>. 
Since the motor torque τ_m, load-side kinematics {q, q̇, q̈} and motor kinematics {θ_m, θ̇_m, θ̈_m} are measurable due to <Ref> and a model of the SEA dynamics is available , the right hand side of (<ref>) can directly be computed given <Ref>. Therefore, it remains to define a sampling rate 1/t, t∈ℝ_+ at which measurements of z are taken and values f(·) are computed, such that a training data set {(x^(k)=z(kt), y^(k)=f(kt)}_k=0^K is aggregated based on the measurements. Using the data set, we can update an independent GP for each target dimension of y^(k), i.e., for each i=1,…,n a GP is updated using a training pair (x^(k),y_i^(k)). In order to employ the GP models in the observer, their predictions and variances are concatenated into a vector μ(x)=[μ_1(x) ⋯ μ_n(x)]^T and σ^2(x)=[σ^2_1(x) ⋯ σ^2_n(x)]^T, respectively. §.§ Learning-based Augmented-State Torque Observer To enhance the prediction of the observer using the learned model, we deploy an augmented-state Kalman filter (AKF), where we define the augmented state vector x∈ℝ^5n as x = [ θ_m^⊺ θ_s^⊺ θ̇_m^⊺ θ̇_s^⊺ τ_h,act^⊺ ]^⊺. Rearranging the motor-side dynamics (<ref>) yields θ̈_m=J^-1(τ_m+τ_s(θ_s,θ̇_s)-D_m θ̇_m). Substituting (<ref>) in (<ref>) and applying θ_s=q-θ_m, we get θ̈_s=M^-1(τ_h,act - f(z) - C(x) -τ_s(θ_s,θ̇_s)) -J^-1(τ_m+τ_s(θ_s,θ̇_s)-D_m θ̇_m). Thus, based on (<ref>) and (<ref>), the system state is driven by the nonlinear dynamics ẋ=f_nom(x,u)-I_fM^-1f(z), where, omitting the dependencies on entries of x for improved readability, we define the nominal dynamics f_nom(x,u)= [ θ̇_m; θ̇_s; J^-1(τ_m+τ_s-D_m θ̇_m); M^-1(τ_h,act-C-τ_s)-J^-1(τ_m+τ_s-D_m θ̇_m); 0_n× 1 ], u=τ_m and I_μ = [0_n 0_n 0_n I_n 0_n]^⊺. Note that (<ref>) imposes zero-order dynamics for the active torque, i.e., τ̇_h,act=0, which is sufficient to model piece-wise constant or slowly varying behavior <cit.>. However, if more information is available, it is straightforward to include it by introducing a torque state ω with arbitrary dynamics ω̇=s(ω). =-1 To flexible adapt to the unknown dynamics, we exploit the learned model and substitute f(·) in (<ref>) with the concatenation of GP predictions μ(·). Thus, we write the nonlinear prediction model of the GP-enhanced AKF (GP-AKF) to =-1 f(x,u) = f_nom(x,u) - I_μM^-1μ(z). Since the prediction model (<ref>) is nonlinear, the Jacobian of f(·,·) is needed to apply the EKF algorithm. It is straightforward to compute the linearization for the nominal dynamics model f_nom(·,·) as demonstrated in <cit.>. For the Jacobian of the GP predictions we obtain .∂μ/∂x| _x̂ = (∂k(x)/∂x)^⊺(K+σ_on^2I_N )^-1y, which is easily computable for common kernel choices, such as the squared exponential kernel (<ref>), since ∂ k(x^(i),x)/∂ x^(i) = x-x^(i)/l^2 k(x^(i),x). Then, introducing the notation F_nom∂f_nom/∂x|_x̂ and F_GP∂μ̃/∂x|_x̂, we get the linearized GP-AKF dynamics F(x,u)=F_nom(x,u)-I_μM^-1F_GP(x,u), In order to additionally leverage the inherent uncertainty quantification provided by the GP, we incorporate the GP posterior variance σ^2(·) in the EKF process noise covariance Q(x) = Σ_nom+Σ(x), where Σ_nom denotes the process noise covariance of the nominal model and Σ(·) the covariance induced by the GP-model =-1 Σ(x)=I_μM^-1diag(σ^2(x))M^-⊺I^⊺_μ. Finally, we apply a time-discretization procedure, e.g., as described in <cit.>, to obtain the time-discrete, nonlinear GP-enhanced AKF. 
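A compact sketch of this data-generation and model-inference step for the one-degree-of-freedom case (scalar M, J, D_m) is given below; the nominal model terms, the noise level and the plain squared-exponential GP are assumptions standing in for whatever implementation is used in the experiments.

```python
import numpy as np

def residual_targets(tau_m, q, qd, qdd, thd, thdd, M, C_fun, J, D_m):
    """Training targets y = f(z) from measured signals, assuming tau_h,act = 0:
       f(z) = tau_m - (M qdd + C(q, qd) + J thdd + D_m thd)."""
    return tau_m - (M * qdd + C_fun(q, qd) + J * thdd + D_m * thd)

def se_kernel(A, B, sf=1.0, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ls**2)

def gp_fit_predict(Z, y, Z_test, sigma_on=0.05):
    """Posterior mean and variance of a zero-mean GP (one output dimension)."""
    K = se_kernel(Z, Z) + sigma_on**2 * np.eye(len(Z))
    ks = se_kernel(Z, Z_test)
    alpha = np.linalg.solve(K, y)
    mu = ks.T @ alpha
    var = se_kernel(Z_test, Z_test).diagonal() - np.sum(ks * np.linalg.solve(K, ks), 0)
    return mu, var

# inputs z = [q, qd, qdd]; one GP per load-side dimension is trained on (z, f(z)):
# Z = np.stack([q, qd, qdd], axis=1); mu, var = gp_fit_predict(Z, targets, Z_new)
```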
The resulting method is shown in <Ref>, where, with a slight abuse of notation, we denote the time-discretized system using the same symbols and time-step k. In comparison to the EKF in <Ref>, our proposed GP-AKF uses the learning-augmented model (<ref>) to predict the prior estimate x̂_k^- (Alg. <ref>, line 7) and exploits the uncertainty quantification of the GP in the covariance estimate (Alg. <ref>, line 8-10). Note that using the GP-augmented covariance Q_k (Alg. <ref>, line 9) to update the KF covariance P_k^- allows us to obtain an observer that explicitly considers local model uncertainty in the confidence measure of the state estimate. However, for the prediction of the prior covariance P_k^- (Alg. <ref>, line 10) the linearized model F_k is used to retain a Gaussian distribution. Thereby, an error is introduced and the estimated covariance does not represent the true covariance in general. Nevertheless, it is possible to derive guaranteed estimation error bounds for the proposed GP-AKF method, which we demonstrate in the following section. §.§ Guaranteed Error Bounds on the Estimated Torque In order to derive estimation error bounds for the learning-based observer, we integrate the confidence sets of the GP prediction into the KF framework. To this end, we adopt the elliptical set-membership approach <cit.>, where the system state x is expressed as a set of Gaussian distributions {x∼N(x+d,Σ) | d∈E(0,X)⊂ℝ^d}, with mean x and unknown but bounded perturbation d described by the positive semi-definite shape matrix X∈ℝ^d× d. Thus, the set (<ref>) is generated by a set of Gaussian distributions with covariance Σ, which are centered around a set of means X = E(x,X). Furthermore, the confidence set of a Gaussian distributed random variable ξ∼N(ξ̅,C) for a probability level δ∈(0,1) is defined by an ellipsoid E(ξ̅,sC) such that Pr(ξ∈E(ξ̅,sC))≥ 1-δ where the scalar s is determined by the chi-square distribution for the selected probability level <cit.>. Thus, based on (<ref>), an ellipsoidal confidence set C can be obtained for x C = E(x,X) ⊕E(0,sΣ), satisfying Pr(x∈C)≥ 1-δ. To derive a confidence set C for our GP-AKF, we utilize the fact that GP regression provides guaranteed error bounds on the regressed mean function =-1 Pr(|f_i(x)-μ_i(x) | ≤βσ_i(x), ∀x∈𝕏) ≥ 1-δ, where 𝕏⊂ℝ^n can be an arbitrary compact set and β∈ℝ_+ is a constant <cit.>. To integrate the GP error bounds in the KF set-membership approach, we first reformulate the interval bound (<ref>) to an ellipsoid. The interval bound |f_i(x)-μ_i(x) | ≤η for every η∈ℝ_+ is equivalent to the one-dimensional ellipsoid inclusion f_i(x)∈E(μ_i(x),η^2). This can be seen directly by squaring the interval bound and dividing by the squared error bound, resulting in (f_i(x)-μ_i(x)) η^-2 (f_i(x)-μ_i(x)) ≤ 1, which is exactly an ellipsoid E(μ_i(x),η^2) centered around μ_i(x) with shape parameter η^2, concluding the proof. When considering multi-dimensional prediction with n independent GPs for each target dimension, as in <Ref>, it is straightforward to enclose the n one-dimensional ellipsoids by one n-dimensional ellipsoid. To this end, we define the pair of mean vector μ_i = [0 … μ_i … 0]^⊺ and shape matrix X_i=diag([0 … η_i^2(κ,x) … 0]). Then taking the Minkowski sum of the n ellipsoids defined by the pairs {μ_i, X_i} yields an outer approximation E(μ,X) ⊇⊕_i=1^n E(μ_i,X_i), with shape matrix X that bounds the error of the GP prediction μ with probability 1-δ. 
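This GP-augmented prediction step can be sketched as follows: the learned residual is subtracted from the nominal model, its Jacobian corrects the linearization, and the GP variance inflates the process noise. The sketch glosses over the time-discretization step (F is treated as the already discretized transition matrix), assumes an isotropic squared-exponential kernel, and gp_eval is a hypothetical wrapper returning the stacked GP means, their Jacobian with respect to the filter state, and the variances.

```python
import numpy as np

def gp_mean_jacobian(x, X_train, alpha, sf=1.0, ls=1.0):
    """mu(x) and dmu/dx for an SE kernel; alpha = (K + sigma_on^2 I)^{-1} y."""
    diff = X_train - x                                   # N x d
    k = sf**2 * np.exp(-0.5 * np.sum(diff**2, axis=1) / ls**2)
    mu = k @ alpha
    dmu = (diff / ls**2 * k[:, None]).T @ alpha          # d(k_i)/dx = (x_i - x)/l^2 * k_i
    return mu, dmu

def gp_akf_predict(x_hat, P, u, f_nom, F_nom, I_mu, M_inv, gp_eval, Sigma_nom):
    """GP-AKF prior: f = f_nom - I_mu M^{-1} mu,  F = F_nom - I_mu M^{-1} F_GP,
       Q = Sigma_nom + I_mu M^{-1} diag(sigma^2) M^{-T} I_mu^T."""
    mu, F_gp, var = gp_eval(x_hat)        # residual mean, its state Jacobian, variance
    x_prior = f_nom(x_hat, u) - I_mu @ (M_inv @ mu)
    F = F_nom(x_hat, u) - I_mu @ M_inv @ F_gp
    Q = Sigma_nom + I_mu @ (M_inv @ np.diag(var) @ M_inv.T) @ I_mu.T
    P_prior = F @ P @ F.T + Q
    return x_prior, P_prior
```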
Hence, using the elliptical set-membership formalism (<ref>) together with the ellipsoidal GP error bound (<ref>), the error sources of the GP-AKF can be decomposed for each source separately to derive a novel error bound for the overall approach. The derivations of the estimation error bounds is presented in the following result. Consider a nonlinear system x_k+1 = f_k(x_k, u_k) + w_k + d_k, y_k = Hx_k+ v_k where f(·, ·) is a GP-enhanced prediction model, H is the linear measurement model, w_k∼N(0,Q_k) and v_k∼N(0,R_k) denote process and measurement noise and d_k represents the GP prediction error bound satisfying Pr(d_k ∈E(0,X), ∀x∈𝕏) ≥ 1-δ. Then, given a previous estimate x̂_k and a confidence set C_k= E(x̂_k,X̂_k) ⊕E(0,sP_k) satisfying x_k∈C_k, where X̂_k parameterizes the ellipsoidal set of means of the previous time step, applying <Ref> yields a posterior estimate x̂_k+1 with a confidence set C_k+1 containing the true state x_k+1 with probability Pr(x_k+1∈C_k+1)≥ 1-δ. The first-order approximation of f(·, ·) yields f_k(x_k,u_k)= f_k(x̂_k, u_k) + A_k(x_k-x̂_k) + B_ku_k, with linearization A_k∈ℝ^d× d and B_k∈ℝ^d× r, resulting in the linear state propagation for the EKF prediction x_k+1 = f_k(x_k,u_k) + w_k + d_k. Since the current confidence set x_k∈C_k is given, we can bound the linearization error ε_k+1=x_k+1 - x_k+1 to the set {f_k(x_k, u_k) - f_k(x_k,u_k) | x_k∈C_k}, which is independent of w_k and d_k since they cancel out. Following the derivations in <cit.>, the linearization error can bounded by an over-approximating ellipsoid denoted as such ε_k+1∈E(0,X_k^f), where the elements in X_k^f are computed by taking the maximum error in each dimension seperately <cit.>. Finally, we propagate the previous set of means E(x̂_k,X̂_k) using the linearized model (<ref>), which, due to ellipsoids permitting affine transformations <cit.>, yields the set of prior predictions E(x_k+1^-,X_k+1^-) = E(f_k(x̂_k, u_k) + B_ku_k, A_kX̂_kA_k^⊺) _linear propagation of E(x̂_k,X̂_k) ⊕E(0,X)_ GP prediction error⊕E(0,X_k^f),_linearization error where we bound the errors due to the GP prediction and linearization using (<ref>) and (<ref>). After computing the prior covariance P_k+1^- as described in <Ref>, inserting the observation model H yields the Kalman gain K _k+1=P _k+1^- H^⊺( HP _k+1^- H^⊺ +R_k+1)^-1 Thus, the filtering step is carried out on the prior prediction set E(x_k+1^-,X_k+1^-) to obtain a set of posterior means E(x̂_k+1,X̂_k+1)=(I-K_k+1H) E(x̂^-_k+1,X̂^-_k+1) + K _k+1y _k+1 Finally, with posterior covariance P_k+1=(I-K_k+1H)P _k+1^-, we obtain a confidence set C_k+1 = E(x̂_k+1,X̂_k+1) ⊕E(0,sP_k+1), which, given an appropriate choice of scalar s determined by the probability level δ, contains x_k+1 with probability Pr(x_k+1∈C_k+1)≥ 1-δ, concluding the proof. Intuitively, <Ref> propagates the confidence set C_k for the Gaussian distributed state x_k by linearly propagating the set of means E(x̂_k,X̂_k) using a first-order linearization and separately considering the maximum impact of neglected nonlinearities and GP prediction errors as additive and bounded perturbations. Since each term is represented by an ellipsoid, an outer-approximating ellipsoid can be obtained, which represents the propagated set of means. By additionally propagating the covariance matrix, stochastic uncertainties due to process and measurement noise are considered, thus, allowing us to obtain a probabilistic bound for the posterior estimate. 
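One concrete way to realize these ellipsoidal constructions is sketched below; the box-enclosing ellipsoid and the trace-minimal outer approximation of the Minkowski sum are standard choices made here for illustration and are not necessarily the ones used by the authors.

```python
import numpy as np

def gp_error_ellipsoid(beta, sigma):
    """Shape matrix X of an ellipsoid E(0, X) enclosing the box
    |f_i - mu_i| <= beta*sigma_i built from the n one-dimensional bounds."""
    eta2 = (beta * np.asarray(sigma))**2
    return len(eta2) * np.diag(eta2)      # the box lies inside this axis-aligned ellipsoid

def minkowski_outer(X1, X2, eps=1e-12):
    """Trace-minimal outer ellipsoid of E(0, X1) + E(0, X2):
       X = (1 + 1/p) X1 + (1 + p) X2 with p = sqrt(tr X1 / tr X2)."""
    p = np.sqrt(max(np.trace(X1), eps) / max(np.trace(X2), eps))
    return (1.0 + 1.0 / p) * X1 + (1.0 + p) * X2

def confidence_set(x_hat, X_means, P, s):
    """Outer ellipsoid for C = E(x_hat, X_means) + E(0, s P)."""
    return x_hat, minkowski_outer(X_means, s * P)
```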
Note that while we derive the estimation guarantees for a linear measurement model, it is straightforward to extend the procedure to nonlinear observation models by again applying a lineariztion and bounding the linearization error using over-approximating ellipsoid. The overall scheme including the decomposition of error sources and uncertainty propagation is illustrated in <Ref>. <Ref> demonstrates that our proposed method permits error bounds for the estimated augmented-state, which in our considered application scenario translates to error bounds on the estimated active human torque. Thus, the GP-AKF is well-suited for use in conjunction with down-stream component, e.g., cooperative controllers, where reliable estimates are essential to guarantee safety of the interaction. =-1 § EXPERIMENTAL EVALUATION For the evaluation of the proposed learning-based force observer, we perform a human-robot interaction experiment with a one DoF SEA elbow-exoskeleton introduced in <Ref>. First, we demonstrate that the model inference correctly learns the human passive dynamics in <Ref>. Then our proposed method is successfully applied to estimate dynamic torques exerted by a subject in <Ref>. §.§ Human-Exoskeleton Interaction Experimental Setup The experiments are executed on an one DoF version of a SEA-driven shoulder-elbow exoskeleton introduced in<cit.>. The actuation unit is equipped with a DC brushless motor, a 1:100 reduction stage and a custom torsional spring running with a sampling rate of 100Hz for data aggregation. The load side angle q and motor side angle θ_m are measured using two 19-bit absolute encoders and the motor torque τ_m is computed from the control input and torque constant. Angular velocity and acceleration are obtained by numerical differentiation of the encoder signals together with a moving average filter. All experiments are performed on an Intel i5-10500 with 3.1GHz running Matlab2021b and LabVIEW2018. =-1 In our setup, the exoskeleton is attached to the subject's forearm with a cuff, while the torso is strapped to a fixed frame. Thereby, the participant's shoulder joint is fixed in place, gravitational torques of the upper arm are removed and only elbow flexion/extension movements can be performed. Each experiment consists of an online training phase on the task followed by an estimation phase. During the training phase the subject is instructed to remain passive, i.e., not resist or support the robot by intentional muscle activation. To minimize inconvenient calibration time, we perform online learning of the unknown dynamics during the training phase. Additionally, due to the intrinsic compliance properties and torque limits of the SEA, it is guaranteed that the interaction forces applied to the human arm remain in safe regions. §.§ Accuracy with Human Passive Dynamics Online Learning First, we demonstrate online model inference of the human passive dynamics and the principle applicability of <Ref>. To this end, we perform training and estimation on the same trajectory and instruct the participant to remain passive during both stages. Thus, given an accurate model inference during training, the estimated human torque is exactly 0. In particular, we use sequences of sigmoid functions as reference trajectory to generate data due to their minimum-jerk profile, which are tracked by a PID position controller. Each sequence starts at an initial position 10 and moves to the target position 75 in 3.5s. 
Repeating this sequence forwards and backwards 6 times yields a total of N=2100 training samples. For online learning we use the locally growing random tree of GPs (LoG-GP) approach <cit.> with a maximum number N̅=100 training samples per local model and the gradient-based method from <cit.> to optimize the hyperparameters of the GPs online during training. For our GP-AKF method, we determine the measurement noise matrix using the autocovariance least-squares method <cit.> to R= diag(0.0461, 3.6·10^-6, 0.1288, 1.01·10^-5), while we set the nominal process noise to Σ_nom= diag(10^-1, 10^3, 10^2, 10^-9, 10^-2). The initial state x̂_0 is measured and accordingly the initial covariance P_0 is set to the measurement noise covariance R. We compare our proposed GP-enhanced torque observer to a standard augmented-state KF (AKF) <cit.> and to the spring torque τ_s(·,·) (<ref>), which is frequently used as a measure of the interaction torques <cit.>. The estimated active human torques τ_h,act for all methods are depicted in <Ref>. Furthermore, a quantitative comparison of the estimation accuracy is provided in <Ref>, where the achieved root mean square error (RMSE) is shown. It is clearly visible that both the AKF method and spring torque τ_s(·,·) fail to provide an accurate estimate of the active human torque. For the AKF this is primarily due to static friction effects that are not accounted for by the nominal dynamics model, thus, inducing large jumps in the active torque at time steps coinciding with turning points of the trajectory. While the spring torque is not effected by this friction, it still produces inaccurate results due to the non-negligible load-side dynamics. Finally, our proposed approach is able to recover an accurate estimate of the actively applied human torque during estimation, which implies that a correct model of the human passive dynamics was learned during the online training phase. Moreover, it can be seen that the uncertainty of the GP-AKFs shrinks quickly from the conservative initial value to a tight region around the estimate and indicates high confidence in the learned dynamics. The accurate estimation implies that the online model inference from <ref> not only successfully learns the passive human model but even additional unmodelled effects such as friction. RMSE for the estimated active human torque τ̂_h,act for a passively acting subject. GP-AKF AKF spring torque RMSE [Nm] 0.067 4.175 1.359 §.§ Estimation of Dynamic Human Torques To demonstrate the accurate estimation of dynamic torques exerted by the subject, we design an experiment where active and passive human torques can be decomposed such that the ground truth is available. In particular, the task is performed in a static position, e.g., q_0=10, while the exoskeleton is in open loop torque-control. The motor-torque is set to τ_m = τ_des + τ_comp, with τ_comp configured to compensate the static gravity and friction torques of the human-exoskeleton system and τ_des represents the desired active torque. During training stage τ_des=0, thus, no joint movement is induced, since τ_comp compensates for all load-side dynamics. However, during the estimation phase, a torque profile τ_des = (-2·cos(2π f· t) + 2 ), with frequency f=0.1Hz is set. Since τ_comp already compensates the the load-side gravity and friction, τ_des perturbs the arm from the initial position q_0=10. The subject is instructed to resist the perturbation and hold the arm in the static initial position. 
Accordingly, the active human torque then becomes τ_h,act=-τ_des. The training stage is carried out online for a duration of six seconds, yielding N=600 training samples. The results of the described experiment are shown in <Ref> and <Ref>. We observe that our GP-AKF method achieves high accuracy in estimating the torque with which the human resists the perturbation. On the other hand, the AKF method (blue) does not recover the true active torque τ_h,act. Due to the unmodelled passive dynamics, the AFK wrongfully estimates that the subject actively generates torques to compensate for the full motor torque τ_m instead of just τ_des, resulting in the offset visible in <Ref>. In comparison to the observer approaches, the spring torque does not follow the periodic profile of the true human torque in <Ref>. This is due to nonlinear saturation effects in the spring deformation, thus, leading to poor estimates of interaction forces using the available spring torque model. Hence, only our proposed GP-AKF approach is able to provide accurate estimates of the actively exerted human torques by inferring a correct model of the human-exoskeleton dynamics online. =-1 § CONCLUSIONS In this paper, we propose a novel approach for estimating the actively applied human torque during interaction with an elastic joint robot by combining GP regression with and augmented-state KF. Through the uncertainty quantification of the GP, we obtain a confidence measure of the state estimate that considers local model uncertainty and additionally derive guaranteed estimation error bounds for the GP-BayesFilter. Finally, the effectiveness of the approach is demonstrated in human-exoskeleton experiments. IEEEtran
http://arxiv.org/abs/2405.10101v1
20240516135527
Global analysis of the $U(3)^5$ symmetric SMEFT
[ "Riccardo Bartocci" ]
hep-ph
[ "hep-ph", "hep-th" ]
PRISMA+ Cluster of Excellence & Mainz Institute for Theoretical Physics, Johannes Gutenberg University, D-55099 Mainz, Germany Global analysis of the U(3)^5 symmetric SMEFT Riccardo Bartocci May 20, 2024 ============================================= The U(3)^5 symmetry within the SMEFT framework restricts the dimension-six operator basis to fully flavor-conserving operators. This proceeding presents a global analysis of the SMEFT under this assumption. We provide global constraints on all 41 Wilson coefficients, utilizing leading-order and next-to-leading-order SMEFT predictions for various experiments including parity-violating experiments, Electroweak Precision Observables (EWPO), Higgs physics, top quark interactions, flavor observables, dijet production, and lepton scattering. We address issues concerning the constraints on specific four-quark operators, investigate correlations between observables at different energy scales, and assess the impact of next-to-leading-order contributions on the global fit. § INTRODUCTION While the Standard Model of particle physics has proven effective in describing particle interactions observed at colliders, it falls short in explaining various phenomena. Therefore, in the absence of direct evidence of New Physics (NP), the Standard Model Effective Field Theory (SMEFT) emerges as a valuable tool for rigorously parameterizing the effects of NP within the energy scale accessible by current experiments. Within SMEFT, the dimension-four Standard Model Lagrangian is extended by including higher-dimensional operators involving only Standard Model fields and preserving the symmetries of the Standard Model. Conducting global fits of all dimension-six SMEFT operators is impractical due to the large number of independent Wilson coefficients. However, exploiting flavour symmetry allows for the reduction of independent parameters at the high scale. Given that flavor-violating observables already impose significant constraints on flavor-violating coefficients, it is reasonable to suppress them through the assumption of minimal flavor violation (MFV), where only the Yukawa couplings serve as sources of U(3)^5 breaking. In this study, we consider an exact U(3)^5 flavor symmetry for dimension-six operators at high scales. With this assumption, combined with CP-symmetry, the number of independent operators is reduced from 2499 to 41. Our analysis utilizes data from various experiments including electroweak precision observables (EWPO), low-energy parity violation experiments, Higgs physics, top quark interactions, flavor observables, Drell-Yan (DY), and dijet production. Through these datasets, we perform a working global fit without any remaining flat directions. Since certain coefficients are poorly constrained at leading order (LO), we also incorporate next-to-leading order (NLO) contributions to the observables, highlighting the impact of these loop corrections on the fit. § SMEFT AND FLAVOUR SYMMETRY The SMEFT Lagrangian, at order 1/Λ^2, is expressed as ℒ_SMEFT=ℒ_SM+∑_i C_i/Λ^2 Q_i, where C_i are the Wilson coefficients of the dimension six operators Q_i and Λ=4 TeV (in this work) denotes the heavy scale associated with NP. The SMEFT theory predictions are considered only at linear order in the Wilson coefficients.
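As a schematic illustration of what "linear order in the Wilson coefficients" means in practice, a prediction for an observable takes the form O = O_SM + Σ_i a_i C_i/Λ². The short Python sketch below shows this truncation; the SM value O_SM and the linear sensitivities a_i are purely hypothetical placeholders (in the actual analysis they come from the (N)LO SMEFT predictions for each dataset), while Λ = 4 TeV matches the choice made in this work.

```python
import numpy as np

# Minimal sketch (not from the paper): an observable truncated at linear order,
# O(C) = O_SM + sum_i a_i * C_i / Lambda^2.  All numbers are illustrative.

Lambda_TeV = 4.0                          # heavy NP scale (4 TeV in this work)
C = np.array([0.8, -1.5, 0.2])            # example Wilson coefficients at the high scale
a = np.array([0.012, -0.005, 0.030])      # hypothetical linear sensitivities per TeV^-2

O_SM = 1.000                              # hypothetical SM prediction
O_smeft = O_SM + a @ (C / Lambda_TeV**2)  # linear order only; 1/Lambda^4 terms dropped
```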
Quadratic SMEFT contributions are suppressed by 1/Λ^4 and therefore as power suppressed as the dimension-eight linear contributions. The SMEFT serves as a tool for model independent studies of NP and a priori all the flavour structures are allowed. However, beyond the SM flavour violation is already severely constrained, therefore symmetries for the flavour sector can be assumed to reduce the amount of BSM flavour violation. We consider a complete flavour symmetry, namely U(3)^5, of the dimension six SMEFT operators, given by U(3)^5=U(3)_ℓ× U(3)_q× U(3)_e× U(3)_u× U(3)_d, where {ℓ, q, e, u, d} are the SM fermions. This assumption is called the minimal MFV because it contains the minimum amount of U(3)^5 breaking which comes from the Yukawa couplings of the dimension four Lagrangian. Once this assumption is imposed, there are 41 independent CP-conserving SMEFT coefficients <cit.>. Even if the dimension six operators are perfectly flavour symmetric, flavour violating observables are still generated via the renormalisation group flow: since the Yukawa matrices break the symmetry, the flavour violating dimension six operators can be produced at low scales, but their coefficients will depend only on flavour conserving high scale Wilson coefficients. § LEADING ORDER DATASETS AND DIJETS PRODUCTION The global fit includes the following datasets: EWPO <cit.>, Higgs as in  <cit.>, top <cit.>, low-energy parity violation experiments (PVE) and lepton scattering <cit.>, flavour observables <cit.>, Drell-Yan production <cit.> and dijet+photon production <cit.>. How the different operators appear in the various datasets can be seen in Figure <ref>. §.§ Theory predictions for dijet+photon production Some four-quark operators are particularly hard to constrain: top and flavour cannot set bounds on them. Therefore, an additional observable is needed and a natural candidate is dijets production at the LHC. However, due to the triggers for jets at the LHC, only very high energy data are available, in particular above the multi-TeV invariant masses. At so high energies, the terms quadratic in the SMEFT coefficients become bigger than the linear ones and this brings inconsistencies in the treatment of these quadratic contributions as theoretical uncertainty. To overcome the issue, we considered the production of two jets associated with a photon <cit.>. This slightly different process enables us to have acces to lower dijet invariant-mass, in particular m_jj< 1.1 TeV. Through Madgraph simulations and SMEFTsim <cit.>, we have evaluated the dimension six linear and quadratic SMEFT predictions for the differential cross section of this process, showing that the quadratic contributions are kept under control in this energy range. § GLOBAL ANALYSIS RESULTS AND COMPARISON BETWEEN LO AND NLO In Figure <ref>, we present the results of both the leading order (LO) and next-to-leading order (NLO) fits. Upon considering NLO contributions, significant improvements are observed in the bounds for C_qd^(1), C_qu^(1), and C_ud^(1). Initially, these three operators are constrained solely by dijet data at LO. However, with the inclusion of NLO effects, Higgs, Top, and EWPO datasets also contribute to setting bounds on these coefficients. Consequently, we observe an improvement of approximately two orders of magnitude compared to the LO fit. 
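To make the mechanics of such bounds concrete, the sketch below shows a generic Gaussian global fit at linear order: predictions linear in the coefficients, a χ² built from an experimental covariance, and individual 95% CL intervals from the resulting coefficient covariance. This is only an assumed, simplified setup for illustration, not the statistical machinery actually used in the analysis; the sensitivities, SM predictions, and uncertainties are random placeholders.

```python
import numpy as np

# Generic Gaussian fit at linear order (illustrative sketch):
# O_th = O_sm + A @ c, chi^2 = (O_exp - O_th)^T V^{-1} (O_exp - O_th).

rng = np.random.default_rng(0)
n_obs, n_coeff = 20, 5

A = rng.normal(size=(n_obs, n_coeff))            # hypothetical linear sensitivities
O_sm = rng.normal(size=n_obs)                    # hypothetical SM predictions
V = np.diag(rng.uniform(0.05, 0.2, n_obs) ** 2)  # experimental covariance (diagonal here)
O_exp = O_sm + rng.multivariate_normal(np.zeros(n_obs), V)   # pseudo-data

Vinv = np.linalg.inv(V)
F = A.T @ Vinv @ A                       # Fisher matrix of the coefficients
cov_c = np.linalg.inv(F)                 # covariance of the fitted coefficients
c_hat = cov_c @ A.T @ Vinv @ (O_exp - O_sm)      # best-fit coefficients

bounds_95 = 1.96 * np.sqrt(np.diag(cov_c))       # individual 95% CL half-widths (Gaussian)
```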
Despite the expectation of increased correlations among Wilson coefficients at NLO, leading to potential impacts on previously well-constrained operators at LO, we find that the bounds on other coefficients remain relatively stable in the NLO fit. Notably, the bounds on the ten LO EWPO operators exhibit remarkable stability. In the NLO fit, all bounds lie below |C|/Λ^2 < 10/TeV^2 with 95% confidence level, with the exception of the four-quark operators C_dd and C_dd^'. Among the EW operators, the only one notably affected is C_Hq^(1), whose bound weakens by a factor of 2 due to correlations with certain four-quark operators, notably with C_qq^(1) and C_uu, as detailed in <cit.>. In conclusion, all 41 Wilson coefficients of the U(3)^5 symmetric SMEFT are compatible with the SM within 2σ in our global fit. § ACKNOWLEDGMENTS I would like to express my gratitude to my collaborators, Anke Biekötter and Tobias Hurth, for their fruitful teamwork on this project. Many thanks also go to the organizers of Moriond 2024 EW for their excellent management of the conference. This work is supported by the Cluster of Excellence “Precision Physics, Fundamental Interactions, and Structure of Matter" (PRISMA^+ EXC 2118/1) funded by the German Research Foundation (DFG) within the German Excellence Strategy. § REFERENCES
http://arxiv.org/abs/2405.08765v1
20240514165837
Image to Pseudo-Episode: Boosting Few-Shot Segmentation by Unlabeled Data
[ "Jie Zhang", "Yuhan Li", "Yude Wang", "Stephen Lin", "Shiguang Shan" ]
cs.CV
[ "cs.CV" ]
inst1,inst2,inst4]Jie Zhang [mycorrespondingauthor]Corresponding author zhangjie@ict.ac.cn inst1,inst2]Yuhan Li yuhan.li@vipl.ict.ac.cn inst1,inst2]Yude Wang yude.wang@vipl.ict.ac.cn inst3]Stephen Lin stevelin@microsoft.com inst1,inst2]Shiguang Shanmycorrespondingauthor sgshan@ict.ac.cn [inst1]Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing, 100190, China [inst2]University of Chinese Academy of Sciences, Beijing, 100049, China [inst3]Microsoft Research Asia, Beijing, 100080, China [inst4]Institute of Intelligent Computing Technology, CAS, Suzhou, 215124, China Few-shot segmentation (FSS) aims to train a model which can segment the object from novel classes with a few labeled samples. The insufficient generalization ability of models leads to unsatisfactory performance when the models lack enough labeled data from the novel classes. Considering that there are abundant unlabeled data available, it is promising to improve the generalization ability by exploiting these various data. For leveraging unlabeled data, we propose a novel method, named Image to Pseudo-Episode (IPE), to generate pseudo-episodes from unlabeled data. Specifically, our method contains two modules, i.e., the pseudo-label generation module and the episode generation module. The former module generates pseudo-labels from unlabeled images by the spectral clustering algorithm, and the latter module generates pseudo-episodes from pseudo-labeled images by data augmentation methods. Extensive experiments on PASCAL-5^i and COCO-20^i demonstrate that our method achieves the state-of-the-art performance for FSS. Few-shot Segmentation, Few-shot Learning, Unlabeled Data § INTRODUCTION Deep learning has made tremendous progress in recent years with various neural networks performing well on many computer vision tasks, such as classification <cit.>, object detection <cit.>, and semantic segmentation <cit.>. Although neural networks are effective under normal circumstances, their demands on large amounts of annotated data limit their application, as data labeling is labor-intensive, especially for dense prediction tasks like semantic segmentation. To ease the dependence on large annotated data, there exist several approaches: 1) unsupervised pre-training <cit.> aims to provide a good initialization for downstream tasks while getting rid of the reliance on annotation during pre-training with unlabeled data; 2) weakly supervised <cit.> segmentation attempts to tackle the segmentation problem with weakly annotated data; 3) few-shot segmentation (FSS) emphasizes on how to effectively transfer the features learned from base classes to the novel classes for obtaining a good segmentation model with a few labeled data of novel classes. In this paper, we mainly focus on few-shot segmentation. Mainstream FSS methods <cit.> have an encoder-decoder structure as illustrated in Figure <ref>. The weight-shared encoder, which is usually a convolutional neural network (e.g., VGG <cit.> and ResNet <cit.>), extracts the features from the query image as well as the support images. Then these features are fed to the decoder together with the support masks. Finally, the decoder mines the correlation between the query and support data to predict the segmentation mask of the query image. Since a few data of novel classes are available in the FSS problem, a good initialization of the encoder is crucial. 
This is the reason why almost all FSS methods <cit.> require an ImageNet <cit.> pre-trained encoder; otherwise, they cannot achieve good performance. Besides, recent research has found that limited base data impairs the feature representation learning and usually causes serious over-fitting problems, so many methods turn to freezing the encoder, which can preserve the diversity of features as much as possible. But, unlike the encoder, the decoder only receives information from base data, which seriously restrains the generalization ability of the whole model. Currently, existing works mainly focus on modifying the structure of the decoder, which does not fundamentally solve the over-fitting problem. We believe that introducing more information from extra data is the key to addressing this issue. Since the cost of collecting a large number of finely labeled images is high, leveraging unlabeled data to boost the generalization ability of the model is an economical and promising solution. Under the FSS setting, the classes of data are divided into base classes and novel classes without overlap. The data from base classes are utilized for training the model and the data from novel classes are employed for testing. The process of training and test are both episodical <cit.>, which means a few annotated support images and a query image are fed into the model in pairs to generate the prediction for the query image. As shown in Figure <ref>, the set of query data and support data is called an `episode', and all data in an episode belong to the same class. The episodical setting implies that FSS models need to find similar regions between support and query images. Since the affinity map which consists of the cosine-similarity between each pixel pair of the feature map naturally contains abundant pixel-wise similarity information, exploring the affinity map of extra unlabeled images could be a feasible approach for gaining extra supervision. Inspired by this idea, we propose a novel method, named Image to Pseudo-Episode (IPE), to leverage large unlabeled data to improve the performance of FSS methods. As shown in Figure <ref>, our method consists of two modules, the Pseudo-label Generation Module (PGM) and the Episode Generation Module (EGM). The PGM utilizes the spectral clustering algorithm <cit.> and dense conditional random field algorithm <cit.> to generate pseudo-labels from the affinity map of unlabeled images. And the EGM utilizes data augmentation methods to generate episodes from pseudo-labels and images, which is inspired by contrastive learning. Since the usage of these extra episodes is consistent with the original episodes, the IPE is suitable for most FSS models without modifying their structure. Through the above process, IPE successfully brings rich information from abundant unlabeled data to improve FSS models. Recent studies <cit.> have demonstrated that unsupervisedly pre-trained models can generate relatively complete features and have achieved better performance than supervisedly pre-trained models on many downstream tasks (e.g., image recognition, object detection and semantic segmentation). But the performance of FSS models degenerates if we simply replace the encoder initialization from the model supervisedly pre-trained on ImageNet  <cit.> to the unsupervised one, as shown in Table <ref> and <ref>. 
This phenomenon suggests that the supervisedly pre-trained model on ImageNet plays an important role in existing FSS methods, which is probably because of the overlap between novel classes and classes in ImageNet. In view of the above, we believe that tackling the FSS problem with the unsupervisedly pre-trained model is more reasonable, leading us to push the frontier of the area by leveraging unlabeled data to boost Few-Shot segmentation. Extensive experiments on PASCAL-5^i <cit.> and COCO-20^i <cit.> demonstrate that our method boosts the performance of mainstream FSS models (i.e., PFENet <cit.>, BAM <cit.>) under both the supervisedly and unsupervisedly pre-trained settings. The performance of PFENet combined with our method under the unsupervisedly pre-trained setting even exceeds the original PFENet with supervised initialization. All these experiments demonstrate the effectiveness of our method from different aspects. § RELATED WORK §.§ Semantic segmentation Semantic segmentation is a fundamental computer vision task, whose goal is to recognize the category of each pixel of the input image. Prevalent segmentation methods <cit.> typically contain an encoder-decoder structure, where the encoder extracts the feature maps and the decoder builds the pixel-wise prediction by these maps. This architecture is derived from FCN <cit.>, which proposes the 1x1 Conv for pixel-wise classification. After FCN, Chen et al. <cit.> propose the dilated convolution and Zhao et al. <cit.> propose the pyramid pooling. Both of them enlarge the receptive field of the model and achieve significant improvements for segmentation. Although current semantic segmentation methods are effective, they have the same defect: a large amount of well-annotated data is required to build a reliable model and it is hard to adapt to tackle novel classes with only a few samples. §.§ Few-shot Segmentation Few-shot learning aims at using abundant base samples to train a model that can achieve good performance on novel classes with very limited data. Early attempts focus on few-shot image classification, and many classic methods <cit.> are proposed to tackle this challenging task. As research progressed, researchers began to consider addressing semantic segmentation tasks under the few-shot setting and were deeply inspired by the few-shot classification methods. OSLSM <cit.>, the first work on the Few-Shot Segmentation (FSS) problem, follows the episode setting of  <cit.> and slightly modifies it to fit the FSS problem. Also in OSLSM, the two-branch network which consists of a support branch and a query branch is proposed to extract features from several support images and one query image respectively. Then these features are merged to generate the prediction of the query image. After OSLSM, FSS has become a popular emerging research direction, then many excellent methods have been proposed one after another. Inspired by <cit.>, PL <cit.> proposes the prototypical network and utilizes global average pooling (GAP) <cit.> to generate prototypes from support images, where these prototypes support the model to predict the mask of the query image. SG-One <cit.> argues that generating a prototype by masked average pooling is better than GAP since it relieves the influences from background noises and forces the model to focus on learning object features. Currently, the two-branch architecture, prototype architecture and masked average pooling have become the most popular designs in tackling FSS problem. 
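Since masked average pooling (MAP) recurs throughout this line of work, a short framework-agnostic sketch may help; it assumes the support mask has already been resized to the spatial resolution of the feature map, and the function names are ours.

```python
import numpy as np

# Sketch of masked average pooling (MAP): pool support features only over
# foreground pixels to build a class prototype, in contrast to plain GAP.

def masked_average_pooling(features, mask, eps=1e-6):
    """features: (C, H, W) support feature map; mask: (H, W) binary foreground
    mask, assumed already resized to the feature resolution."""
    mask = mask.astype(features.dtype)
    prototype = (features * mask[None]).sum(axis=(1, 2)) / (mask.sum() + eps)
    return prototype                      # (C,) foreground prototype vector

def global_average_pooling(features):
    """Plain GAP over all spatial positions, for comparison."""
    return features.mean(axis=(1, 2))
```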
DGPNet <cit.> proposes a novel method utilizing dense gaussian process regression to tackle FSS problem, and achieves the higher accuracy. SVF <cit.> introduces a novel method for fine-tuning the backbone by selectively adjusting only the singular values in its weights. This approach stands in contrast to the common practice in most FSS methods, where the backbone is frozen to mitigate over-fitting. With the development of Vision Transformers, FSS methods based on the Transformer architecture <cit.> have also emerged. FPTrans <cit.> stands out as one representative work. It points out that simply replacing the CNN-based encoder with a Transformer does not lead to accuracy improvement for FSS. To address this, they propose an effective FSS method that fully utilizes the Transformer structure. Besides DCAMA <cit.> proposes using cross-query-and-support attention to weight the pixel-wise additive aggregation of support mask values for predicting the query mask, which is also implemented in the form of Transformer. The effectiveness of these methods shows that the Transformer is also a promising structure to tackle the FSS problem. Recently, Tian et al. <cit.> propose PFENet that utilizes training-free prior masks to enhance its generalization and design a feature enrichment module (FEM) for overcoming spatial inconsistency of objects. Due to the excellent design of FEM, several subsequent works have inherited this design or used it as a baseline, such as ASGNet <cit.> and SCL <cit.>. BAM <cit.> applies an additional base learner to recognize the base classes from input images to guide the prediction of novel classes, and achieves the sota results at that time. we adopt PFENet and BAM as our baselines. §.§ Self-supervised pre-training Self-supervised learning <cit.>, which aims at getting rid of the dependence on a large amount of labeled data, has achieved great success in several tasks (e.g., image recognition, object detection and semantic segmentation). Contrastive learning, an important and popular branch of self-supervised learning, learns similar representations of positive pairs and dissimilar representations of negative pairs, where the positive pair is formed from the same image through different augmentation pipelines and the negative pairs are formed from different images. To better adapt to downstream dense prediction tasks, several works <cit.> modify the generation process of positive/negative pairs or the contrastive loss. DenseCL <cit.> designs an effective contrastive learning method that directly works at the pixel level and achieves amazing performance on the downstream semantic segmentation task. § PROPOSED APPROACH §.§ Overview FSS aims at training a segmentation model on abundant annotated data, which can segment the object from novel classes C_N with only a few labeled samples. The "novel" means that these classes are not included in the base classes C_B that are usually used for training. In other words, C_B ∩C_N = ∅. Formally, we need to design a model ℱ that generates the pixel-wise prediction P of each class from input data X, , P = ℱ(X). Our goal is to optimize the parameters θ_ℱ of ℱ that can properly distinguish the classes for each pixel during the test. 
In segmentation tasks, the cross-entropy loss is usually employed as the loss function, which is defined as ℒ=-1/N·|Ψ|∑_i=1^N∑_j ∈Ψy_ijlog(p_ij), where Ψ is the set of all pixels, N is the number of all classes, p_ij is the probability of pixel j belonging to class i predicted by model ℱ and y_ij denotes the groundtruth of the pixel-level segmentation mask. If the pixel j actually belongs to the class i, y_ij will be 1; otherwise it will be 0. Most FSS methods follow the 1-way setting and the episode paradigm which are proposed in OSLSM <cit.>. The 1-way setting means there is only one target class to be predicted, which constrains N in Equation  <ref> to be equal to 2 (the other class is the background). The episode paradigm is shown in Figure <ref>, where each episode consists of two sets, , a query set Q and a support set S. The query set Q is a pair of the query image I_q and the corresponding groundtruth mask M_q, while the support set S consists of K support images {I_s^1, I_s^2,…, I_s^K} and the corresponding groundtruth masks {M_s^1, M_s^2,…, M_s^K} under the K-shot scenario. Then, the FSS model ℱ aims to predict the segmentation mask of the query image I_q by taking { I_s^1, I_s^2,…, I_s^K, M_s^1, M_s^2,…, M_s^K , I_q } as input. Most FSS methods have an encoder-decoder structure. The encoder extracts features from I_q and { I_s^1, I_s^2,…, I_s^K}, and the decoder utilizes these features together with { M_s^1, M_s^2,…, M_s^K} to infer the segmentation mask of I_q: ℱ(X) ≜𝒟(ℰ(I_q, I_s^1, I_s^2,…, I_s^K ), M_s^1, M_s^2,…, M_s^K ), where 𝒟 refers to the decoder network, ℰ refers to the encoder network which is usually initialized by a supervisedly pre-trained model on ImageNet <cit.>. Since most recent FSS methods freeze the encoder, each model mainly differs in the decoder. We focus on improving the decoder 𝒟 by leveraging the large-scale unlabeled data. Moreover, we explore the potential of employing an unsupervisedly pre-trained model to initialize the encoder ℰ, which we believe is a more reasonable setting for few shot segmentation tasks. Detailed results and discussions can be found in Section <ref>. §.§ Unlabeled Image to Pseudo-Episode For leveraging extra unlabeled data to boost the generalization ability of the decoder, one roadmap is to modify the structure of the decoder to fit with extra unlabeled data, which is difficult to achieve and also lacks scalability to other models. The other is to exploit pseudo-labels of the unlabeled data to extend episodical inputs. We resort to the latter roadmap by proposing Image to Pseudo-Episode (IPE) module 𝒫. Specifically, the module 𝒫 consists of two modules, , the Pseudo-label Generation Module (PGM) 𝒫_1 and Episode Generation Module (EGM) 𝒫_2, where the first module is for generating pseudo-labels from unlabeled data, and the second module is for generating pseudo-episodes from pseudo-labeled data: {I_qe,M_qe,I_se^1,I_se^2,…,I_se^K,M_se^1,M_se^2,…,M_se^k} = 𝒫_2(I_u, 𝒫_1(I_u)), where I_u is one unlabeled image; the left part of Equation <ref> denotes the input data batch for K-shot episode training. I_se and M_se refer to the support image and the corresponding mask, respectively. I_qe and M_qe refer to the query image and the corresponding mask, respectively. §.§ Pseudo-label Generation Module In this section, we introduce the Pseudo-label Generation Module (PGM) in detail, which aims to generate pseudo-labels from unlabeled data. 
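Before turning to the details of the PGM, the pixel-wise loss defined earlier in this section can be written compactly for the 1-way setting (N = 2 classes, foreground and background). The sketch below is illustrative only; the function name and the small numerical stabilizer are ours.

```python
import numpy as np

# Sketch of the pixel-wise cross-entropy loss
# L = -1/(N*|Psi|) * sum_i sum_{j in Psi} y_ij * log(p_ij)  with N = 2.

def fss_cross_entropy(probs, target, eps=1e-8):
    """probs: (2, H, W) softmax output; target: (H, W) ground-truth mask in {0, 1}."""
    n_classes, h, w = probs.shape
    onehot = np.stack([(target == c).astype(np.float64) for c in range(n_classes)])
    return float(-(onehot * np.log(probs + eps)).sum() / (n_classes * h * w))
```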
As illustrated in Figure <ref> and Algorithm <ref>, the PGM firstly utilizes a feature extractor ℋ, which is pre-trained with unlabeled data (ImageNet-1K), to generate high-level feature F_u from an unlabeled image I_u: F_u = ℋ(I_u), where F_u∈ℝ^c× h× w, c is the number of channels, h and w is the height and width of the feature map. Then, we reshape the feature F_u to F_u^'∈ℝ^c× hw, and calculate affinity map A_u ∈ℝ^hw× hw by the cosine-similarity between each pixel-pair in the feature map F_u^': A_u^(i, j) = cos(f_i, f_j), i,j∈1, 2, …, hw, cos(f_i, f_j)=f_i^⊤· f_j/f_i·f_j, i,j∈1, 2, …, hw, where f_i and f_j are the feature vectors corresponding to pixels i,j, respectively. To find regions with high internal similarity, we perform spectral clustering <cit.> on the affinity map A_u. Then we choose the clustering result C_best with the highest Calinski-Harabasz score <cit.> from {C_m, …, C_n}, where the C_m and C_n are the clustering result with the minimal and maximum number of clusters, respectively. The Calinski-Harabasz score is defined as S_CH=tr(B_k)/tr(W_k)×n_ψ-k/k-1, where n_ψ is the number of all points, k is the total number of clusters, tr(*) represents the trace of a matrix, B_k is the between-class scatter matrix and W_k is the within-class scatter matrix. B_k and W_k are defined as below: W_k=∑_q=1^k∑_x ∈ C_q(x-c_q)(x-c_q)^⊤, B_k=∑_q=1^k n_q(c_q-c_ψ)(c_q-c_ψ)^⊤, where k is the total number of clusters, c_q is the center of the q-th cluster, C_q is the set of samples in the q-th cluster, n_q is the number of samples in set C_q, and c_ψ is the center of all samples. Then, we perform the dense-CRF on I_u with the C_best as the initial probability map to refine the boundary of each cluster. After that, we select the top-t clusters with the smallest average distance to the center of the image I_u as the pseudo-labels. The average distance is calculated as D_q = ∑_x ∈ qx - c_2/n_q, where n_q is the number of samples in the q-th cluster, and c is the center of the image I_u. The t pseudo-labels {L_1, …, L_t} generated by PGM will be sent to EGM, and all the results will be used as extra inputs for training. The reason why we choose multiple clusters lies in two aspects: 1) there may be multiple different foreground objects in a single image; 2) backgrounds with high internal similarity still help the FSS model learn how to find similar regions. §.§ Episode Generation Module After obtaining pseudo-labels for the unlabeled data, we further design Episode Generation Module (EGM) to generate extra episodes for training. As illustrated in Figure <ref>, we conduct image transformations to generate different views of the input image and the corresponding mask. Specifically, we conduct a set 𝒯, which consists of several image transformations. Under the K-shot setting, we randomly draw K + 1 subset 𝒯_1,…,𝒯_K + 1 from 𝒯 to generate extra episodes as below: {I_i, M_i} = 𝒯_i(I_u, L_u), i∈{1,…,K + 1}, where L_u denotes pseudo-label of unlabeled image I_u we generated from PGM. {I_i, M_i} represents the i-th transformed image and its label. We randomly choose one pair as the extra query set Q and the other K pairs as the extra support set S. § EXPERIMENTS To verify the effectiveness of our method, we conduct experiments on two popular datasets, , PASCAL-5^i <cit.> and COCO-20^i <cit.>. Since our method is model-agnostic, we choose two typical FSS methods, , PFENet <cit.> and BAM <cit.>, as the baseline model to combine with our IPE. 
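For concreteness, the clustering core of the PGM described above can be sketched as follows. This is a simplified, assumed implementation: the dense-CRF refinement and the distance-to-center selection of the top-t clusters are omitted, the feature map is a random placeholder standing in for the output of the unsupervisedly pre-trained extractor (a 672×672 input to a stride-32 backbone gives a 21×21 map), and negative cosine similarities are clipped so the precomputed affinity is valid for spectral clustering.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import calinski_harabasz_score

# Simplified PGM sketch: cosine affinity -> spectral clustering for k in [m, n],
# keep the clustering with the best Calinski-Harabasz score.

c, h, w = 128, 21, 21
feat = np.random.randn(c, h, w)                   # placeholder for F_u

F = feat.reshape(c, h * w).T                      # (hw, c) pixel-wise feature vectors
Fn = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-8)
affinity = Fn @ Fn.T                              # cosine-similarity affinity map A_u
affinity = np.clip(affinity, 0.0, None)           # non-negative affinities required

best_labels, best_score = None, -np.inf
for k in range(3, 6):                             # candidate cluster numbers m..n = 3..5
    labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    score = calinski_harabasz_score(F, labels)
    if score > best_score:
        best_labels, best_score = labels, score

pseudo_label_map = best_labels.reshape(h, w)      # coarse pseudo-label before CRF refinement
```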
§.§ Datasets PASCAL-5^i <cit.> and COCO-20^i <cit.> are two popular datasets for FSS evaluations. Different methods have slight differences in the usage of the datasets, especially in the number of samples for testing, and our experiments follow the settings of PFENet. For comparing fairly with existing FSS methods, all extra data used by IPE are the unlabeled images belonging to the training set of ImageNet-1k <cit.>. Note that almost all existing FSS methods use the supervisedly pre-trained model on ImageNet-1k as initialization for encoders while we explore the feasibility of employing an unsupervisedly pre-trained model as initialization. PASCAL-5^i <cit.> is based on extended PASCAL VOC which is constructed by PASCAL VOC 2012 <cit.> and extra annotations from SBD <cit.>. PASCAL-5^i evenly divides the 20 classes of extended PASCAL VOC into 4 folds (5 classes per fold) to execute cross-validation, namely, when one fold is selected as novel classes for the test, other remaining folds are used as base classes for training. COCO-20^i <cit.> is based on MS COCO <cit.>, which is more challenging than PASCAL VOC. These challenges mainly come from more complex scenarios and more classes. COCO-20^i evenly divides the 80 classes of MS COCO into 4 folds (20 classes per fold) to execute cross-validation, just like PASCAL-5^i. §.§ Evaluation metrics There are two metrics to evaluate FSS methods: Mean intersection over union (mIoU) and foreground-background intersection over union (FB-IoU). In our experiments, we choose both the mIoU and the FB-IoU as our evaluation metrics. The mIoU is the average of IoU over all the classes: mIoU=1/N∑_n=1^NIoU_n, where N is the total number of novel classes and the IoU of each class is calculated as IoU=TP/TP+FP+FN, where TP, FP and FN denote the number of true positive, false positive and false negative pixels of the prediction, respectively. The FB-IoU is the average of the foreground IoU and background IoU: FB-IoU=1/2(IoU_F + IoU_B), where the subscript F represents foreground and the subscript B represents background. §.§ Implementation details The encoders of PFENet <cit.> and BAM <cit.> are both ResNet-50 <cit.>, which is usually supervisedly pre-trained on ImageNet-1K. We also explore the feasibility of utilizing an unsupervisedly pre-trained model provided by DenseCL <cit.>. The batch size of extra episodes is four times the batch size of original inputs, and the image size of extra episodes is 225×225. In PGM, the feature extractor ℋ is the unsupervisedly pre-trained ResNet-50 <cit.>, whose parameters are from DenseCL <cit.>. All unlabeled images are resized to the shape of 672×672 before sending to the feature extractor. The cluster number m and n are 3 and 5, respectively. The pseudo-label number t is 2. In EGM, we serially use , , , , and with probabilities of 1.0, 0.4, 0.8, 0.2, 0.8 and 0.5 as various transformations. §.§ Results and analysis §.§.§ PASCAL-5^i The 1-shot and 5-shot results on PASCAL-5^i <cit.> are shown in Tables <ref> and <ref>, respectively. Under the 1-shot setting, after integrating with our IPE, PFENet achieves improvements up to 3.57% and 2.17% in terms of mIoU and FB-IoU, respectively. Moreover, when incorporating our IPE with a current strong baseline BAM, a promising result can be also obtained with the improvements of 0.8% in terms of both mIoU and FB-IoU. 
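For reference, the metrics reported in these comparisons follow directly from the definitions in the previous subsection; a minimal sketch is given below (function names are ours, and a small constant guards against empty unions).

```python
import numpy as np

# Sketch of the evaluation metrics: per-class IoU from TP/FP/FN, mIoU over the
# novel classes, and FB-IoU as the mean of foreground and background IoU.

def iou(pred, gt, eps=1e-8):
    """pred, gt: binary (H, W) masks for one class."""
    tp = np.logical_and(pred == 1, gt == 1).sum()
    fp = np.logical_and(pred == 1, gt == 0).sum()
    fn = np.logical_and(pred == 0, gt == 1).sum()
    return tp / (tp + fp + fn + eps)

def mean_iou(preds_by_class, gts_by_class):
    """Average the per-class IoU over all N novel classes."""
    return float(np.mean([iou(p, g) for p, g in zip(preds_by_class, gts_by_class)]))

def fb_iou(pred, gt):
    """Mean of foreground IoU and background IoU for a binary prediction."""
    return 0.5 * (iou(pred, gt) + iou(1 - pred, 1 - gt))
```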
Besides, if the encoder of PFENet is initialized with an unsupervisedly pre-trained model, PFENet with IPE also outperforms PFENet with a larger improvement of up to 5.08% in terms of FB-IoU. Interestingly, the unsupervisedly pre-trained PFENet with IPE even outperforms the supervisedly pre-trained PFENet without IPE, achieving improvements up to 1.77% and 1.5% in terms of mIoU and FB-IoU. We believe this is because our method unearths the semantic information hidden in the unlabeled data and brings this relatively complete supervision from abundant data to FSS models. All these results show that our IPE is simple but effective. Under the 5-shot setting of PASCAL-5^i <cit.>, similar conclusions can be obtained that both PFENet and BAM achieve better results when inserting our IPE. The PFENet with IPE reaches significant improvements up to 9.1% and 7.73% in terms of mIoU and FB-IoU, respectively. Moreover, when combining our IPE with a strong baseline BAM, it also achieves further improvements up to 1.53% and 1.28% in terms of mIoU and FB-IoU. §.§.§ COCO-20^i The 1-shot and 5-shot results on COCO-20^i <cit.> are shown in Tables <ref> and <ref>, respectively. Under the 1-shot setting, after combining with our IPE, the mIoU of PFENet exhibits improvements up to 3.33% and 2.34% with the supervisedly and unsupervisedly pre-trained encoder, respectively. The unsupervisedly pre-trained PFENet with our IPE even achieves a comparable result to the supervisedly pre-trained PFENet without IPE. Besides, we conduct the experiments on BAM with our IPE. It can be seen that our method also outperforms BAM. More unlabelled data may further improve the accuracy, which is an interesting direction for further study. Under the 5-shot setting of COCO-20^i, similar conclusion can be obtained that the PFENet with IPE reaches a significant improvement up to 9.23% of mIoU, and the BAM with IPE reaches the improvements up to 1.45% of mIoU. All these results demonstrates the effectiveness of our IPE leveraging unlabelled data for boosting FSS. §.§.§ Result Visualization In Figure <ref>, we show a batch of pseudo-labels generated by PGM. Although the accuracy of some pseudo-labels is not very high, these pseudo-labels are good enough to provide extra information for semantics. Besides, the boundary of these labels is also clear enough with the help of dense-CRF. In general, the pseudo-labels generated by PGM significantly improve the generalization ability of the FSS models. In Figure <ref>, we show a batch of predictions generated by PFENet on PASCAL-5^i under the 1-shot setting with or without IPE. Specifically, in group-0, our IPE helps to achieve more complete segmentation results of “boat” and “aeroplane”. In group-1, the PFENet with IPE well segments the “chair” out correctly; and in group-2, our IPE corrects the original PFENet segmentation error of treating the base class “chair” as the novel class “dining table”. Besides, for group-3, our IPE can well remove the false positive regions on the background for “train” and “tv monitor”. All these results indicate that our IPE can improve the segmentation accuracy with more complete coverage of foreground objects and fewer false positive regions on the background. § LIMITATION Although our approach allows incorporating additional unlabeled data into the training of FSS models to improve their generalization ability and accuracy, it still has its limitations. 
The main limitation of our IPE lies in handling extra unlabeled data from complex scenes. Since images of complex scenes contain many objects of different classes, it is hard for the PGM to generate accurate pseudo-labels with spectral clustering. Fortunately, we can easily collect images with a limited number of objects, as in ImageNet, which can significantly boost the accuracy of few-shot segmentation. In the future, we will explore new metrics to measure the complexity of an image so as to flexibly select appropriate samples, or resort to multi-model self-supervised learning to better understand images of complex scenes. § CONCLUSION We propose a novel model-agnostic method, named Image to Pseudo-Episode (IPE), to boost the performance of FSS methods by leveraging extra unlabeled data. By generating pseudo-labels from unlabeled images with the PGM and generating pseudo-episodes from the pseudo-labeled images with the EGM, our IPE provides a bridge to introduce the extra information from unlabeled data into FSS models, thereby improving the generalization ability of these models. With the help of IPE, both PFENet <cit.> and BAM <cit.> achieve better results on PASCAL-5^i and COCO-20^i in terms of qualitative and quantitative evaluations.
http://arxiv.org/abs/2405.10000v1
20240516113434
Lack of differentiability of semigroups associated to delayed abstract thermoelastic systems
[ "Kaïs Ammari", "Makrem Salhi", "Farhat Shel" ]
math.AP
[ "math.AP" ]
http://arxiv.org/abs/2405.09528v1
20240515173728
Energy-Efficient Sleep Mode Optimization of 5G mmWave Networks Using Deep Contextual MAB
[ "Saad Masrur", "Ismail Guvenc", "David Lopez-Perez" ]
eess.SP
[ "eess.SP", "cs.AI" ]
Energy-Efficient Sleep Mode Optimization of 5G mmWave Networks Using Deep Contextual MAB Saad Masrur^1, İsmail Güvenç^1, David López-Pérez^2 ^1Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC ^2Universitat Politècnica de València, Valencia, Spain {smasrur,iguvenc}@ncsu.edu This research is supported in part by the NSF project CNS 1910153, the Generalitat Valenciana through the CIDEGENT PlaGenT, Grant CIDEXG/2022/17, Project iTENTE, and by the action CNS2023-144333, financed by MCIN/AEI/10.13039/501100011033 and the European Union “NextGenerationEU”/PRTR.” May 20, 2024 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== empty empty Millimeter-wave (mmWave) networks, integral to 5G communication, offer a vast spectrum that addresses the issue of spectrum scarcity and enhances peak rate and capacity. However, their dense deployment, necessary to counteract propagation losses, leads to high power consumption. An effective strategy to reduce this energy consumption in mobile networks is the sleep mode optimization (SMO) of base stations (BSs). In this paper, we propose a novel SMO approach for mmWave BSs in a 3D urban environment. This approach, which incorporates a neural network (NN) based contextual multi-armed bandit (C-MAB) with an epsilon decay algorithm, accommodates the dynamic and diverse traffic of user equipment (UE) by clustering the UEs in their respective tracking areas (TAs). Our strategy includes beamforming, which helps reduce energy consumption from the UE side, while SMO minimizes energy use from the BS perspective. We extended our investigation to include Random, Epsilon Greedy, Upper Confidence Bound (UCB), and Load Based sleep mode (SM) strategies. We compared the performance of our proposed C-MAB based SM algorithm with those of All On and other alternative approaches. Simulation results show that our proposed method outperforms all other SM strategies in terms of the 10^th percentile of user rate and average throughput while demonstrating comparable average throughput to the All On approach. Importantly, it outperforms all approaches in terms of energy efficiency (EE). empty Index Terms— Beam-forming, contextual MAB, mmWave, reinforcement learning, sleep mode optimization. § INTRODUCTION empty The exponential growth in cellular data demand necessitates an increasing amount of spectrum and has spurred rapid expansion of mobile network infrastructure in recent years. Millimeter wave (mmWave) communications have emerged as a promising technology in fifth-generation (5G) cellular networks. Offering substantial bandwidth, mmWave networks present a viable solution to the pressing issue of spectrum scarcity <cit.>, <cit.>. However, mmWave signals are susceptible to blockage and experience considerable attenuation. To mitigate propagation loss, mmWave BSs are densely deployed with inter-site distances in the order of hundreds of meters <cit.>, are equipped with large antenna arrays, and utilize efficient spatial multiplexing. 
The primary source of power consumption in these BSs is the radio frequency (RF) chain. While the deployment of a large number of RF chains within a BS can be mitigated by combining analog precoding with digital precoding <cit.> (i.e., hybrid precoding), the energy consumption remains significant. Analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) in an mmWave BS necessitate a considerably higher sampling rate compared to sub-6 GHz systems due to their operation at higher frequencies with larger bandwidths. Given that the power consumption of an ADC/DAC is proportional to the sampling rate, an RF chain in a mmWave BS consequently consumes a substantial amount of energy <cit.>. These factors result in substantial energy consumption, raising both economic and environmental concerns. Specifically, the Information and Communication Technology (ICT) industry is projected to account for approximately 23% of the global carbon footprint and about 51% of global electricity consumption by 2030 <cit.>. Addressing this issue through energy-efficient wireless communication has been a significant research focus for over a decade. In cellular networks, BSs account for 60% to 80% of total energy consumption <cit.>, and interestingly, BS traffic load is less than a tenth of the peak value for 30% of the time on weekdays <cit.>. This presents an opportunity for energy reduction through dynamic sleep mode optimization (SMO) (i.e., turning on and off BSs). The strategy is to put a BS into sleep mode (SM) when it is serving fewer user equipment (UEs) and shift its load to a nearby BS. However, this can affect the network coverage and decrease the user performance (i.e., throughput, delay). To optimize the balance between energy use and system throughput, we must examine the correlation between these two metrics, and use this information to optimize the BS activation and deactivation. Formulating the optimal BS on/off is, however, a significant challenge. It demands insights such as the number of UEs a specific BS is serving, the resources they are using, how their position will evolve in time, to which cell they will handover, etc. However, such information is typically not available and hard to predict in real-world scenarios, making the task even more difficult. In this paper, we provide a novel learning-aided SMO that aims to conserve energy while maximizing the overall system throughput. Our framework uses reinforcement learning (RL) and takes into account key aspects such as dynamic UE distribution and state-of-the-art beamforming. Importantly, we trained and tested our RL model over near real-world conditions using advanced modeling. To the best of our knowledge, this is the first study that employs a 3D map for the SMO of mmWave networks. Our solution comprises two phases: The first phase involves strategically deploying BSs within a 3D environment. Initially, numerous potential BSs are randomly placed, followed by selecting a subset of them to maximize spatial coverage across the entire area. This subset will be utilized in the subsequent phase to develop an SMO strategy. The key contributions of this paper include the following: * Incorporating a 3D urban environment model for mmWave communications, capturing real-life scenarios. * Utilizing beamforming techniques in conjunction with the SMO approach to enhance mmWave link budget, thereby reducing energy consumption for UE. 
* The Proposed approach differs from existing literature where multi-armed bandits (MABs) algorithms are used, which suffer from scalability and cannot account for crucial contextual information. UEs are clustered into their corresponding tracking areas (TAs), and this information is utilized as context for the proposed algorithm. Our proposed neural network (NN)-based contextual multi-armed bandit (C-MAB) referred to as NN-based C-MAB algorithm effectively manages larger state and action spaces by leveraging NN and contextual information. * Explored the effectiveness of the proposed algorithm by comparing it with other SM strategies: Random, Epsilon Greedy, Upper Confidence Bound (UCB), Load Based <cit.>, and the All On [All On approach, can be considered a special case of SM where no BSs are put into SM.]. The subsequent sections of the paper are structured as follows. Section II provides a comprehensive review of the existing literature. Section III describes the system model and problem formulation. Section IV presents our proposed algorithm for SMO and other SM strategies. Section V details the numerical results and analysis. Finally, conclusions are drawn in Section VI. § LITERATURE REVIEW Despite the extensive research on SMO/on-off scheduling of BSs, a detailed investigation of SMO in mmWave under diverse BS and UE distributions is not available. BS sleep cycles vary in duration: slow cycles last minutes to hours, while fast ones span seconds to minutes <cit.>–oh2011toward<cit.>. Queueing theory has been extensively studied to strike a balance between energy savings and the degradation of Quality of Service (QoS) <cit.>, <cit.>. In <cit.>, the authors introduced energy-efficient sleep SM techniques for cell-free mmWave massive MIMO networks. They simplified the scenario by modeling the UE locations using a log-normal distributed traffic map. However, this approach, while making the problem more tractable, may not accurately reflect real-world conditions. Deep learning (DL) and RL have been effectively applied to enhance wireless communication systems. The study in <cit.> investigates a joint stochastic problem related to the placement of BSs and beam steering, aiming to optimize mmWave network coverage. It assumes that the network is aware of the user's position beforehand, while the user's orientation is subject to stochastic changes. However, this assumption may not mirror real-world scenarios accurately, as a user's location can vary considerably over time. Moreover, the study does not directly address the reduction of energy consumption. In <cit.>, the authors focus on UE power consumption in mmWave M2M communication using beam-aware discontinuous reception (DRX). However, it overlooks BS power usage and assumes periodic beamforming, potentially leading to inefficient resource allocation due to varying data transmission needs. Study <cit.>, used a single-agent multi-armed bandit (SA-MAB) for small cell activation by a macro BS. However, it did not analyze user distribution. Similarly, the MAB approach is employed for spectrum scheduling in <cit.>. Both methodologies, however, face scalability issues as the complexity of the MAB problem increases with the number of arms, making it difficult to identify the optimal solution. SM operation based on deep Q-network (DQN) has been explored in <cit.>, for wireless local-area networks (WLANs). The interrupted Poisson process (IPP) model was employed to simulate bursty traffic patterns. 
However, the study did not include an analysis of the system’s overall throughput as a result of the SM. The IPP model may not suit all cases, especially mmWave scenarios that depend heavily on line-of-sight (LOS). A significant portion of prior research heavily depends on idealistic assumptions about user movement, such as Poisson traffic models and threshold-based scheduling. However, some of this work encounters scalability issues. Given the complexity of real-world traffic, there is a need for a more flexible and model-free approach capable of solving highly complex problems. § SYSTEM MODEL Our study is based on an urban macro (UMa) outdoor-to-outdoor communication scenario. This scenario incorporates various elements such as terrain data, landmarks, and street routes, all of which are illustrated in Fig. <ref>. Note that free 3D geographic data of some real environments can be obtained from publicly available OpenStreetMap (OSM) <cit.>, to represent some real-world environments. Using this data, we have constructed a 3D map with a Digital Elevation Model (DEM). This map is structured as a grid, where each square contains data about the location and height of its central point. The model includes buildings of varying heights and widths selected within the ranges of 8-25 meters and 20-45 meters, respectively. The BSs are positioned on the rooftops, while the UEs are located at ground level. We focused on outdoor communication due to the nature of cellular mmWave networks, therefore, we have defined the service area (SA) as the total area excluding the area covered by the buildings. This approach ensures that our focus remains on the areas that require service. The N_c candidate BS's locations are represented by the set which represents the boundaries of the building facing the SA, 𝒫_ c^BS∈{(x_i^BS, y_i^BS, z_i^BS) |∀ i ∈[N_ c]}. Given the potentially large value of N_c, it becomes impractical to position BSs at every building and their respective boundaries. Moreover, simply placing an excessive number of BSs might yield promising results, but this approach does not reflect the real-life scenario where BSs are strategically positioned. In our study, we first strategically minimized the BS candidate locations in a UMa scenario. We used an iterative algorithm that selects the BS with the maximum visibility. Visibility refers to the extent to which a grid is observable from a given location, which is calculated using voxel viewshed algorithm <cit.>. The iterative algorithm adds the BS that improves visibility the most in each iteration. This process continues until no further visibility improvement is possible, resulting in a reduced set of BSs that optimally covers the whole SA, and the reduced collection of BSs positions is denoted by the set 𝒫_ r^BS∈{(x_i^BS, y_i^BS, z_i^BS) |∀ i ∈[N_ r]} . where N_ r (i.e., N_ r≪ N_ c) represents the total number of BS locations that are capable of covering the entire SA. The SA is split into M grid points where the UEs can be located, and the positions of these points are defined as follows: 𝒫^SA∈{(x_i^SA, y_i^SA, z_i^SA) |∀ i ∈[M]} . The received signal strength of user i from BS j can be expressed as: P^rcv_i,j=P^tx_j+ Υ_i,j+_i,j . where P^tx_j denotes the transmit power of j^th BS, Υ_i,j represents the directivity gain due to beamforming, and _i,j is the path loss between user i and BS j. 
The path loss models for the UMa scenario, considering both LOS and non-line of sight (NLOS) conditions, have been utilized as outlined in the 3GPP TR 38.901 technical report <cit.>: _i, j = {[ 28.0 + 20log_10(f_ o) + 22log_10(d^3D_i,j), for LOS; 32.4 + 20log_10(f_ o) + 30log_10(d^3D_i,j), for NLOS ]. where d^3D_i,j is the 3D distance (in meters) between the BS j and the UE i, and f_ o is the center frequency normalized by 1 GHz. We defined binary variables u_i,j, and s_i,j to represent the user association to BS, and the presence of a user within the SA of other BSs except the serving BS, respectively: u_i, j ≐{[ 1, user i is served by BS j; 0, otherwise ],. s_i, j ≐{[ 1, user i is in SA of BS j and u_i, j≠ 1; 0, otherwise ],. where i =1,2, …, U, j=0,1, …, N, (i.e., U≤ M, N≤ N_ r). U is the total number of users that can be located anywhere within the SA, as defined by the set 𝒫^SA. On the other hand, N denotes the number of BSs whose locations are selected from the reduced set 𝒫_ r^BS (𝒫_ r^BS⊆𝒫_ c^BS). The binary variable s_i, j quantifies interference from non-serving, nearby BSs. The throughput of the user i can be calculated as: T^put_i=B_i^wlog_2(1+∑_k=1^N u_i, k P^rcv_i,k/∑_k=1^N s_i, k P^rcv_i,k +ψ_i)  . where B_i^w denotes the bandwidth allocated to user i, and the noise ψ_i is given by ψ_i=Γν B_i^wξ. In this equation, Γ is Boltzmann’s constant, ν represents the temperature, and, ξ is the noise figure which quantifies the degradation of the signal-to-noise ratio (SNR). The product of Γ, ν, B_i^w represents the thermal noise and ξ represents the impact of the noise figure on the throughput assigned to user i. Energy efficiency (EE) is a crucial performance metric that quantifies the amount of energy consumed per received information bit. It can be mathematically represented as: =∑_i=1^UT^put_i/∑_j=1^N P_^j . In a mmWave network, the power consumed by the j^th BS, denoted as P_^j, is a combination of the power consumed by the Base Band Unit (BBU) and the Active Antenna Unit (AAU). This is described in <cit.> as follows: P_^j = (P_BBU + P_AAU )/(1 - ϱ_cooling )(1 - ϱ_DC ). Here, ϱ_cooling and ϱ_DC represent the power consumption of the cooling module and DC conversion loss, respectively. The power consumed by the BBU and AAU are denoted by P_BBU and P_AAU, respectively. The values for the power consumption of different modules are taken from <cit.>. §.§ Problem Formulation The primary goal of this research is to transition the BS into SM in a manner that minimally impacts the system’s overall throughput. By doing so, we can conserve energy from both the BS and UE perspectives. This is because putting the BS into SM helps to narrow down the beam search space, thereby reducing the power consumption of the UE. The operational status of the BS is represented by a binary variable _j∈{0,1}, where j=1,2,…,N. A value of 1 indicates that the BS is on, while 0 signifies that it is off. The objective function can be formulated as: max  ∑_i=1^UT^put_i subject to  ∑_i=1^U u_i,j=1 , N-∑_j=1^N_j= ⌊α_off N ⌋  . The optimization problem in (<ref>) is complex due to unknown user distribution and varying BS load. Constraint (<ref>) ensures only one BS serves a UE, while (<ref>) forces ⌊α_off N ⌋ BSs to be deactivated. α_off is the SM activation parameter, which represents the percentage of BS to be turned off, where 0 ≤α_off≤1. § SLEEP MODE STRATEGIES In this section, we elaborate on our proposed NN-based C-MAB SMO algorithm. 
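The link-budget and rate model above can be illustrated with a short sketch. This is a hedged, simplified example rather than the simulation code: the path-loss term is subtracted in dB following the usual convention, the serving link is taken as LOS and the interferers as NLOS purely for illustration, and the carrier frequency, noise figure, beamforming gain, and bandwidth values are placeholder assumptions.

```python
import numpy as np

# Sketch of the 3GPP TR 38.901 UMa path loss, the SINR-based user rate, and the
# EE ratio (illustrative assumptions throughout).

def uma_path_loss_db(d3d_m, f_ghz, los=True):
    if los:
        return 28.0 + 20.0 * np.log10(f_ghz) + 22.0 * np.log10(d3d_m)
    return 32.4 + 20.0 * np.log10(f_ghz) + 30.0 * np.log10(d3d_m)

def dbm_to_w(p_dbm):
    return 10.0 ** ((p_dbm - 30.0) / 10.0)

k_B, T = 1.380649e-23, 290.0            # Boltzmann constant [J/K], temperature [K]

def user_throughput(ptx_dbm, gain_db, d_serv, d_intf, bw_hz, f_ghz=28.0, nf_db=7.0):
    """Rate of one user: serving BS at d_serv [m] (LOS assumed), interferers at
    distances d_intf [m] (NLOS assumed); f_ghz and nf_db are placeholder values."""
    s = dbm_to_w(ptx_dbm + gain_db - uma_path_loss_db(d_serv, f_ghz, los=True))
    i = sum(dbm_to_w(ptx_dbm - uma_path_loss_db(d, f_ghz, los=False)) for d in d_intf)
    noise = k_B * T * bw_hz * 10.0 ** (nf_db / 10.0)   # thermal noise x noise figure
    return bw_hz * np.log2(1.0 + s / (i + noise))

def energy_efficiency(rates_bps, bs_powers_w):
    """Total delivered rate divided by total BS power consumption."""
    return sum(rates_bps) / sum(bs_powers_w)
```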
Additionally, we defined alternative SM strategies such as Load Based, Random, Epsilon Greedy, UCB, and All On, with which we will compare the performance using computer simulations. §.§ NN-Based C-MAB Formulation for SMO The optimization problem (<ref>) discussed earlier is non-convex and falls into the category of NP-hard problems, which are particularly challenging to solve. Traditional methods, while useful, often make idealized assumptions that can lead to a loss of precision. However, the introduction of deep RL has provided a more robust approach, capable of handling more realistic assumptions and managing the nonlinear mapping from the state/context space to the action space. To address the complexities and inherent difficulties of this non-convex problem, we propose the use of a NN-based C-MAB approach. To formulate the SMO for mmWave as an RL task, we represent the system as a combination of five components, as shown in Fig. <ref>: 1) an environment emulator, which encapsulates the behavior of the BS and UEs; 2) a context generator, which helps the agent in taking action based on the environment; 3) an RL agent, which reacts to the context and takes an action; 4) a reward function, which emits a scalar reward based on the action taken by the agent; and 5) a replay buffer, which serves as the data plane to update the belief of the RL agent. In each iteration t, the context generator block produces a context 𝐜_t, for the RL agent, which takes an action 𝐚_t^C-MAB. The reward function emits a reward r_t, based on the action taken, which is then stored in the replay buffer with the context. A batch 𝐁, from the buffer is used to update the RL agent's weights. This process optimizes the SM for mmWave systems, ensuring network efficiency. C-MAB <cit.>, enables the agent to choose future actions based on contextual information, actions (i.e., arms), and rewards learned from past observations. An NN is incorporated into the C-MAB framework to effectively manage the infinite state space and enhance decision-making based on diverse contexts, while the exploration dilemma is handled using an epsilon decay over time, and NN is responsible for exploiting the best action. Unlike the MAB algorithm, which selects actions independently of the environment's state, the C-MAB algorithm tailors its decisions to the observed context, thereby enabling a more personalized approach to each situation. Upon observing the context, the C-MAB algorithm selects an action that yields a reward. The overarching objective of this algorithm is to maximize the cumulative reward, thus optimizing the decision-making process. In the context of the MAB algorithm, each round allows the agent to perceive the reward of the chosen arm. The agent's objective is to reduce the regret, defined as the discrepancy between the chosen arm ρ_ c(t) and the optimal arm ρ^*(t) that could have been selected. The cumulative regret over the entire time horizon is given by: =𝔼[∑_t=1^T ρ^*(t)-∑_t=1^T ρ_ c(t)]  . where expectation accounts for the algorithm's randomness and the disclosed rewards. In the MAB problem, the goal of the proposed algorithms is to balance exploration and exploitation. Exploration assesses potential arms, while exploitation maximizes reward from the best arm. §.§.§ NN-based C-MAB In the SMO framework, we employ an NN-based C-MAB combined with an epsilon decay strategy to balance exploration and exploitation. 
In the early stages of learning, the model faces a high degree of uncertainty due to limited data, necessitating broad exploration of the environment. As the model accrues more data, it gains a deeper understanding of the environment, reducing uncertainty and shifting the focus towards exploiting the optimal action to maximize cumulative reward. Consider a tuple (𝒞, 𝒜, R) representing the core elements of an RL process. Here, 𝒞 is the context or state space with environmental data, is the action space with A_total possible actions, and R is the reward function. An RL agent interacts with an environment to model P(r|a,c), observing a context 𝐜_t ∈𝒞 at each time step t. The agent selects an action 𝐚_t ∈ and receives a reward r_t=R(𝐚_t^C-MAB,𝐜_t) based on the chosen action in the observed context. However, the agent cannot observe the reward from unchosen actions. The agent's objective is to maximize the cumulative reward Λ_t=∑_t=1^Tκ^t-1r(𝐚_t^C-MAB,𝐜_t), where κ is the discount factor. The design of these essential elements significantly impacts the algorithm’s convergence. * Action space : To formulate the SMO using NN-based C-MAB, we defined the action space as follows: = {_1∈{0,1}, …, _N∈{0,1}| ∑_j=1^N _j = ⌊α_off N ⌋} . In time slot t, action 𝐚_t^C-MAB ∈ switches the BS station between sleep and active modes, and the action vector 𝐚_t^C-MAB is defined as: 0.9!𝐚_t^C-MAB=[_1∈{0,1}, _2∈{0,1},⋯, _N∈{0,1}] . The total number of actions is represented by: A_total = N⌊α_off N ⌋. * Context/State space 𝒞: The context at each time step t is generated using K-means clustering, which groups UE based on their TA. The exact location of the UE is unknown to the BS due to privacy concerns, and it is computationally expensive to determine the exact location of the BS to which the UE is connected. However, the system is aware of the TA in which the UE is located, and it updates whenever the UE moves to a new TA. The UE is clustered into K clusters based on their TA. The coordinates of the clusters and the density of each cluster from the context can be defined as follows: 0.9!𝐜_t=[(dx_1,dy_1), (dx_2,dy_2),⋯, (dx_K,dy_K), μ_1, μ_2,⋯, μ_K], where, dx_i, and dy_i represent the center x and y coordinates of the clusters i ∈(1, ⋯, K), respectively, and μ_i represents the proportion of UEs in cluster i to the total number of UEs in the entire area. * Reward: We choose the reward at a given time step t, represented as r_t(𝐚_t^C-MAB,𝐜_t), to be the 10^th percentile of the total user throughput which promotes fairness by improving experiences for users with the lowest throughput: r_t(𝐚_𝐭^𝐂-𝐌𝐀𝐁*,𝐜_t) = Percentile_10(∑_i=1^UT^put_i)  . where 𝐚_𝐭^𝐂-𝐌𝐀𝐁* is the action taken at time t out of A_total (i.e., A_total= ||) possible actions. This reward function is robust against outliers and motivates overall network performance enhancement. A neural network with L hidden layers is employed to model P(r_t|𝐚_𝐭^C-MAB,𝐜_𝐭, Θ), where Θ denotes the model's weights. The network is trained via an Adam optimizer, incorporating L2 regularization (λ). At each time step t, the context is input to the network 𝐜_t, where t ∈1,2,⋯,T. The action yielding the highest expected reward is selected. Updating the model after every iteration can be computationally expensive, resulting in noisy updates, overfitting, and an inability to handle concept drifts. Therefore, the model is updated every τ_update iterations using a randomly chosen batch of size |B| from the replay buffer. 
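A minimal sketch of this training loop is given below. The environment and context generators are placeholders, and the learning rate, training horizon, and the mean-squared-error regression of observed rewards are illustrative assumptions; the hidden-layer sizes, Adam with L2 regularization, the update period τ_update, and the batch size follow the setup described in this paper. Note that the action space enumerates all on/off patterns with exactly ⌊α_off N⌋ sleeping BSs, so A_total = C(N, ⌊α_off N⌋).

```python
import itertools, math, random
from collections import deque
import torch
import torch.nn as nn

N_BS, ALPHA_OFF, K_CLUSTERS = 15, 0.3, 10
N_OFF = math.floor(ALPHA_OFF * N_BS)
# Action space: on/off patterns with exactly ⌊α_off·N⌋ BSs asleep, A_total = C(N, ⌊α_off·N⌋).
ACTIONS = [a for a in itertools.product([0, 1], repeat=N_BS) if sum(a) == N_BS - N_OFF]
CTX_DIM = 3 * K_CLUSTERS            # (dx_k, dy_k, mu_k) for each of the K clusters

# Reward model P(r | a, c; Θ): context in, one expected reward per action out.
model = nn.Sequential(nn.Linear(CTX_DIM, 128), nn.ReLU(),
                      nn.Linear(128, 64), nn.ReLU(),
                      nn.Linear(64, len(ACTIONS)))
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)  # lr is assumed
buffer, eps, EPS_DECAY, TAU_UPDATE, BATCH = deque(maxlen=10_000), 0.7, 0.9, 8, 256

def env_step(action):
    """Placeholder for the simulator: returns the 10th-percentile user rate."""
    return random.random()

def get_context():
    """Placeholder for the K-means-on-TA context generator."""
    return torch.rand(CTX_DIM)

for t in range(1, 1001):
    ctx = get_context()
    if random.random() < eps:                      # explore
        a_idx = random.randrange(len(ACTIONS))
    else:                                          # exploit the NN's reward estimate
        with torch.no_grad():
            a_idx = int(model(ctx).argmax())
    reward = env_step(ACTIONS[a_idx])
    buffer.append((ctx, a_idx, reward))
    eps *= EPS_DECAY                               # epsilon decay at every iteration

    if t % TAU_UPDATE == 0 and len(buffer) >= BATCH:
        batch = random.sample(list(buffer), BATCH)
        c = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        pred = model(c).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(pred, r)     # regress the observed rewards (assumed loss)
        opt.zero_grad(); loss.backward(); opt.step()
```

In a full implementation, the placeholder env_step and get_context would be replaced by the simulator and the K-means-on-TA context generator described above.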
At each iteration, a random action is selected with a probability of ϵ^C-MAB, while the action chosen by the NN is selected with a probability of (1-ϵ^C-MAB). The value of ϵ^C-MAB decreases with each iteration according to the formula ϵ^C-MAB = ϵ^C-MAB * ϵ_th^C-MAB, where 0 < ϵ_th^C-MAB<1 to control the rate of decay. A high value (close to 1) means slow decay and more exploration, while a low-value means fast decay and quicker transition to exploitation. This strategy ensures that the model explores adequately in the early stages and gradually transitions towards exploitation as its knowledge base expands. §.§ Load Based SM Strategy <cit.> In the Load-based algorithm, the BS with the lowest load is designated to enter SM. The load factor for each UE i is defined as follows: L_i^UE = {[ 1/∑_j=1^N u_i, j+s_i, j, if u_i, j or s_i, j > 0; 0, otherwise ]. , where 1/∑_j=1^N u_i, j+s_i, j denotes the count of BSs from which a UE can receive service. Based on this, the load value of j^th BS can be defined as the sum of load factors of UEs associated with the j^th BS, as follows: L_j^BS=∑_i=1^U u_i,jL_i^UE . Under this algorithm, a total of ⌊α_off N ⌋ BS will be turned off. Notably, these are the BS characterized by the minimum load values L_j^BS. §.§ Random SM Strategy In the Random SM Strategy, a total of ⌊α_off N ⌋ BS are independently selected for SM. Employing this random approach facilitates a fair comparison with other strategies, as it serves as a baseline for evaluating the effectiveness of more sophisticated algorithms. §.§ UCB SM strategy The UCB bandit algorithm <cit.>, a state-of-the-art MAB algorithm, addresses the challenges posed by non-stationary environments. It emphasizes the importance of exploring various actions while also exploiting the most promising ones to maximize total rewards. This strategy embodies the principle of `optimism in the face of uncertainty', striking a balance between exploration and exploitation for optimal performance. The action space is similar to that of our NN-based C-MAB, defined in (<ref>). Mathematically, the UCB algorithm selects an action 𝐚_𝐭 at iteration t with the highest upper confidence bound, given by: 𝐚_𝐭^𝐔𝐂𝐁 =k∈{1,…,A_total}max( ω_k + δ√(2 ln(t)/n_k)) . where, ω_k and n_k represent the average reward obtained from action k ∈𝒜 and the number of times action k has been selected, respectively. §.§ Epsilon Greedy SM Strategy The Epsilon Greedy SM approach balances exploration and exploitation using a parameter, epsilon ϵ^greedy, to regulate the choice between random selection and maximizing expected rewards. It randomly selects actions with probability ϵ^greedy and prioritizes actions with higher average rewards with probability 1-ϵ^greedy. §.§ All On Strategy In the All On Strategy, as the name suggests, no BSs are turned into SM. This strategy serves as a crucial benchmark for evaluating the effectiveness of other SM strategies. It provides a baseline comparison to assess the impact of SM strategies on network performance. § NUMERICAL RESULTS AND ANALYSIS In this section, we evaluate the efficiency of the proposed SMO using a 3D model that simulates a UMa outdoor-to-outdoor communication scenario, as depicted in Fig. <ref>. The 3D model, focusing on outdoor users, spans an area of 129  m× 206  m× 45  m (x, y, z, respectively), with the UE height set at 1.5 m above the ground level. The entire area is divided into a grid with a resolution of 1  m× 1  m. 
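For reference, the UCB and Load Based rules above admit compact implementations. The sketch below is illustrative only; the per-arm statistics ω_k, n_k and the binary matrices u_{i,j}, s_{i,j} are assumed to be maintained by the surrounding simulation.

```python
import math
import numpy as np

def ucb_select(omega, n_sel, t, delta=4.0):
    """UCB arm selection: argmax_k  ω_k + δ·sqrt(2·ln(t)/n_k)."""
    omega, n_sel = np.asarray(omega, float), np.asarray(n_sel, float)
    bonus = delta * np.sqrt(2.0 * math.log(t) / np.maximum(n_sel, 1e-9))
    bonus[n_sel == 0] = np.inf            # try every arm at least once (assumed convention)
    return int(np.argmax(omega + bonus))

def load_based_sleep(u, s, n_off):
    """Pick the n_off BSs with the smallest load L_j^BS to put to sleep.
    u, s: (U x N) binary matrices u_{i,j}, s_{i,j} as defined above."""
    u, s = np.asarray(u, float), np.asarray(s, float)
    covering = (u + s).sum(axis=1)                    # number of BSs that can serve UE i
    l_ue = np.where(covering > 0, 1.0 / np.maximum(covering, 1), 0.0)
    l_bs = (u * l_ue[:, None]).sum(axis=0)            # L_j^BS = Σ_i u_{i,j} · L_i^UE
    return np.argsort(l_bs)[:n_off]                   # lowest-load BSs go to sleep

# Tiny example: 4 UEs, 3 BSs, one BS to switch off.
u = [[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
s = [[0, 1, 0], [0, 0, 0], [1, 0, 0], [0, 1, 0]]
print("sleep BS:", load_based_sleep(u, s, n_off=1))
print("UCB pick:", ucb_select(omega=[0.4, 0.7, 0.5], n_sel=[3, 5, 2], t=10))
```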
The BSs are positioned at the edges of buildings, excluding those near the boundary of the area, resulting in a total of N_ c=143 BS candidate locations, denoted as 𝒫_ c^BS (<ref>). As explained in Section <ref>, we reduce the candidate BS locations to N_ r=31, with locations given by 𝒫_ r^BS (<ref>). These BS can provide coverage to the entire SA 𝒫^SA (<ref>), which consists of M=11504 grid points where a UE could potentially be located. In this study, we consider a system operating at a carrier frequency of f_ o=28 GHz and a transmit power of P_tx=20 dBm. The system has a total bandwidth of 50 MHz and utilizes a round-robin allocation scheme. We use Boltzmann's constant Γ, valued at 1.38× 10^-23, and assume a temperature ν of 298 Kelvin. The noise figure ξ in our system is 9 dB. An NN architecture is utilized, consisting of two hidden layers (L=2), with 128 and 64 neurons respectively. The Rectified Linear Unit (ReLU) activation function is applied in the hidden layers, while the output layer uses a linear activation function. L2 regularization is incorporated to prevent overfitting, with a parameter of λ=1 × 10^-4. For the epsilon decay algorithm, ϵ^C-MAB=0.7 is used for the exploration-exploitation trade-off, and the decay rate ϵ_th^C-MAB is set to 0.9. For the Epsilon Greedy algorithm, we set ϵ^greedy to 0.4, and for the UCB algorithm, δ is set to 4. To ensure a fair comparison, the reward used by both the Epsilon Greedy and UCB algorithms is similar to that of NN-based C-MAB (<ref>). The model weights Θ are updated after every τ_update=8 iterations, and the batch size is set to |𝐁|=256. In Fig. <ref>, Fig. <ref>, and Fig. <ref>, we conduct experiments where we randomly select N=15 BSs from 𝒫_ c^BS and place U=70 UEs randomly in 𝒫^SA at each iteration t. We assume K=10 TA (equivalent to 10 clusters), K-means clustering is used to cluster UE is their TA. SM activation parameter of α_off=0.3, indicating that 30% of the BSs will be put into SM. §.§ Performance of NN-Based C-MAB Fig. <ref> presents a comparison of our proposed NN-based C-MAB algorithm with SM strategies in terms of average cumulative throughput. Our proposed algorithm outperforms all other SM strategies, achieving an average (over the number of users) cumulative throughput of 46.3874 Mbps, compared to roughly 43 Mbps for the other approaches. The proposed method rate is close to the All On approach (where all BSs are On), which achieves a data rate of 51.1102 Mbps. The Fig. <ref> illustrates the cumulative 10^ th percentile (cumulative reward) user rate for all methods. Our proposed NN-based C-MAB outperforms all other approaches, even the All On approach. This superior performance is attributed to effective SMO strategy and interference management. In a dense network, high interference from nearby BSs can degrade signal quality, particularly for UEs at the cell edge or in high interference zones, resulting in lower overall throughput as shown in Fig. <ref>. However, turning off a BS alters the interference landscape, leading to reduced interference. Consequently, the rate for the worst 10% of UEs improves, yielding a higher 10^ th percentile rate. This illustrates a trade-off between total throughput and quality of service, as enhancing the rate for the worst-performing UEs can boost the overall service quality. The random SM strategy performed the worst, as anticipated, across both metrics. Meanwhile, the Epsilon Greedy, Load Based, and UCB approaches showed similar performance. 
Although the Greedy approach can often perform well, its performance isn't always guaranteed. On the other hand, the UCB algorithm in this case might select suboptimal actions due to its ineffective ability to balance between exploration and exploitation as compared to NN-based C-MAB. The Load Based method is not working well in terms of both throughput and 10^ th percentile. The instantaneous load of a BS in real networks can fluctuate due to blocked calls, association policies, traffic patterns, and transmission rates, making it an imperfect representation of the exact load distribution. §.§ Normalized EE Analysis In addition to the overall system throughput and the 10^th percentile of user rates, we also evaluate the EE of the system. Fig. <ref> shows the normalized energy efficiency (NEE) with a moving average calculated over 200 iterations to temper the effect of short-term fluctuations. The proposed NN-based C-MAB approach demonstrates superior NEE compared to the other methods. EE is also a function of throughput. The other SM approaches achieved lower throughput compared to the proposed approach, resulting in reduced EE. This indicates that the NN-based C-MAB approach excels in resource utilization, achieving higher throughput with less energy. §.§ Rate versus Number of Users Next, we examine a scenario with a fixed number of BSs, denoted as N=15, while varying the number of UEs, denoted as U, from 30 to 100. The average user throughput for this configuration is presented in Fig. <ref>, where it is evident that our proposed approach consistently outperforms the other SM strategies, namely Load Based, UCB, Epsilon Greedy, and Random. However, as the number of UEs increases, there is a noticeable decrease in the average user throughput. This is attributed to the fact that the number of BSs remains constant, resulting in the same resources being shared among an increasing number of UEs. Consequently, the average throughput decreases. §.§ Effect of Increased Action Space on Performance We fixed the number of UEs (U=70), and BSs (N=15), and varied the α_off value. With α_off rising from 0.15 to 0.35, signifying more BSs being put to sleep, the action space expands from 105 to 3003 actions. As the action space increases with the provided contextual information, the NN-based C-MAB approach consistently outperforms other SM strategies (Fig. <ref>). Despite the declining average user rate across all SM strategies except All On, attributed to the reduced resource at each step as more BSs are put to SM, the NN-based C-MAB maintains its superiority. Notably, it demonstrates adaptability by leveraging contextual information, a feature absent in other approaches. All On approach keeps all BSs active, resulting in consistent performance. UCB surpasses Epsilon Greedy in larger action spaces, showcasing its adeptness in handling the exploration-exploitation trade-off. In contrast, Epsilon Greedy performance diminishes due to its simplistic focus on exploitation. Conversely, Random and Load Based strategies show inferior performance with increasing action space. § CONCLUSION In this paper, we study the SMO of mmWave BSs in a UMa 3D propagation environment. Our goal was to achieve high system-wide throughput while reducing the overall network energy consumption. The UEs were randomly placed in the SA at each iteration, simulating the dynamic distribution of users in real-world environments. We also considered interference from other BSs that serve the UEs using beamforming. 
Due to the dynamic distribution of users, which leads to an infinite state space, traditional approaches to solving the SMO problem proved challenging. Therefore, we addressed the optimization problem using an NN-based C-MAB algorithm, an RL framework. The UEs were clustered into their respective TAs, which served as the context. An NN was then incorporated to map this context to the action, with the additional aid of the epsilon decay strategy facilitating exploration of the environment. To assess the efficiency of the proposed approach, we extensively compared it with various other SM strategies. Numerical results demonstrated the effectiveness of our proposed approach in terms of normalized EE, average throughput, and the 10^th percentile of the user rate. Furthermore, the numerical results underscore the effectiveness of the NN-based C-MAB approach in handling larger action spaces. This work contributes to ongoing efforts to enhance the sustainability of 5G networks, offering a promising solution for managing real-world mmWave networks. Our future work will analyze the impact of other factors on the performance of the proposed C-MAB SMO approach, such as the availability of reflectors and blockages within the environment.
http://arxiv.org/abs/2405.10307v1
20240516175816
On the lapse contour
[ "Batoul Banihashemi", "Ted Jacobson" ]
hep-th
[ "hep-th", "gr-qc" ]
http://arxiv.org/abs/2405.09056v1
20240515030742
CTS: A Consistency-Based Medical Image Segmentation Model
[ "Kejia Zhang", "Lan Zhang", "Haiwei Pan", "Baolong Yu" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Kejia Zhang et al. Harbin Engineering University The Second Affiliated Hospital of Mudanjiang Medical University zhanglan2015@hrbeu.edu.cn CTS: A Consistency-Based Medical Image Segmentation Model Kejia Zhang1 Lan Zhang* 1 Haiwei Pan 1 Baolong Yu 2 May 20, 2024 ========================================================= In medical image segmentation tasks, diffusion models have shown significant potential. However, mainstream diffusion models suffer from drawbacks such as multiple sampling times and slow prediction results. Recently, consistency models, as a standalone generative network, have resolved this issue. Compared to diffusion models, consistency models can reduce the sampling times to once, not only achieving similar generative effects but also significantly speeding up training and prediction. However, they are not suitable for image segmentation tasks, and their application in the medical imaging field has not yet been explored. Therefore, this paper applies the consistency model to medical image segmentation tasks, designing multi-scale feature signal supervision modes and loss function guidance to achieve model convergence. Experiments have verified that the CTS model can obtain better medical image segmentation results with a single sampling during the test phase. § INTRODUCTION The field of medical image segmentation has always been a hot research direction within the image segmentation domain. Unlike traditional segmentation methods<cit.>, utilizing generative models for image segmentation<cit.> can also achieve good results. Since diffusion models<cit.> are a type of generative model that samples from Gaussian noise, the images they generate possess strong noise resistance and smoothness. Consequently, an increasing number of studies are leveraging diffusion models to tackle the non-generative issues of different images. Researchers use masks as the target for generative model sampling, while also incorporating constraints in the generative models to guide the direction of model generation. However, due to the need for extensive resampling during training and prediction, the issue of low computational efficiency in diffusion models urgently needs to be addressed. Consistency model<cit.> transform multiple samplings into a single sampling by constructing a unique solution by ODE, significantly reducing the time consumed during the sampling process. Moreover, while reducing the number of samplings, consistency models also ensure the effectiveness of the samples. Compared to DDPMs<cit.>, consistency model represent a superior generative paradigm, yet studies applying this model to the field of medical image segmentation are currently lacking. Therefore, this paper proposes constructing a medical image segmentation model based on the consistency model, and designing a loss function according to the segmentation loss and consistency training loss, enabling end-to-end training of the model. CTS code can be obtained in https://github.com/LanHEU/CTShttps://github.com/LanHEU/CTS . The specific contributions of this text are as follows: * A medical image segmentation model based on a consistency model has been constructed, featuring a newly designed joint loss function. * During the decoding phase, multi-scale feature supervision signals are utilized to guide the model's convergence direction. § RELATED WORKS In this section, we briefly describe the existing lines of research relevant to our work. 
Diffusion models have been applied to many fields, such as sequence modeling <cit.>, speech processing<cit.>, computer vision<cit.> to computed tomography (CT) scanning and magnetic resonance imaging (MRI). In computer vision, to reduce the number of sampling times, many methods have made great efforts. There are also some sampling algorithms tailored for conditional generation, such as without classifier guidance<cit.> or with classifier guidance<cit.>. Image segmentation is an important task in computer vision, which studies simplifying the complexity of an image by decomposing it into multiple meaningful image segments<cit.>. Due to the time, cost, and expertise required<cit.>, the number of images and labels for medical image segmentation is limited. For this reason, diffusion models, by synthesizing labeled data and eliminating the need for pixel-level labeled data, have become a promising method in image segmentation research. BrainSPADE<cit.> proposed a generative model for synthesizing labeled brain MRI images, which can be used to train segmentation models. However, diffusion models in medical image segmentation face issues such as a high number of sampling times and long prediction times. § METHOD This paper aims to fully leverage the advantages of sampling once with a consistency model, while retaining the benefits of the segmentation model. In consistency model<cit.>, the method of directly training a consistency model is referred to as consistency training loss, which is the origin of the 'CT' in the name of 'CTS'. The specific process is shown in Fig<ref>. Similar to the consistency model, the basic framework of this paper includes two parts: model M and target model TM. The model's sampling begins with the mask x^m of each image, inputting the corresponding data x^das a supervisory signal. Initialize the parameters of the two models, and copy the parameters from M to TM. Step 1. The input to the model is the mask x^m, and noise z_n, sampled from a Gaussian distribution at step n, is added to the tensor: x^m_n = x^m + z_n. Step 2. Simultaneously, based on the time period t, obtain: Model average learning moving parameters: c_in,c_out,c_skip Step 3. The final input to model M is: x^m_in = c_in * x^m_n Step 4. Generate multi-scale signals using ⋃ x_i^d,ŷ=h^T(x^d), where i∈(1,2,...) Step 5. Here, the feature signal ⋃ x_i^d is incorporated into the UNet model: y_out^m= g^T(x_in^m,⋃ x_i^d,t) Step 6. The output of the final model M is: y_n^m=c_out∗ y_out^m+ c_skip∗ x_n^m Step 7. Utilize the normal sampling method to obtain the noisez_n+1 of (n+1)th sampling from the Gaussian distribution. x_n+1^m=f_𝒰(x^m,x_n^m,n,n+1) Step 8: Use x_n+1^m as the input for model TM, and repeat the above Step 2-5. And the output is y_n+1^m=c_out g^TM(x_in^m,x^d,t) + c_skip∗ x_n+1^m The consistency training segmentation loss is as follows: ℒ_CT=yn+1m-ynm2 To expedite the convergence speed and training outcomes, training is conducted on structures generated by multi-scale signals: ℒ_S=y-xm2 Overall Loss Function: ℒ_CTS=ℒ_CT+αℒ_S, where α is hyperparameter. Step 9: Update the TM model parameters; the TM model update adheres to the learning rate: θ^TM stopgrad(μ(k)θ^TM+(1-μ(k))θ^M). The pseudocode of the CTS algorithm is shown in Alg<ref>. Multi-scale Feature Supervision Signal. The process of integrating multi-scale feature supervision signals ⋃ x_i^d is shown in Fig<ref>(a). 
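The training step described in Steps 1–8, together with the target-model update in Step 9, can be sketched as follows. The networks are tiny placeholders, the boundary-condition scalings c_in, c_out, c_skip are assumed to follow the standard consistency-model parameterization (not spelled out in the paper), a fixed EMA rate μ and loss weight α are assumed, and the sampler f_𝒰 is abstracted by reusing the same Gaussian draw at two adjacent noise levels, as in standard consistency training.

```python
import copy
import torch
import torch.nn as nn

# Placeholder networks: g is the denoising UNet (model M / target model TM) and
# h is the image-data encoder producing multi-scale features plus an auxiliary mask.
class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(2, 1, 3, padding=1)       # (noisy mask, fused feature) -> mask
    def forward(self, x_in, feat, t):                  # t is unused in this toy stand-in
        return self.net(torch.cat([x_in, feat], dim=1))

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(1, 1, 3, padding=1)
    def forward(self, x_data):
        feat = self.net(x_data)                        # stand-in for ⋃ x_i^d
        return feat, torch.sigmoid(feat)               # (features, auxiliary prediction ŷ)

def c_scalings(t, sigma_data=0.5, eps=0.002):
    """Assumed standard consistency-model boundary-condition scalings."""
    c_skip = sigma_data**2 / ((t - eps) ** 2 + sigma_data**2)
    c_out = sigma_data * (t - eps) / (sigma_data**2 + t**2) ** 0.5
    c_in = 1.0 / (sigma_data**2 + t**2) ** 0.5
    return c_in, c_out, c_skip

M, H = TinyUNet(), TinyEncoder()
TM = copy.deepcopy(M)                                  # target model TM, EMA copy of M
opt = torch.optim.AdamW(list(M.parameters()) + list(H.parameters()), lr=1e-4)
alpha, mu = 1.0, 0.999                                 # loss weight α and EMA rate μ (assumed)

x_m = torch.rand(8, 1, 64, 64)                         # ground-truth masks x^m
x_d = torch.rand(8, 1, 64, 64)                         # corresponding image data x^d
t_n, t_np1 = torch.tensor(1.0), torch.tensor(1.5)      # two adjacent noise levels
z = torch.randn_like(x_m)                              # shared Gaussian draw (Steps 1 and 7)

def consistency_output(net, x_noisy, feat, t):
    c_in, c_out, c_skip = c_scalings(t)                # Steps 2, 3, 6
    return c_out * net(c_in * x_noisy, feat, t) + c_skip * x_noisy

feat, y_hat = H(x_d)                                   # Step 4: multi-scale supervision signal
y_n = consistency_output(M, x_m + t_n * z, feat, t_n)              # model M at step n
with torch.no_grad():
    y_np1 = consistency_output(TM, x_m + t_np1 * z, feat, t_np1)   # Step 8: TM at step n+1

loss = nn.functional.mse_loss(y_np1, y_n) + alpha * nn.functional.mse_loss(y_hat, x_m)
opt.zero_grad(); loss.backward(); opt.step()           # L_CTS = L_CT + α·L_S
with torch.no_grad():                                  # Step 9: stop-grad EMA update of TM
    for p_t, p_m in zip(TM.parameters(), M.parameters()):
        p_t.mul_(mu).add_((1 - mu) * p_m)
```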
The decoder stage of the image data encoding network progressively generates feature maps of each size, and combines them with the corresponding supervision signals. In the decoder stage of the image data encoding network, a corresponding supervision signal x_i^d is gradually generated for each size feature map. The supervision signal x_i^d, as shown in Fig<ref>(b). This process is integrated into the M model through a channel attention mechanism, achieving the addition of multi-scale supervision signals. During the decoder stage of the image data encoding network, these feature maps contain information of various scales, which can assist the model in better understanding the details and contextual information of the image. To better integrate the supervision signals and feature maps, a channel attention mechanism is employed. This automatically learns the importance weights of each channel, thereby making better use of the information from the supervision signals. § EXPERIMENT This section demonstrates through experimentation the advantages of CTS in medical image segmentation. We started with a thorough comparison of existing alternatives, followed by additional analysis to dissect the reasons behind CTS's success. Datasets. We conducted experiments on medical tasks in two different image modalities: MRI image segmentation of brain tumors, and ultrasound image segmentation of thyroid nodules and liver tumor segmentation on the BraTs-2021 dataset<cit.> as well as in SEHPI datasets<cit.>. This paper utilized anisotropic diffusion filtering<cit.>, while also removing Poisson noise from medical images, preserving more edge information and effective feature structures. Consequently, this further improved the model's performance. Experiment Details. We utilized a 4× UNet. In the testing phase, we employed a single diffusion step for inference, which is significantly smaller than most previous studies. All experiments were implemented using the PyTorch platform and executed on one GTX 4090. All images were uniformly resized to 256×256 pixels. Training was conducted in an end-to-end manner using the AdamW optimizer. batchsize=8. The learning rate was initially set to 1 × 10^-4. Main result. We compared the SOTA segmentation methods proposed for each task with general medical image segmentation methods. The main results are presented in Table 1. Part of the relevant results originates from the work<cit.>. In our experiments, we trained on each dataset for 700,000 iterations, with the specific training duration being one month. CTS-nM indicates that the model did not use multiscale feature supervision signals. CTM-M denotes the use of multiscale feature supervision signals. For a fair comparison, the Meg method incorporates a Fourier filter, while CTM-FM includes the FFTP structure mentioned same with MedSegDiff. Detailed results can be seen in Tab<ref>. Visualization results are shown in Fig<ref>. The results reveal that CTS-nM can surpass most methods, demonstrating that consistency model can not only reduce the number of samplings but also enhance effectiveness. The performance of the CTM-M model, with the addition of multi-scale feature signals, is further improved. CTM-FM also proves that Fourier filtering can further enhance performance. All methods under CTS were tested with a single sampling, averaging 1.9s. This significantly reduces the time compared to other models. Therefore, the CTS model can guarantee model effectiveness while accelerating sampling. 
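Returning to the channel-attention fusion described in the Method section, a minimal squeeze-and-excitation-style sketch of combining one decoder feature map with its supervision signal is shown below; the exact attention design used in CTS may differ.

```python
import torch
import torch.nn as nn

class ChannelAttentionFuse(nn.Module):
    """Fuse a decoder feature map with a same-resolution supervision signal by
    re-weighting channels of the concatenated tensor (SE-style gate, assumed design)."""
    def __init__(self, ch_feat, ch_sig, reduction=4):
        super().__init__()
        ch = ch_feat + ch_sig
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                          # squeeze: global context per channel
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),  # learned per-channel importance
        )
        self.proj = nn.Conv2d(ch, ch_feat, 1)                 # back to the decoder width

    def forward(self, feat, signal):
        x = torch.cat([feat, signal], dim=1)
        return self.proj(x * self.gate(x))                    # excite, then project

fuse = ChannelAttentionFuse(ch_feat=64, ch_sig=16)
feat = torch.rand(2, 64, 32, 32)                              # decoder feature map at one scale
signal = torch.rand(2, 16, 32, 32)                            # supervision signal x_i^d at that scale
print(fuse(feat, signal).shape)                               # torch.Size([2, 64, 32, 32])
```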
In Fig<ref>, the convergence process of the model is shown: the trend of the loss value during training, together with the IoU and Dice metrics on the test set for model parameters saved at different points in training. It can be observed that, as training time increases, the training loss levels off, yet the test results do not show saturation and retain a significant margin for growth. This is likely related to the strong learning and representational capabilities of the consistency model. The findings of this paper are in agreement with the conclusions of the consistency-model studies. Ablation experiment. Fig<ref> compares the convergence speeds of CTS-nM and CTS-M. Since models typically enter a smoothing phase after exceeding 10,000 rounds, only the experimental results of the first 20,000 rounds are compared here. It is evident that the inclusion of multi-scale feature signals significantly accelerates the convergence speed of the model. Tab<ref> compares the ablation results on the two datasets. Here, to observe the effect, each experiment was trained with 500,000 samples. It is evident that incorporating multi-scale signals yields better results, and that FFTP can also enhance performance. Adding either element individually does not make a significant difference in improving the model, possibly because both modify the way supervisory signals are added, differing only in the method. This further indicates that adding better supervisory signals can greatly impact the model's outcomes. Ultimately, the model that incorporated both methods was used to train the overall model. § CONCLUSION AND DISCUSSION This paper establishes, for the first time, a medical image segmentation method based on a consistency model, CTS. It not only yields better results but also significantly reduces prediction time. Moreover, by constructing multi-scale feature supervision signals, the training convergence speed is accelerated. Meanwhile, medical image data processed with anisotropic edge-enhancement filters can achieve improved outcomes. However, there are still some shortcomings. In our experiments, due to limited GPU resources (a single GTX 4090), we did not train for as many as one million rounds as originally planned following the consistency model work<cit.>. Furthermore, we observed that increasing the number of training rounds did not lead to saturation in results, leading us to infer that more training rounds could further enhance the model's performance.
http://arxiv.org/abs/2405.09742v1
20240516005203
Random Scaling and Momentum for Non-smooth Non-convex Optimization
[ "Qinzi Zhang", "Ashok Cutkosky" ]
cs.LG
[ "cs.LG", "math.OC" ]
[ Random Scaling and Momentum for Non-smooth Non-convex Optimization equal* Qinzi Zhangbu Ashok Cutkoskybu buDepartment of Electrical and Computer Engineering, Boston University, Boston, USA Qinzi Zhangqinziz@bu.edu Ashok Cutkoskyashok@cutkosky.com optimization, non-convex, non-smooth, momentum 0.3in ] Training neural networks requires optimizing a loss function that may be highly irregular, and in particular neither convex nor smooth. Popular training algorithms are based on stochastic gradient descent with momentum (SGDM), for which classical analysis applies only if the loss is either convex or smooth. We show that a very small modification to SGDM closes this gap: simply scale the update at each time point by an exponentially distributed random scalar. The resulting algorithm achieves optimal convergence guarantees. Intriguingly, this result is not derived by a specific analysis of SGDM: instead, it falls naturally out of a more general framework for converting online convex optimization algorithms to non-convex optimization algorithms. § INTRODUCTION Non-convex optimization algorithms are one of the fundamental tools in modern machine learning, as training neural network models requires optimizing a non-convex loss function. This paper provides a new theoretical framework for building such algorithms. The simplest application of this framework almost exactly recapitulates the standard algorithm used in practice: stochastic gradient descent with momentum (SGDM). The goal of any optimization algorithm used to train a neural network is to minimize a potentially non-convex objective function. Formally, given F:^d→, the problem is to solve min_∈^d F() = _z[f(,z)], where f is a stochastic estimator of F. In practice, denotes the parameters of a neural network model, and z denotes the data point. Following the majority of the literature, we focus on first-order stochastic optimization algorithms, which can only access to the stochastic gradient ∇ f(,z) as an estimate of the unknown true gradient ∇ F(). We measure the “cost” of an algorithm by counting the number of stochastic gradient evaluations it requires to achieve some desired convergence guarantee. We will frequently refer to this count as the number of “iterations” employed by the algorithm. When the objective function is non-convex, finding a global minimum can be intractable. To navigate this complexity, many prior works have imposed various smoothness assumptions on the objective. These include, but not limited to, first-order smoothness <cit.>, second-order smoothness <cit.>, and mean-square smoothness <cit.>. Instead of finding the global minimum, the smoothness conditions allow us to find an ϵ-stationary point of F such that ∇ F()≤ϵ. The optimal rates for smooth non-convex optimization are now well-understood. When the objective is smooth, stochastic gradient descent (SGD) requires O(ϵ^-4) iterations to find ϵ-stationary point, matching the optimal rate <cit.>. When F is second-order smooth, a variant of SGD augmented with occasional random perturbations achieves the optimal rate O(ϵ^-7/2) <cit.>. Moreover, when F is mean-square smooth, variance-reduction algorithms, such as SPIDER <cit.> and SNVRG <cit.>, achieve the optimal rate O(ϵ^-3) <cit.>. All of these algorithms can be viewed as variants of SGD. In addition to these theoretical optimality results, SGD and its variants are also incredibly effective in practice across a wide variety of deep learning tasks. 
Among these variants, the family of momentum algorithms have become particularly popular <cit.>. Under smoothness conditions, the momentum algorithms also achieve the same optimal rates. However, modern deep learning models frequently incorporate a range of non-smooth architectures, including elements like ReLU, max pooling, and quantization layers. These components result in a non-smooth optimization objective, violating the fundamental assumption of a vast majority of prior works. Non-smooth optimization is fundamentally more difficult than its smooth counterpart, as in the worst-case <cit.> show that it is actually impossible to find a neighborhood around ϵ-stationary. This underscores the need for a tractable convergence criterion in non-smooth non-convex optimization. One line of research in non-smooth non-convex optimization studies weakly-convex objectives <cit.>, with a focus on finding ϵ-stationary points of the Moreau envelope of the objectives. It has been demonstrated that various algorithms, including the proximal subgradient method and SGDM, can achieve the optimal rate of O(ϵ^-4) for finding an ϵ-stationary point of the Moreau envelope. However, it is important to note that the assumption of weak convexity is crucial for the convergence notion involving the Moreau envelope. Our interest lies in solving non-smooth non-convex optimization without relying on the weak convexity assumption. To this end, <cit.> proposed employing Goldstein stationary points <cit.> as a convergence target in non-smooth non-convex (and non-weakly-convex) optimization. This approach has been widely accepted by recent works studying non-smooth optimization <cit.>. Formally, is a (δ,ϵ)-Goldstein stationary point if there exists a random vector such that []=, -≤δ almost surely, and [∇ F()]≤ϵ.[To be consistent with our proposed definition, we choose to present the definition of (δ,ϵ)-Goldstein stationary point involving a random vector . This presentation is equivalent to the original definition proposed by <cit.>.] The best-possible rate for finding a (δ,ϵ)-Goldstein stationary point is O(δ^-1ϵ^-3) iterations. This rate was only recently achieved by <cit.>, who developed an “online-to-non-convex conversion” (O2NC) technique that converts online convex optimization (OCO) algorithms to non-smooth non-convex stochastic optimization algorithms. Building on this background, we will relax the definition of stationarity and extend this O2NC technique to eventually develop a convergence analysis of SGDM in the non-smooth and non-convex setting. §.§ Our Contribution In this paper, we introduce a new notion of stationarity for non-smooth non-convex objectives. Our notion is a natural relaxation of the Goldstein stationary point, but will allow for more flexible algorithm design. Intuitively, the problem with the Goldstein stationary point is that to verify that a point is a stationary point, one must evaluate the gradient many times inside a ball of some small radius δ about . This means that algorithms finding such points usually make fairly conservative updates to sufficiently explore this ball: in essence, they work by verifying each iterate is not close to a stationary point before moving on to the next iterate. Algorithms used in practice do not typically behave this way, and our relaxed definition will not require us to employ such behavior. Using our new criterion, we propose a general framework, “”, that converts OCO algorithms to non-smooth optimization algorithms. 
This framework is an extension of the O2NC technique of <cit.> that distinguishes itself through two significant improvements. Firstly, the original O2NC method requires the OCO algorithm to constrain all of its iterates to a small ball of radius roughly δϵ^2. This approach is designed to ensure that the parameters within any period of ϵ^-2 iterations remain inside a ball of radius δ. The algorithm then uses these ϵ^-2 gradient evaluations inside a ball of radius δ to check if the current iterate is a stationary point (i.e., if the average gradient has norm less than ϵ). Our new criterion, however, obviates the need for such explicit constraints, intuitively allowing our algorithms to make larger updates when far from a stationary point. Secondly, O2NC does not evaluate gradients at the actual iterates. Instead, gradients are evaluated at an intermediate variable _n lying between the two iterates _n and _n+1. This conflicts with essentially all practical algorithms, and moreover imposes an extra memory burden. In contrast, our algorithm evaluates gradients exactly at each iterate, which simplifies implementation and improves space complexity. Armed with this improved framework, we proposed an unconstrained variant of online gradient descent, which is derived from the family of online mirror descent with composite loss. When applied within this algorithm, our framework produces an algorithm that is exactly equal to stochastic gradient descent with momentum (SGDM), subject to an additional random scaling on the update. Notably, it also achieves the optimal rate under our new criterion. To summarize, this paper has the following contributions: * We introduce a relaxed convergence criterion for non-smooth optimization that recovers all useful properties of Goldstein stationary point. * We propose a modified online-to-non-convex conversion framework that does not require intermediate states. * We apply our new conversion to the most standard OCO algorithm: “online gradient descent”. The resulting method achieves optimal convergence guarantees as is almost exactly the same as the standard SGDM algorithm. The only difference is that the updates of SGDM are now scaled by an exponential random variable. This is especially remarkable because the machinery that we employ does not particularly resemble SGDM until it is finally all put together. § PRELIMINARIES Notations Bold font denotes a vector in ^d and denotes its Euclidean norm. We define B_d(,r)={∈^d:-≤ r} and sometimes drop the subscript d when the context is clear. We use [n] as an abbreviation for {1,2,…,n}. We adopt the standard big-O notation, and f≲ g denotes f=O(g). (S) denotes the set of all distributions over a measurable set S. Stochastic Optimization Given a function F:^d→, F is G-Lipschitz if |F()-F()| ≤ G-, ∀,. Equivalently, when F is differentiable, F is G-Lipschitz if ∇ F()≤ G, ∀. F is H-smooth if F is differentiable and ∇ F is H-Lipschitz; F is ρ-second-order-smooth if F is twice differentiable and ∇^2 F is ρ-Lipschitz. We assume that our objective function F:^d→ is differentiable and G Lipschitz, and given an initial point _0, F(_0)-inf F() ≤ F^* for some known F^*. We also assume the stochastic gradient satisfies [∇ f(,z) | ]=∇ F(), ∇ F()-∇ f(,z)^2 ≤σ^2 for all ,z. Finally, we assume that F is well-behaved in the sense of <cit.>: for any points and , F()-F() = ∫_0^1 ⟨∇ F(+t(-)), -x⟩ dt. 
Online Learning An online convex optimization (OCO) algorithm is an iterative algorithm that uses the following procedure: in each iteration n, the algorithm plays an action Δ_n and then receives a convex loss function ℓ_n The goal is to minimize the regret w.r.t. some comparator $̆, defined as _n()̆ := _t=1^n ℓ_t(Δ_t) - ℓ_t()̆. The most basic OCO algorithm is online gradient descent:Δ_n+1 = Δ_n - η∇ℓ_n(Δ_n), which guarantees_N()̆ =O(√(N))for appropriateη. Notably, in OCO we make no assumptions about the dynamics ofℓ_n. They need not be stochastic, and could even be adversarially generated. We will be making use of algorithms that obtain anytime regret bounds. That is, for allnand any sequence of_̆1,_̆2,…, it is possible to bound_n(_̆n)by some appropriate quantities (that may be function ofn). This is no great burden: almost all online convex optimization algorithms have anytime regret bounds. For readers interested in more details, please refer to <cit.>. §.§ Non-smooth Optimization SupposeFis differentiable.is anϵ-stationary point ofFif∇F()≤ϵ. This definition is the standard criterion for smooth non-convex optimization. For non-smooth non-convex optimization, the standard criterion is the following:is an(δ,ϵ)-Goldstein stationary point ofFif there existsS⊂^dandP∈(S)such that∼Psatisfies[]=,-≤δalmost surely, and[∇F()]≤ϵ.[The original definition of (δ,ϵ) Goldstein stationary point proposed by <cit.> does not require the condition []=. However, as shown in <cit.>, this condition allows us to reduce a Goldstein stationary point to an ϵ-stationary point when the loss is second-order smooth. Hence we also keep this condition.] Next, we formally define(c,ϵ)-stationary point, our proposed new criterion for non-smooth optimization. Suppose F:^d→ is differentiable, is a (c,ϵ)-stationary point of F if ∇ F()_c ≤ϵ, where ∇ F()_c = inf_S⊂^d ∼ P∈(S) []=[∇ F()] + c ·-^2. In other words, ifis a(c,ϵ)-stationary point, then there existsS⊂^d, P∈(S)such that∼Psatisfies[]=,-^2 ≤ϵ/c, and[∇F()]≤ϵ. To see how this definition is related to the previous(ϵ,δ)-Goldstein stationary point definition, consider the case whenc=ϵ/δ^2. Then this new definition of(c,ϵ)-stationary point is almost identical to(δ,ϵ)-Goldstein stationary point, except that it relaxes the constraint from-≤δto-^2≤δ^2. To further motivate this definition, we demonstrate that(c,ϵ)-stationary point retains desirable properties of Goldstein stationary points. Firstly, the following result shows that, similar to Goldstein stationary points,(c,ϵ)-stationary points can also be reduced to first-order stationary points with proper choices ofcwhen the objective is smooth or second-order smooth. lemmaCriterionReduction Suppose F is H-smooth. If ∇ F()_c≤ϵ where c=H^2ϵ^-1, then ∇ F()≤ 2ϵ. Suppose F is ρ-second-order-smooth. If ∇ F()_c≤ϵ where c=ρ/2, then ∇ F()≤ 2ϵ. As an immediate implication, suppose an algorithm achievesO(c^1/2ϵ^-7/2)rate for finding a(c,ϵ)-stationary point. Then Lemma <ref> implies that, withc=O(ϵ^-1), the algorithm automatically achieves the optimal rate ofO(ϵ^-4)for smooth objectives <cit.>. Similarly, withc=O(1), it achieves the optimal rate ofO(ϵ^-7/2)for second-order smooth objectives <cit.>. Secondly, we show in the following lemma that(c,ϵ)-stationary points can also be reduced to Goldstein stationary points when the objective is Lipschitz. lemmaGoldsteinReduction Suppose F is G-Lipschitz. For any c,ϵ,δ>0, a (c,ϵ)-stationary point is also a (δ,ϵ')-Goldstein stationary point where ϵ' = (1+2G/cδ^2)ϵ. 
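To make the online-learning preliminaries above concrete, the following small sketch runs online gradient descent on linear losses ℓ_t(Δ) = ⟨g_t, Δ⟩ and tracks the regret against a fixed comparator; the loss sequence, step size, and comparator are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, eta = 5, 1000, 0.01
g = rng.normal(size=(N, d)) + 0.3            # linear loss gradients (could be adversarial)
u = -np.ones(d) / np.sqrt(d)                 # a fixed comparator point

delta = np.zeros(d)
losses, comp_losses = [], []
for t in range(N):
    losses.append(g[t] @ delta)              # suffer ℓ_t(Δ_t) = ⟨g_t, Δ_t⟩
    comp_losses.append(g[t] @ u)             # comparator's loss ℓ_t(u)
    delta = delta - eta * g[t]               # online gradient descent step

regret = sum(losses) - sum(comp_losses)      # R_N(u) = Σ ℓ_t(Δ_t) − Σ ℓ_t(u)
print(f"Regret_N(u) = {regret:.1f}")         # bounded by O(sqrt(N)) for a suitable eta
```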
§.§ Online-to-non-convex Conversion Since our algorithm is an extension of the online-to-non-convex conversion (O2NC) technique proposed by <cit.>, we briefly review the original O2NC algorithm. The pseudocode is outlined in Algorithm <ref>, with minor adjustments in notations for consistency with our presentation. At its essence, O2NC shifts the challenge of optimizing a non-convex and non-smooth objective into minimizing regret. The intuition is as follows. By adding a uniform perturbations_n∈[0,1],⟨∇f(_n-1+s_nΔ_n,z_n), Δ_n⟩= ⟨_n,Δ_n⟩is an unbiased estimator ofF(_n)-F(_n-1), effectively capturing the “training progress”. Consequently, by minimizing the regret, which is equivalent to minimizing∑_n=1^N⟨_n,Δ_n⟩, the algorithm automatically identifies the most effective update stepΔ_n. §.§ Paper Organization In Section <ref>, we present the general online-to-non-convex framework, . We first explain the intuitions behind the algorithm design, and then we provide the convergence analysis in Section <ref>. In Section <ref>, we provide an explicit instantiation of our framework, and see that the resulting algorithm is essentially the standard SGDM. In Section <ref>, we present a lower bound for finding(c,ϵ)-stationary point. In Section <ref>, we present empirical evaluations. § EXPONENTIATED ONLINE-TO-NON-CONVEX In this section, we present our improved online-to-non-convex framework, , and explain the key techniques we employed to improve the algorithm. The pseudocode is presented in Algorithm <ref>. Random Scaling One notable feature of Algorithm <ref> is that the updateΔ_nis scaled by an exponential random variables_n. Formally, we have the following result: lemmaExponentialScaling Let s∼Exp(λ) for some λ>0, then _s[F(+sΔ) - F()] = _s[⟨∇ F(+sΔ), Δ⟩] / λ. In Algorihtm <ref>, we sets_n∼Exp(1)and then define_n = _n-1 + s_nΔ_n. Thus, Lemma <ref> implies that [F(_n) - F(_n-1)] = ⟨∇ F(_n), Δ_n⟩ = ⟨∇ F(_n), _n-_n-1⟩. In other words, we can estimate the “training progress”F(_n)-F(_n-1)by directly computing the stochastic gradient at iterate_n. By exploiting favorable properties of the exponential distribution, we dispense with the need for the “auxiliary point”_n employed by O2NC. We'd like to highlight the significance of this result. The vast majority of smooth non-convex optimization analysis depends on the assumption thatF()is locally linear, namelyF(_n)-F(_n-1) ≈⟨∇F(_n), _n-_n-1⟩. Under various smoothness assumptions, the error in this approximation can be controlled via bounds on the remainder in a Taylor series. For example, whenFis smooth, thenF(_n) - F(_n-1) = ⟨F(_n),_n-_n-1⟩+ O(_n-_n-1^2). However, since smoothness is necessary for bounding Taylor approximation error, such analysis technique is inapplicable in non-smooth optimization. In contrast, by scaling an exponential random variable to the update, we directly establish a linear equation that[F(_n)-F(_n-1)]=⟨∇F(_n),_n-_n-1⟩, effectively eliminating any additional error that Taylor approximation might incur. A randomized approach such as ours is also recommended in the recent findings by <cit.>, who demonstrated that randomization is necessary for achieving a dimension-free rate in non-smooth optimization. In particular, any deterministic algorithm suffers an additional dimension dependence ofΩ(d). Although employing exponential random scaling might seem unconventional, we justify this scaling by noting thats_n∼(1)satisfies[s_n]=1and{s_n ≥t} = exp(-t)(in particular,{s_n ≤5} ≥0.99). 
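Both claims are easy to check numerically, even for a non-smooth objective: the sketch below compares the two sides of the identity above for F(x) = |x| by Monte Carlo and estimates P{s ≤ 5} (a sanity check only, not part of the paper's analysis).

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0                                    # s ~ Exp(λ); the algorithm uses λ = 1
x, delta = -0.3, 1.0                         # 1-D example; F is non-smooth at 0
F = lambda z: abs(z)
gradF = lambda z: np.sign(z)                 # ∇F exists almost everywhere

s = rng.exponential(scale=1.0 / lam, size=2_000_000)
lhs = np.mean(F(x + s * delta) - F(x))                # E_s[F(x + sΔ) − F(x)]
rhs = np.mean(gradF(x + s * delta) * delta) / lam     # E_s[⟨∇F(x + sΔ), Δ⟩] / λ
print(f"lhs = {lhs:.4f}, rhs = {rhs:.4f}")            # agree up to Monte Carlo error
print("P(s <= 5) ≈", np.mean(s <= 5.0))               # ≈ 0.993, i.e., 1 − e^{−5}
```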
In other words, with high probability, the scaling factor behaves like a constant scaling on the update. To corroborate the efficacy of random scaling, we have conducted a series of empirical tests, the details of which are discussed in Section <ref>. Exponentiated and Regularized Losses The most significant feature of Exponentiated O2NC (and from which it derives its name) is the loss function:ℓ_n(Δ) = ⟨β^-n_n, Δ⟩+ _n(Δ). This loss consists of two parts: intuitively, the exponentially upweighted linear loss⟨β^-n_n,Δ⟩measures the “training progress”F(_n)-F(_n-1)(as discussed in Lemma <ref>), and_n(Δ)serves as an stabilizer that prevents the iterates from irregular behaviors. We will elaborate the role of each component later. To illustrate how Exponential O2NC works, let_̆nbe the optimal choice ofΔ_nin hindsight. Then by minimizing the regret_n(_̆n)w.r.t._̆n, Algorithm <ref> automatically chooses the best possible updateΔ_nthat is closest to_̆n. Exponentially Weighted Gradients For now, set aside the regularizer_nand focus on the linear term⟨β^-n_n,Δ⟩. To provide an intuition why we upweight the gradient by an exponential factorβ^-n, we provide a brief overview for the convergence analysis of our algorithm. For simplicity of illustration, we assume_n = ∇F(_n)and_n=0. LetS_n={_t}_t=1^nand let_nbe distributed overS_nsuch that{_n=_t} = p_n,t := β^n-t·1-β/1-β^n. Our strategy will be to show that this setS_nand random variable_nsatisfy the conditions to make_na(c,ϵ)stationary point where_nis defined in Algorithm <ref>. To start, note that this distribution satisfies_n=[_n]. Next, since there is always non-zero probability that_n=_1, it's not possible to obtain a deterministic bound of_n-_n≤δfor some smallδ(as would be required if we were trying to establish(δ,ϵ)Goldstein stationarity). However, since_nis exponentially more likely to be a later iterate (close to_n) than an early iterate (far from_n), intuitively_n-_n^2should not be too big. Formalizing this intuition forms a substantial part of our analysis. In the convergence analysis, we will showis a(c,ϵ)-stationary point by bounding∇F(_n)_c(defined in Definition <ref>) for alln. The condition[_n]=_nis already satisfied by construction of_n, and it remains to bound the expected gradient[∇F(_n)]and the variance_n-_n^2. While the regularizer_nis imposed to control the variance, the exponentiated gradients is employed to bound the expected gradient. In particular, this is achieved by reducing the difficult task of minimizing the expected gradient of a non-smooth non-convex objective to a relatively easier (and very heavily studied) one: minimizing the regret w.r.t. exponentiated lossesℓ_t(Δ) = ⟨β^-t_t,Δ⟩. To elaborate further, let's consider a simplified illustration as follows. Recall thatp_n,t = β^n-t·1-β/1-β^n. By construction of_n, [∇ F(_n)] = _t=1^n p_n,t∇ F(_t). Next, for eachn∈[N], we define _̆n = -D∑_t=1^n p_n,t∇ F(_t)/∑_t=1^n p_n,t∇ F(_t) for someDto be specified later. As a remark,_̆nminimizes⟨[∇F(_n)], Δ⟩over all possibleΔsuch thatΔ=D, therefore representing the optimal update in iteratenthat leads to the fastest convergence. With_̆ndefined in (<ref>), it follows that 1/D∑_t=1^n p_n,t⟨∇ F(_t), -_̆n⟩ = ∑_t=1^n p_n,t∇ F(_t) = [∇ F(_n)]. Recall that we assume_t=∇F(_t)for simplicity. Moreover, in later convergence analysis, we will carefully show that∑_n=1^N∑_t=1^n p_n,t ⟨∇F(_t),-Δ_t⟩≲1-β(see Appendix <ref>). 
Consequently, 1/N∑_n=1^N ∇ F(_n) = 1/DN∑_n=1^N ∑_t=1^n p_n,t⟨∇ F(_t), Δ_t-_̆n⟩ - 1/DN∑_n=1^N ∑_t=1^n p_n,t⟨∇ F(_t),Δ_t⟩ ≲1-β/DN(1+∑_n=1^N β^n _n(_̆n)). Here_n(_̆n) = ∑_t=1^n ⟨β^-t_t,Δ_t-_̆n⟩denotes the regret w.r.t. the exponentiated lossesℓ_t(Δ) = ⟨β^-t_t,Δ⟩fort=1,…,n(assuming_n=0) and comparator_̆ndefined in (<ref>). Notably, the expected gradient is now bounded by the weighted average of the sequence of static regrets,_n(_̆n). Consequently, a good OCO algorithm that effectively minimizes the regret is closely aligned with our goal of minimizing the expected gradient. Variance Regularization As aforementioned, we impose the regularizer_n(Δ) = μ_n/2Δ^2to control the variance_n-_n^2. Formally, the following result establishes a reduction from bounding variance to bounding the norm ofΔ_t, thus motivating the choice of the regularizer. lemmaVarianceToRegularizer For any β∈(0,1), _s∑_n=1^N __n_n-_n^2 ≤∑_n=1^N 12/(1-β)^2Δ_n^2. This suggests that boundingΔ_n^2is sufficient to bound the variance of_n. Therefore, we impose the regularizer_n(Δ) = μ_n/2Δ^2, for some constantμ_nto be determined later, to ensure thatΔ_n^2remains small, effectively controlling the variance of_n. Furthermore, we'd like to highlight that Lemma <ref> provides a strictly better bound on the variance of_ncompared to the possible maximum deviationmax_n-_n. For illustration, assumeΔ_t's are orthonormal, thenmax_N-_N ≈_1-_N = O(N). On the other hand, Lemma <ref> implies that forn∼([N]),_n[(_n)] = O(1/(1-β)^2). In particular, we will show that1-β= N^-1/2when the objective is smooth. Consequently,_n-_n = O(√(N)), which is strictly tighter than the deterministic bound ofmax_N-_N = O(N). This further motivates why we choose this specific distribution for_n: the algorithm does not need to be conservative all the time and can occasionally make relatively large step, breaking the deterministic constraint that_n-_n≤δ, while still satisfying(_n)≤δ^2. §.§ Convergence Analysis Now we present the main convergence theorem of Algorithm <ref>. This is a very general theorem, and we will prove the convergence bound of a more specific algorithm (Theorem <ref>) based on this result. A more formally stated version of this theorem and its proof can be found in Appendix <ref>. Follow Assumption <ref>. Let _n(_̆n) denote the regret w.r.t. ℓ_t(Δ)=⟨β^-t_t,Δ⟩ + _t(Δ) for t=1,…,n and comparator _̆n defined in (<ref>). Define _t(Δ) = μ_t/2Δ^2, μ_t = 24cD/α^2β^-t, and α=1-β, then ∇ F()_c ≲F^*/DN + G+σ/α N + σ√(α) + c D^2/α^2 + β^N+1_N(_̆N) + α∑_n=1^N β^n _n(_̆n)/DN. Here the second line denotes the weighted average of the sequence of static regrets,_n(_n), w.r.t. the exponentiated and regularized lossℓ_t(Δ) = ⟨β^-t_t,Δ⟩and comparator_̆ndefined in (<ref>), as we discussed earlier. To see an immediate implication of Theorem <ref>, assume the average regret is no larger than the terms in the first line. Then by proper tuningD=1/√(α) Nandα=max{1/N^2/3, c^2/7/N^4/7}, we have∇F()_c ≲1/N^1/3 + c^1/7/N^2/7. § RECOVERING SGDM: WITH OMD In the previous sections, we have shown that Exponentiated O2NC can convert any OCO algorithm into a non-convex optimization algorithm in such a way that small regret bounds transform into convergence guarantees. So, the natural next step is to instantiate with some particular OCO algorithm. In this section we carry out this task and discover that the resulting method not only achieves optimal convergence guarantees, but is also nearly identical to the standard SGDM optimization algorithm! 
The OCO algorithm we will use to instantiate Exponentiated O2NC is a simple variant of “online mirror descent” (OMD) <cit.>, which a standard OCO algorithm. However, typical OMD analysis involves clipping the outputsΔ_nto lie in some pre-specified constraint set. We instead employ a minor modification to the standard algorithm to obviate the need for such clipping. Inspired by <cit.>, we choose our OCO algorithm from the family of Online Mirror Descent (OMD) with composite loss. Given a sequence of gradients_t:=β^-t _tand convex functionsψ_t(Δ), _t(Δ), ϕ_t(Δ), OMD with composite loss definesΔ_t+1as: _Δ⟨_t, Δ⟩ + D_ψ_t(Δ,Δ_t) + _t+1(Δ) + ϕ_t(Δ)_composite loss. HereD_ψ_tdenotes the Bregman divergence ofψ_t, and_t+1(Δ)+ϕ_t(Δ)denotes the composite loss. The composite loss consists of two components. Firstly,_t+1(Δ)=μ_t+1/2Δ^2controls the variance of_n, as discussed in Section <ref>. Secondly, OMD is known to struggle under unconstrained domain setting <cit.>, but this can be fixed with proper regularization, as done in <cit.> (implicitly), and <cit.> (explicitly). Following a similar approach, we setϕ_t(Δ)=(1/η_t+1-1/η_t)Δ^2to prevent the norm ofΔfrom being too large. Withψ_t(Δ) = 1/2η_tΔ^2where0<η_t+1≤η_t, Theorem <ref> provides a regret bound for this specific OCO algorithm. Let Δ_1=0 and Δ_t+1 = _Δ⟨_t,Δ⟩ + 1/2η_tΔ-Δ_t^2 + μ_t+1/2Δ^2 + (1/η_t+1-1/η_t)Δ^2. Then ∑_t=1^n ⟨_t, Δ_t-⟩̆+ _t(Δ_t) - _t()̆ ≤(2/η_n+1 + μ_n+1/2)^2 + 1/2∑_t=1^n η_t_t^2. Note that the implicit OMD update described in Theorem <ref> can be explicitly represented as follows: Δ_t+1 = Δ_t-η_t_t/1+η_tμ_t+1+η_t(1/η_t+1-1/η_t). When_t=0(implyingμ_t=0), the update formula in (<ref>) simplifies to an approximation of online gradient descent<cit.>, albeit with an additional scaling. §.§ Reduction of Upon substituting_t = β^-t_twhere_t=∇f(_t,z_t), Theorem <ref> provides a regret bound for_n(_̆n)in the convergence bound in Theorem <ref>. Consequently, we can bound∇F()_cfor with the unconstrained variant of OMD as the OCO subroutine (with update formula described in (<ref>)). Formally, we have the following result: theoremSGDM Follow Assumption <ref> and consider any c>0. Let Δ_1=0 and update Δ_t by Δ_t+1 = Δ_t - η_tβ^-t_t/1+η_tμ_t+1+η_t(1/η_t+1-1/η_t). Let μ_t=β^-tμ, η_t=β^tη, β=1-α, μ=24F^*c/(G+σ)α^5/2N, η=2F^*/(G+σ)^2N, α=max{N^-2/3, (F^*)^4/7c^2/7/(G+σ)^6/7N^4/7}. Then for N large enough such that α≤1/2, ∇ F()_c ≲G+σ/N^1/3 + (F^*)^2/7(G+σ)^4/7c^1/7/N^2/7. As an immediate implication, upon solving∇F()_c ≤ϵforN, we conclude that Algorithm <ref> instantiated with unconstrained OGD finds(c,ϵ)-stationary point withinN=O(max{(G+σ)^3ϵ^-3, F^*(G+σ)^2c^1/2ϵ^-7/2})iterations. Moreover, in Section <ref> we will show that this rate is optimal. Furthermore, as discussed in Section <ref>, withc=O(ϵ^-1), this algorithm achieves the optimal rate ofO(ϵ^-4)whenFis smooth; withc=O(1), this algorithm also achieves the optimal rate ofO(ϵ^-7/2)whenFis second-order smooth. Remarkably, these optimal rates automatically follows from the reduction from(c,ϵ)-stationary point toϵ-stationary point (see Lemma <ref>), and neither the algorithm nor the analysis is modified to achieve these rates. §.§ Unraveling the update to discover SGDM Furthermore, upon substituting the definition ofη_t,μ_t(and neglecting constantsG,σ,F^*), the update in Theorem <ref> can be rewritten as Δ_t+1 = Δ_t - η_t/1 + 1/β(ημ + α) LetΔ_t = -βη/ημ+α_t, then we can rewrite the update of with OGD as follows: _t+1 = β/1+ημ_t + α + ημ/1+ημ_t, _t+1 = _t - s_n+1·βη/ημ+α_t+1. 
Remarkably, this update formula recovers the standard SGDM update, with the slight modification of an additional exponential random variables_n+1: letβ̃= β/1+ημ, which denotes the effective momentum constant, and letη̃= βη/ημ+αbe the effective learning rate, then (<ref>) becomes _t+1 = β̃_t + (1-β̃)_t, _t+1 = _t - s_t+1·η̃_t+1. Smooth case As discussed earlier, whenFis smooth, we setc=O(ϵ^-1)to recover the optimal rateN=O(ϵ^-4). This impliesc=O(N^1/4). Consequently, we can check the parameters defined in Theorem <ref> have orderα= O(N^-1/2),η= O(N^-1), andμ=O(N^1/2)(note thatημ≈α). Therefore, the effective momentum constant is roughlyβ̃≈1-1/√(N), and the effective learning rate is roughlyη̃≈1/√(N). Interestingly, these values align with prior works <cit.>. Second-order smooth case WhenFis second-order smooth and we setc=O(1), we can check thatα= O(N^-4/7),η=O(N^-1), andμ=O(N^3/7)(againημ≈α). Consequently, the effective momentum should be set toβ̃≈1-1/N^4/7and the effective learning rate should beη̃≈1/N^3/7. It is interesting to note that in both smooth and second-order smooth cases,(1-β̃)η̃≈1/N. § LOWER BOUNDS FOR FINDING (C,Ε)-STATIONARY POINTS In this section we leverage Lemma <ref> to build a lower bound for finding(c,ϵ)-stationary points. Inuitively, Lemma <ref> suggests thatO(c^1/2ϵ^-7/2)is the optimal rate for finding(c,ϵ)-stationary point. We can indeed prove its optimality using the lower bound construction by <cit.> and <cit.>. Specifically, <cit.> proved the following result: For any constantsH,F^*,σ,ϵ, there exists objectiveFand stochastic gradient estimator∇fsuch that (i)FisH-smooth,F(_0)-infF() ≤F^*, and∇F() - ∇f(,z)^2 ≤σ^2; and (ii) any randomized algorithm using∇frequiresO(F^*σ^2Hϵ^-4)iterations to find anϵ-stationary point ofF. As a caveat, such construction does not ensure thatFis Lipschitz. Fortunately, <cit.> extended the lower bound construction so that the same lower holds andFis in addition√(F^*H)-Lipschitz. Consequently, for anyF^*,G,c,ϵ, defineH=√(cϵ)andσ=Gand assume√(F^*H)≤G. Then by the lower bound construction, there existsFandsuch thatFisH-smooth,G-Lipschitz,F(_0)-infF() ≤F^*, and∇F()-∇f(,z)^2 ≤G^2. Lipschitzness and variance bound together imply∇f(,z)^2 ≤2G^2. Moreover, finding anϵ-stationary ofFrequiresΩ(F^*σ^2Hϵ^-4) = Ω(F^*G^2c^1/2ϵ^-7/2)iterations (sinceσ=G,H=√(cϵ)). Finally, note thatH=√(cϵ)satisfiesc=H^2ϵ^-1. Therefore by Lemma <ref>, a(c, ϵ)-stationary point ofFis also anϵ-stationary ofF, implying that finding(c,ϵ)-stationary requires at leastΩ(F^*G^2c^1/2ϵ^-7/2)iterations as well. Putting these together, we have the following result: For any F^*, c, ϵ and G≥√(F^*)(cϵ)^1/4, there exists objective F and stochastic gradient ∇ f such that (i) F is G-Lipschitz, F(_0)-inf F() ≤ F^*, and ∇ f(,z)^2 ≤ 2G^2; and (ii) any randomized algorithm using ∇ f requires Ω(F^*G^2c^1/2ϵ^-7/2) iterations to find (c,ϵ)-stationary point of F. § EXPERIMENTS In the preceding sections, we theoretically demonstrated that scaling the learning rate by an exponential random variables_nallows SGDM to satisfy convergence guarantees for non-smooth non-convex optimization. To validate this finding empirically, we implemented the SGDM algorithm with random scaling and assessed its performance against the standard SGDM optimizer without random scaling. Our evaluation involved the ResNet-18 model <cit.> on the CIFAR-10 image classification benchmark <cit.>. For the hyperparameters, we configured the learning rate at0.01, the momentum constant at0.9, and the weight decay at5 ×10^-4. 
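Concretely, the modified optimizer can be sketched as a thin wrapper around standard SGDM that rescales each applied step by a fresh Exp(1) draw. This is an illustrative implementation, not the authors' released code, and the class name is ours.

```python
import torch
from torch.optim import SGD

class RandomScaledSGDM(SGD):
    """SGD with momentum whose applied update is scaled by s ~ Exp(1) at every step."""
    @torch.no_grad()
    def step(self, closure=None):
        # Record parameters, take a standard SGDM step, then rescale that step:
        #   x_{t+1} = x_t - s_{t+1} * (standard SGDM update).
        prev = [p.detach().clone() for g in self.param_groups for p in g["params"]]
        loss = super().step(closure)
        s = torch.empty(1).exponential_(1.0).item()    # s ~ Exp(1), E[s] = 1
        params = [p for g in self.param_groups for p in g["params"]]
        for p, p0 in zip(params, prev):
            p.copy_(p0 + s * (p - p0))                 # rescale the applied update
        return loss

# Usage with the hyperparameters reported above:
model = torch.nn.Linear(10, 2)
opt = RandomScaledSGDM(model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss = torch.nn.functional.cross_entropy(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```

With s fixed to 1 the wrapper reduces to standard SGDM, which is why the same hyperparameters can be reused for both optimizers.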
These settings are optimized for training the ResNet model on the CIFAR-10 dataset using SGDM. We use the same hyperparameters for our modified SGDM with random scaling. For each optimizer, we ran the experiment three times under the same setting to minimize variability. We recorded the train loss, train accuracy, test loss, and test accuracy (refer to Figure <ref>). We also recorded the performance of the best iterate, e.g., the lowest train/test loss and the highest train/test accuracy, in each trial (see Table <ref>). These results show that the performance of SGDM with random scaling aligns closely with that of standard SGDM. § CONCLUSION We introduced(c,ϵ)-stationary point, a relaxed definition of Goldstein stationary point, as a new notion of convergence criterion in non-smooth non-convex stochastic optimization. Furthermore, we proposed Exponentiated O2NC, a modified online-to-non-convex framework, by setting exponential random variable as scaling factor and adopting exponentiated and regularized loss. When applied with unconstrained online gradient descent, this framework produces an algorithm that recovers standard SGDM with random scaling and finds(c,ϵ)-stationary point withinO(c^1/2ϵ^-7/2)iterations. Notably, the algorithm automatically achieves the optimal rate ofO(ϵ^-4)for smooth objectives andO(ϵ^-7/2)for second-order smooth objectives. One interesting open problem is designing an adaptive algorithm with our Exponentiated O2NC framework. Since our framework, when applied with the simplest OCO algorithm online gradient descent, yields SGDM, a natural question emerges: what if we replace online gradient descent with an adaptive online learning algorithm, such as AdaGrad? Ideally, applied with AdaGrad as the OCO subroutine and with proper tuning, Exponentiated O2NC could recover Adam's update mechanism. However, the convergence analysis for this scenario is complex and demands a nuanced approach, especially considering the intricacies associated with the adaptive learning rate. In this vein, concurrent work by <cit.> applies a similar concept of online-to-non-convex conversion and connects the Adam algorithm to a principled online learning family known as Follow-The-Regularized-Leader (FTRL). icml2024 § PROOFS IN SECTION <REF> §.§ Proof of Lemma <ref> * Suppose ∇ F()_c ≤ϵ, then there exists P∈(S), y∼ P such that []=, ∇ F()≤ϵ and c-^2 ≤ϵ. Assume F is H-smooth. By Jensen's inequality, -≤√(ϵ/c) = ϵ/H with c=H^2ϵ^-1. Consequently, ∇ F() ≤∇ F() + [∇ F() - ∇ F()] ≤∇ F() + ∇ F() - ∇ F()Jensen's inequality ≤∇ F() + H-smoothness ≤ϵ + H ·ϵ/H = 2ϵ. Next, assume F is ρ-second-order smooth. By Taylor approximation, there exists some such that ∇ F() = ∇ F() + ∇^2F()(-) + 1/2(-)^T∇^3F()(-). Note that [∇^2F()(-)]=∇^2F()[-]=0. Consequently, ∇ F() ≤∇ F() + [∇ F()-∇ F()] ≤∇ F() + 12(-)^T∇^3F()(-)Jensen's inequality ≤∇ F() + ρ2-^2 second-order-smooth ≤ϵ + ρ2·ϵ/c = 2ϵ. c=ρ/2 Together these prove the reduction from a (c,ϵ)-stationary point to an ϵ-stationary point. §.§ Proof of Lemma <ref> * By definition of (c,ϵ)-stationary, there exists some distribution of such that []=, σ^2:=-^2 ≤ϵ/c, and ∇ F()≤ϵ. By Chebyshev’s inequality, {-≥δ} = {-[]≥δ/σ·σ} ≤{-[]≥δ/√(ϵ/c)·σ}≤ϵ/cδ^2. Next, we can construct a clipped random vector of such that = if -<δ, -≤δ almost surely, and []=. In particular, note that {}≤{-≥δ}≤ϵ/cδ^2. Since F is G-Lipschitz, [∇ F() - ∇ F()] = {}[∇ F()-∇ F() | ] ≤ 2G·{}≤ 2G ·ϵ/cδ^2. Consequently [∇ F()]≤[∇ F()] + [∇ F() - ∇ F()]≤ϵ +2Gϵ/cδ^2. 
This proves that x is also a (δ, ϵ + 2Gϵ/cδ^2)-Goldstein stationary point. § PROOFS IN SECTION <REF> §.§ Proof of Lemma <ref> The proof consists of two composite lemmas. Recall the following notations:S_n = {_t}_t∈[n],_n∼P_nwhereP_n(_t) = β^n-t·1-β/1-β^n, and_n = ∑_t=1^n β^n-t_t·1-β/1-β^n. Also note two useful change of summation identities: ∑_n=1^N∑_t=1^n = ∑_1≤ t≤ n≤ N = ∑_t=1^N∑_n=t^N, ∑_i=1^n∑_i'=1^i-1∑_t=i'+1^i = ∑_1≤ i'<t≤ i≤ n = ∑_t=1^n∑_i=t^n∑_i'=1^t-1. __n,s_n-_n^2 ≤∑_t=1^n λ_n,tΔ_t^2, where λ_n,t = 4∑_i=t^n∑_i'=1^t-1 p_n,ip_n,i'(i-i'), p_n,i = P_n(_i) = β^n-i·1-β/1-β^n. By distribution of _n, we have __n_n - _n^2 = ∑_i=1^n p_n,i_i - _n^2 = ∑_i=1^n p_n,i∑_i'=1^n p_n,i'(_i-_i') ^2 ≤∑_i=1^n∑_i'=1^n p_n,ip_n,i'_i-_i'^2 = 2∑_i=1^n∑_i'=1^i-1 p_n,ip_n,i'_i-_i'^2. The inequality uses convexity of ·^2. Next, upon unrolling the recursive update _t = _t-1 + _tΔ_t, _i-_i'^2 = ∑_t=i'+1^i s_tΔ_t^2 ≤ (i-i') ∑_t=i'+1^i s_t^2Δ_t^2. Note that s_t and Δ_t are independent and s_t∼(1), so _s[s_t^2Δ_t^2] = _s[s_t^2] Δ_t^2 = 2Δ_t^p. Consequently, upon substituting this back and applying change of summation, we have __n,s_n-_n^2 ≤ 4∑_i=1^n∑_i'=1^i-1∑_t=i'+1^i p_n,ip_n,i'(i-i') Δ_t^2 = ∑_t=1^n (4∑_i=t^n∑_i'=1^t-1p_n,ip_n,i'(i-i')) Δ_t^2. We then conclude the proof by substituting the definition of λ_n,t. Define λ_n,t as in (<ref>), then ∑_n=t^N λ_n,t≤12/(1-β)^2. In the first part of the proof, we find a good upper bound of λ_n,t. We can rearrange the definition of λ_n,t as follows. λ_n,t = 4(1-β/1-β^n)^2 ∑_i=t^n∑_i'=1^t-1β^n-iβ^n-i'(i-i') let j=i-i' = 4(1-β/1-β^n)^2 ∑_i=t^n∑_j=i-t+1^i-1β^n-iβ^n-i+j· j let k=n-i = 4(1-β/1-β^n)^2 ∑_k=0^n-tβ^2k∑_j=n-k-t+1^n-k-1 jβ^j. The second line uses change of variable that j=i-i', and the third line uses k=n-i. Next, ∑_j=n-k-t+1^n-k-1 jβ^j = β∑_j=n-k-t+1^n-k-1d/dββ^j = β·d/dβ( ∑_j=n-k-t+1^n-k-1β^j ) = β·d/dβ( β^n-k-t+1 - β^n-k/1-β) = β^a-k+1-β^b-k+1/(1-β)^2 + (a-k)β^a-k - (b-k)β^b-k/1-β, where a=n-t+1, b=n. Upon substituting this back into (<ref>), we have λ_n,t = 4(1-β/1-β^n)^2 ∑_k=0^n-tβ^2k( β^a-k+1 - β^b-k+1/(1-β)^2 + aβ^a-k-bβ^b-k/1-β - k β^a-k-β^b-k/1-β) = 4(1-β/1-β^n)^2 ∑_k=0^n-t( β^a+1 - β^b+1/(1-β)^2 + aβ^a-bβ^b/1-β) β^k - β^a-β^b/1-β· kβ^k. For the first term, ∑_k=0^n-tβ^k = 1-β^n-t+1/1-β = 1-β^a/1-β. For the second term, ∑_k=0^n-t kβ^k = β·d/dβ( ∑_k=0^n-tβ^k ) = β·d/dβ( 1-β^a/1-β) = β-β^a+1/(1-β)^2 - aβ^a/1-β. Upon substituting this back into (<ref>) and simplifying the expression, we have λ_n,t = 4(1-β/1-β^n)^2 ·[ ( β^a+1 - β^b+1/(1-β)^2 + aβ^a-bβ^b/1-β) ·1-β^a/1-β - β^a-β^b/1-β·(β-β^a+1/(1-β)^2 - aβ^a/1-β) ] = 4(aβ^a-bβ^b)(1-β^a) + aβ^a(β^a-β^b)/(1-β^n)^2 = … = 4aβ^a(1-β^b) - bβ^b(1-β^a)/(1-β^n)^2. Upon substituting a=n-t+1 and b=n, we conclude the first half of the proof with λ_n,t≤ 4aβ^a(1-β^b)/(1-β^n)^2≤ 4·(n-t+1)β^n-t+1/1-β^n. In the second part, we use this inequality to bound ∑_n=t^Nλ_n,t. Define K=⌈1/1-β⌉, then ∑_n=t^N λ_n,t = _{t≤ K-1}·∑_n=t^K-1λ_n,t + ∑_n=max{t,K}^N λ_n,t. For the first summation in (<ref>), for all t≤ n≤ K-1, we have λ_n,t ≤ 4·(n-t+1)β^n-t+1/1-β^n(i)≤ 4·(n-t+1)β^n-t+1/1-β^n-t+1(ii)≤ 4·1·β^1/1-β^1≤4/1-β. (i) holds because 1/1-β^n is decreasing w.r.t. n. (ii) holds because f(x)=xβ^x/1-β^x is decreasing for x≥ 0 and β∈(0,1), so f(n-t+1) ≤ f(1) since n-t+1 ≥ 1. Recall that K-1≤1/1-β, then the first summation in (<ref>) can be bounded by _{t≤ K-1}·∑_n=t^K-1λ_n,t ≤∑_n=1^K-14/1-β≤4/(1-β)^2. For the second summation in (<ref>), for all n≥ K ≥1/1-β, 1/1-β^n(i)≤1/1-β^1/1-β(ii)≤lim_x→ 11/1-x^1/1-x = e/e-1≤ 2. 
(i) holds because 1/1-β^n is decreasing. (ii) holds because f(x)=1/1-x^1/1-x is increasing for x≥ 0, so f(β) ≤lim_x→ 1f(x) for all β∈(0,1). Consequently, the second summation in (<ref>) can be bounded by ∑_n=max{t,K}^N λ_n,t≤∑_n=max{t,K}^N 4·(n-t+1)β^n-t+1/1-β^n≤ 8∑_n=t^N (n-t+1)β^n-t+1 = 8∑_n=1^N-t nβ^n By change of summation, ∑_n=1^N nβ^n = ∑_n=1^N ∑_i=1^n β^n = ∑_i=1^N ∑_n=i^N β^n ≤∑_i=1^N β^i/1-β≤1/(1-β)^2. We then conclude the proof by substituting (<ref>), (<ref>) into (<ref>). * By Proposition <ref> and Proposition <ref>, we have _s∑_n=1^N__n_n-_n^2 (i)≤∑_n=1^N∑_t=1^n λ_n,tΔ_t^2 (ii)=∑_t=1^N(∑_n=t^N λ_n,t) Δ_t^2 (iii)≤∑_t=1^N 12/(1-β)^2Δ_t^2. Here (i) is from Proposition <ref>, (ii) is from change of summation, and (iii) is from Proposition <ref>. §.§ Proof of Lemma <ref> * Denote p(s)=λexp(-λ s) as the pdf of s. Upon expanding the expectation, we can rewrite the LHS as _s [F(+sΔ) - F()] = ∫_0^∞ [F(+sΔ) - F()] p(s) ds (i)=∫_0^∞( ∫_0^s ⟨∇ F(+tΔ), Δ⟩ dt ) p(s) ds = ∫_0^∞∫_0^∞⟨∇ F(+tΔ), Δ⟩{t≤ s}p(s) dtds = ∫_0^∞(∫_t^∞ p(s) ds) ⟨∇ F(+tΔ),Δ⟩ dt (ii)=∫_0^∞p(t)/λ⟨∇ F(+tΔ),Δ⟩ dt = 1/λ_s [⟨∇ F(+sΔ), Δ⟩]. Here the (i) applies fundamental theorem of calculus on g(s)=F(+sΔ)-F() with g'(s)=⟨∇ F(+sΔ),Δ⟩ and (ii) uses the following identity for exponential distribution: ∫_t^∞ p(s) ds = exp(-λ t) = p(t)/λ. § PROOF OF THEOREM <REF> We restate the formal version of Theorem <ref> as follows. Recall thatS_n = {_t}_t∈[n],_n∼P_nwhereP_n(_t) = β^n-t·1-β/1-β^n, and_n = ∑_t=1^n β^n-t_t·1-β/1-β^n. Suppose F is G-Lipschitz, F(_0)-inf F() ≤ F^*, and the stochastic gradients satisfy [∇ f(,z) | ] = ∇ F() and ∇ F()-∇ f(,z)^2≤σ^2 for all ,z. Define the comparator _̆n and the regret _n()̆ of the regularized losses ℓ_t as follows: _̆n = -D ·∑_t=1^n β^n-t∇ F(_t)/∑_t=1^n β^n-t∇ F(_t), _n()̆ = ∑_t=1^n ⟨β^-t_t, Δ_t - ⟩̆+ _t(Δ_t) - _t()̆. Also define the regularizor as _t() = μ_t/2^2 where μ_t = μβ^-t, μ=24cD/α^2 and α=1-β. Then ∇ F()_c ≤F^*/DN + 2G+σ/α N + σ√(α) + 12c D^2/α^2 + 1/DN( β^N+1_N(_̆N) + α∑_n=1^N β^n _n(_̆n). ). We start with the change of summation. Note that ∑_n=1^N ∑_t=1^n β^n-t(1-β) (F(_t) - F(_t-1)) = ∑_t=1^N (∑_n=t^N β^n-t) (1-β) (F(_t) - F(_t-1)) = ∑_t=1^N (1-β^N-t+1) (F(_t) - F(_t-1)) = F(_N) - F(_0) - ∑_t=1^N β^N-t+1(F(_t)-F(_t-1)). Upon rearranging and applying the assumption that F(_0)-F(_N) ≤ F(_0)-inf F() ≤ F^*, we have -F^* ≤∑_n=1^N∑_t=1^n β^n-t(1-β)(F(_t)-F(_t-1)) + ∑_t=1^N β^N-t+1 (F(_t) - F(_t-1)). First, we bound the first summation in (<ref>). Denote _t as the σ-algebra of _t. Note that Δ_t∈_t and z_t∉_t, so by the assumption that [∇ f(,z) | ] = ∇ F(), [_t | _t] = [∇ f(_t, z_t) | _t] = ∇ F(_t) ⟨∇ F(_t),Δ_t⟩ = ⟨_t,Δ_t⟩. By Lemma <ref>, [F(_t)-F(_t-1)] = ⟨∇ F(_t), Δ_t⟩. Upon adding and subtracting, we have [F(_t) - F(_t-1)] = ⟨∇ F(_t) - _t + _t, Δ_t - _̆n + _̆n ⟩ = [ ⟨∇ F(_t), _̆n⟩⟩ + ⟨∇ F(_t)-_t, -_̆n⟩ + ⟨_t,Δ_t-_̆n⟩]. Consequently, the first summation in (<ref>) can be written as ∑_n=1^N∑_t=1^n β^n-t(1-β) ( ⟨∇ F(_t),_̆n⟩ + ⟨∇ F(_t)-_t, -_̆n⟩ + ⟨_t,Δ_t-_̆n⟩). For the first term, upon substituting the definition of _̆n, we have ∑_t=1^n β^n-t(1-β) ⟨∇ F(_t), _̆n⟩ = (1-β) ⟨∑_t=1^n β^n-t∇ F(_t), -D ∑_t=1^n β^n-t∇ F(_t)/∑_t=1^n β^n-t∇ F(_t)⟩ = (1-β^n)· -D∑_t=1^n β^n-t∇ F(_t)/∑_t=1^n β^n-t = -D (1-β^n) __n∇ F(_n) Since ∇ F(_t)≤ G for all t, __n∇ F(_n)≤ G as well. Therefore, we have ≤ -D__n∇ F(_n) + DGβ^n. Since β<1, ∑_n=1^N β^n ≤1/1-β. Therefore, upon summing over n, the first term in (<ref>) becomes ∑_n=1^N∑_t=1^n β^n-t(1-β) ⟨∇ F(_t), _̆n⟩≤(-D∑_n=1^N __n∇ F(_n)) + DG/1-β. 
For the second term, by Cauchy-Schwarz inequality, ∑_t=1^n β^n-t⟨∇ F(_t)-_t, -_̆n⟩ ≤√(∑_t=1^n β^n-t(∇ F(_t)-_t) ^2 _̆n^2). Since [∇ F(_t)-_t | _t] = 0, by martingale identity and the assumption that ∇ F()-∇ f(,z)^2 ≤σ^2, ∑_t=1^n β^n-t(∇ F(_t)-_t) ^2 = ∑_t=1^n β^n-t(∇ F(_t)-_t)^2 ≤∑_t=1^n σ^2 β^2(n-t)≤σ^2/1-β^2. Upon substituting _̆n=D and 1/1-β^2≤1/1-β, the second term in (<ref>) becomes ∑_n=1^N∑_t=1^n β^n-t(1-β) ⟨∇ F(_t) - _t, -_̆n⟩ ≤∑_n=1^N (1-β) ·σ D/√(1-β^2)≤σ DN√(1-β). For the third term, upon adding and subtracting _t and substituting the definition of _n()̆, we have ∑_n=1^N∑_t=1^n β^n-t(1-β) ⟨_t,Δ_t-_̆n⟩ = ∑_n=1^N∑_t=1^n (1-β)β^n ( ⟨β^-t_t,Δ_t-_̆n⟩ + _t(Δ_t) - _t(_̆n) - _t(Δ_t)+_t(_̆n) ) = ∑_n=1^N (1-β)β^n _n(_̆n) + ∑_n=1^N∑_t=1^n (1-β)β^n ( -_t(Δ_t) + _t(_̆n)). Upon substituting (<ref>), (<ref>) and (<ref>) into (<ref>), the first summation in (<ref>) becomes ∑_n=1^N∑_t=1^n β^n-t(1-β)(F(_t)-F(_t-1)) ≤(-D∑_n=1^N __n∇ F(_n)) + DG/1-β + σ DN√(1-β) + ∑_n=1^N (1-β)β^n _n(_̆n) + ∑_n=1^N∑_t=1^n (1-β)β^n ( -_t(Δ_t) + _t(_̆n)). Next, we consider the second summation in (<ref>). Since _t≤∇ F(_t) + ∇ F(_t)-_t≤ G+σ and ⟨∇ F(_t),Δ_t⟩ = ⟨_t,Δ_t⟩, we have [F(_t) - F(_t-1)] = ⟨∇ F(_t),Δ_t⟩ = ⟨_t, Δ_t - _̆N⟩ + ⟨_t, _̆N⟩ ≤⟨_t,Δ_t-_̆N⟩ + D(G+σ). Following the same argument in (<ref>) by adding and subtracting _t, the second summation becomes ∑_t=1^N β^N-t+1 (F(_t) - F(_t-1)) = ∑_t=1^N β^N+1⟨β^-t_t,Δ_t-_̆N⟩ + β^N-t+1 D(G+σ) ≤β^N+1_N(_̆N) + D(G+σ)/1-β + ∑_t=1^N β^N+1 (-_t(Δ_t) + _t(_̆N)). Combining (<ref>) and (<ref>) into (<ref>) gives -F^* ≤(-D∑_n=1^N __n∇ F(_n)) + DG/1-β + σ DN√(1-β) + ∑_n=1^N (1-β)β^n _n(_̆n) + ∑_n=1^N∑_t=1^n (1-β)β^n ( -_t(Δ_t) + _t(_̆N)) + β^N+1_N(_̆N) + D(G+σ)/1-β + ∑_t=1^N β^N+1 (-_t(Δ_t) + _t(_̆N)). As the final step, we simplify the terms involving _t. Recall that _t() = μ_t/2^2, so _t(_̆n)=μ_t/2D^2 is independent of n. Hence, by change of summation, ∑_n=1^N∑_t=1^n (1-β)β^n ( -_t(Δ_t) + _t(_̆n)) + ∑_t=1^N β^N+1 (-_t(Δ_t) + _t(_̆N)) = ∑_t=1^N (∑_n=t^N β^n) (1-β)_=β^t - β^N+1( -μ_t/2Δ_t^2 + μ_t/2D^2) + ∑_t=1^N β^N+1( -μ_t/2Δ_t^2 + μ_t/2D^2) = ∑_t=1^N β^t ( -μ_t/2Δ_t^2 + μ_t/2D^2) Recall Lemma <ref> that ∑_n=1^N __n_n-_n^2 ≤∑_t=1^N12/(1-β)^2Δ_t^2. Upon substituting μ_t=24cD^2/(1-β)^2β^-t, we have = ∑_t=1^N ( -12cD/(1-β)^2Δ_t^2 + 12c D^3/(1-β)^2) ≤(-cD ∑_n=1^N__n_n-_n^2) + 12cD^3N/(1-β)^2. Substituting this back into (<ref>) with α=1-β, we have -F^* ≤ - D [ ∑_n=1^N __n∇ F(_n) + c·__n_n-_n^2 ] + DG/α + σ DN√(α) + D(G+σ)/α + 12cD^3N/α^2 + β^N+1_N(_̆N) + α∑_n=1^N β^n _n(_̆n). By definition of ∇ F(·)_c defined in Definition <ref>, ∇ F(_n)_c ≤__n∇ F(_n) + c·__n_n-_n^2. Moreover, since is uniform over _n, ∇ F()_2,c = 1/N∑_n=1^N ∇ F(_n)_2,c We then conclude the proof by rearranging the equation and dividing both sides by DN. § PROOFS IN SECTION <REF> §.§ Proof of Theorem <ref> Only in this subsection, to be more consistent with the notations in online learning literature, we usefor weights instead ofΔas we used in the main text. To prove the regret bound, we first provide a one-step inequality of OMD with composite loss. Given a convex and continuously differentiable functionψ, recall the Bregman divergence ofψis defined as D_ψ(,) = ψ() - ψ() - ⟨∇ψ(), -⟩. Note that∇_D_ψ(,) = ∇ψ() - ∇ψ(). Moreover, as proved in <cit.>,D_ψsatisfies the following three-point identity: D_ψ(,) + D_ψ(,) - D_ψ(,) = ⟨∇ψ() - ∇ψ(), -⟩. Let ψ, ϕ be convex, and define _t+1 = _⟨_t,⟩ + D_ψ(,_t) + ϕ(). Then for any $̆, ⟨_t, _t-⟩̆ ≤⟨_t, _t-_t+1⟩ + D_ψ(,̆_t) - D_ψ(,̆_t+1) - D_ψ(_t+1,_t) + ϕ()̆ - ϕ(_t+1). 
Let f() = ⟨_t,⟩ + D_ψ(,_t) + ϕ(). Since ψ, ϕ are convex, so is f. Therefore, _t+1 = _ f() implies that for all $̆, 0 ≤⟨∇ f(_t+1), -̆_t+1⟩ = ⟨_t + ∇ψ(_t+1) - ∇ψ(_t) + ∇ϕ(_t+1), -̆_t+1⟩ = ⟨_t, -̆_t⟩ + ⟨_t,_t-_t+1⟩ + ⟨∇ψ(_t+1)-∇ψ(_t),-̆_t+1⟩ + ⟨∇ϕ(_t+1),-̆_t+1⟩. Sinceϕis convex,⟨ϕ(_t+1),-̆_t+1⟩≤ϕ()̆-ϕ(_t+1). Moreover, by the three-point identity with=,̆=_t+1, =_t, we have ⟨∇ψ(_t) - ∇ψ(_t+1),-̆_t+1⟩ = D_ψ(,̆_t+1) + D_ψ(_t+1,) - D_ψ(,̆_t). Substituting these back and rearranging the inequality then conclude the proof. We restate the formal version of Theorem <ref> as follows. Given a sequence of {_t}_t=1^∞, a sequence of {η_t}_t=1^∞ such that 0<η_t+1≤η_t, and a sequence of {μ_t}_t=1^∞ such that μ_t≥ 0, let _t()=μ_t/2^2, ϕ_t()=(1/η_t+1-1/η_t)^2, _1=0 and _t updated by _t+1 = _⟨_t,⟩ + 1/2η_t-_t^2 + ϕ_t() + _t+1(). Then for any n∈, ∑_t=1^n ⟨_t, _t-⟩̆+ _t(_t) - _t()̆≤(2/η_n+1 + μ_n+1/2)^2 + 1/2∑_t=1^n η_t_t^2. Denote ψ_t()=1/2η_t^2. Since ψ_t,ϕ_t,_t are all convex and D_ψ_t(,_t) = 1/2η_t-_t^2, Lemma <ref> holds, which gives ⟨_t, _t-⟩̆ ≤⟨_t,_t - _t+1⟩ + D_ψ_t(,̆_t) - D_ψ_t(,̆_t+1) - D_ψ_t(_t+1,_t) + ϕ_t()̆ - ϕ_t(_t+1) + _t+1()̆ - _t+1(_t+1). Equivalently, ⟨_t, _t-⟩̆+ _t(_t) - _t()̆ ≤⟨_t,_t - _t+1⟩ + D_ψ_t(,̆_t) - D_ψ_t(,̆_t+1) - D_ψ_t(_t+1,_t) + ϕ_t()̆ - ϕ_t(_t+1) + _t(_t) - _t+1(_t+1) + _t+1()̆ - _t()̆. By Young's inequality, ⟨_t,_t - _t+1⟩ - D_ψ_t(_t+1,_t) ≤η_t/2_t^2 + 1/2η_t_t+1-_t^2 - 1/2η_t_t+1-_t^2 = η_t/2_t^2. Next, note that D_ψ_t(,̆_t) - D_ψ_t(,̆_t+1) = D_ψ_t(,̆_t) - D_ψ_t+1(,̆_t+1) + D_ψ_t+1(,̆_t+1) - D_ψ_t(,̆_t+1). Since -̆_t+1^2 ≤ 2^2 + 2_t+1^2 and 1/η_t+1-1/η_t≥ 0, D_ψ_t+1(,̆_t+1) - D_ψ_t(,̆_t+1) + ϕ_t()̆-ϕ_t(_t+1) = (1/2η_t+1-1/2η_t)-̆_t+1^2 + (1/η_t+1-1/η_t)(^2-_t+1^2) ≤(2/η_t+1 - 2/η_t)^2. Upon substituting back into (<ref>), we have ⟨_t, _t-⟩̆+ _t(_t) - _t()̆ ≤η_t/2_t^2 + D_ψ_t(,̆_t) - D_ψ_t+1(,̆_t+1) + (2/η_t+1 - 2/η_t)^2 + _t(_t) - _t+1(_t+1) + _t+1()̆ - _t()̆. Upon telescoping this one-step inequality, we have ∑_t=1^n ⟨_t, _t-⟩̆+ _t(_t) - _t()̆ ≤( ∑_t=1^n η_t/2_t^2 ) + D_ψ_1(,̆_1) - D_ψ_n+1(,̆_n+1) + (2/η_n+1 - 2/η_1)^2 + _1(_1) - _n+1(_n+1) + _n+1()̆ - _1()̆. We then conclude the proof by using _1=0, D_ψ_t(,̆)=1/2η_t-̆^2 and _n()=μ_t/2^2 to simplify D_ψ_1(,̆_1) - D_ψ_n+1(,̆_n+1) + (2/η_n+1 - 2/η_1)^2 ≤1/2η_1^2 + (2/η_n+1 - 2/η_1)^2 ≤2/η_n+1^2 and _1(_1) - _n+1(_n+1) + _n+1()̆ - _1()̆≤_n+1()̆ + _1(_1) = μ_n+1/2^2. §.§ Proof of Theorem <ref> * First, define D=F^*/(G+σ)√(α)N, μ=24cD/α^2 and η=2D√(α)/G+σ. Note that these definitions are equivalent to μ=24F^*c/(G+D)α^5/2N and η=2F^*/(G+σ)^2N as defined in the theorem. Next, note that both Theorem <ref> and Theorem <ref> hold since the explicit update of Δ_t+1 is equivalent to Δ_t+1 = _Δ⟨β^-t_t, Δ⟩ + 1/2η_tΔ-Δ_t^2 + (1/η_t+1-1/η_t)Δ^2 + μ_t+1/2Δ^2. Also recall that _n(_̆n) = ∑_t=1^n ⟨β^-t_t,Δ_t-_̆n⟩ + _t(Δ_t) - _t(_̆n). Therefore, upon substituting _t=β^-t_t, η_t=β^tη, μ_t=β^-tμ and _̆n=D into Theorem <ref>, we have _n(_̆n) ≤(2/η_n+1+μ_n+1/2)^2 + 1/2∑_t=1^n η_t _t^2 = (2/η+μ/2)D^2β^-(n+1) + η/2∑_t=1^n β^-t_t^2. By Assumption <ref>, _t^2 = ∇ F(_t)^2 + ∇ F(_t)-_t^2 ≤ G^2+σ^2. Moreover, ∑_t=1^nβ^-t≤β^-n/1-β. Therefore, β^n+1_n(_̆n) ≤(2/η+μ/2)D^2 + η(G^2+σ^2)/2αUpon substituting η=2D√(α)/G+σ (note that G^2+σ^2/G+σ≤ G+σ) and μ=24cD/α^2, we have ≤2D(G+σ)/√(α) + 12cD^3/α^2. Consequently, with α≤1/2 (so that β^-1≤ 2), we have 1/DN(β^N+1_N(_̆N) + α∑_n=1^N β^n _n(_̆n)) ≤1+2α N/DN( 2D(G+σ)/√(α) + 12cD^3/α^2) ≲G+σ/N + cD^2/α^2 N + (G+σ)√(α) + cD^2/α. 
Upon substituting this into the convergence guarantee in Theorem <ref>, we have ∇ F()_c ≤F^*/DN + 2G+σ/α N + σ√(α) + 12c D^2/α^2 + 1/DN( β^N+1_N(_̆N) + α∑_n=1^N β^n _n(_̆n) ) ≲F^*/DN + G+σ/α N + (G+σ)√(α) + cD^2/α^2 With D=F^*/(G+σ)√(α)N and α=max{N^-2/3, (F^*)^4/7c^2/7/(G+σ)^6/7N^4/7}, we have ≲G+σ/α N + (G+σ)√(α) + (F^*)^2c/(G+σ)^2α^3N^2≲G+σ/N^1/3 + (F^*)^2/7(G+σ)^4/7c^1/7/N^2/7.
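Because the combinatorial bound ∑_{n=t}^N λ_{n,t} ≤ 12/(1-β)^2 used in the analysis above is a purely numerical statement, it is easy to spot-check. The following sketch is ours and serves only as a sanity check: it evaluates λ_{n,t} directly from its definition and compares the sum against the bound for a few values of β and t.

```python
import numpy as np

def lam(n, t, beta):
    """lambda_{n,t} = 4 * sum_{i=t..n} sum_{i'=1..t-1} p_{n,i} p_{n,i'} (i - i'),
    with p_{n,i} = beta^(n-i) * (1 - beta) / (1 - beta^n)."""
    idx = np.arange(1, n + 1)
    p = beta ** (n - idx) * (1 - beta) / (1 - beta ** n)   # p[i-1] = p_{n,i}
    i, ip = np.meshgrid(np.arange(t, n + 1), np.arange(1, t), indexing="ij")
    return 4 * np.sum(p[i - 1] * p[ip - 1] * (i - ip))

N = 150
for beta in (0.5, 0.9):
    bound = 12 / (1 - beta) ** 2
    for t in (2, 10, 50):
        total = sum(lam(n, t, beta) for n in range(t, N + 1))
        print(f"beta={beta}, t={t}:  sum_n lambda_(n,t) = {total:9.3f}   bound = {bound:.1f}")
```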
http://arxiv.org/abs/2405.08944v1
20240514201722
Challenges in Deploying Long-Context Transformers: A Theoretical Peak Performance Analysis
[ "Yao Fu" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CL", "cs.DC" ]
Hydrodynamical simulations of proton ingestion flashes in Type I X-ray Bursts Andrew Cumming May 20, 2024 ============================================================================= Transformer-based long context generative models power emerging AI applications like hour-long video understanding and project-level coding agent. Deploying long context transformers (e.g., 100K to 10M tokens) is prohibitively expensive compared to short context (e.g., 4K tokens) model variants. Reducing the cost of long-context transformers is becoming a pressing research and engineering challenge starting from the year of 2024. This work describes a concurrent programming framework for quantitatively analyzing the efficiency challenges in serving multiple long-context requests under limited size of GPU high-bandwidth memory (HBM) regime. We give a detailed analysis of how all additional computational costs, compared to 4K context, trace back to one single source: the large size of the KV cache. We use a 34B GPT-3.5 level model of 50K context on A100 NVLink as a running example, and describe how its large KV cache causes four types of deployment challenges: (1) prefilling long inputs takes much longer compute time and GPU memory than short inputs; (2) after prefilling, the large KV cache residing on the GPU HBM substantially restricts the number of concurrent users being served; (3) during decoding, repeatedly reading the KV cache from HBM to SM largely increases latency; (4) when KV cache memory overflows, swapping it from HBM to DDR causes significant context switching latency. We use this framework to analyze existing works and identify possibilities of combining them to build end-to-end systems. Overall, this work offers a foundational framework for analyzing long context transformer deployment and identifies directions towards reducing the inference cost of 1M context to be as cheap as 4K. § INTRODUCTION Suppose one has successfully turned an open-source GPT-3.5 to GPT-4 level language models (e.g., LLaMA 3 <cit.>, DeepSeek <cit.>, QWen <cit.> and Mixtral <cit.>) into a long-context variant (e.g., through continual pretraining <cit.> and instruction tuning <cit.>) then wants to deploy this model for a wide spectrum of applications like repository-level coding and hour-long video understanding. These applications typically require the input-context to be of 100K to 10M tokens where the paradigm changes substantially from the 4K short-context regime. We are interested in an ambitious goal: How to reduce the deployment of 1M context production-level transformers to be as cheap as 4K? Why serving long-context transformers is expensive? Most of the cost overhead trace back to one single source: the size of the KV cache. Consider a 30+B 100K context GPT-3.5 quality open-source models like QWen or Yi, the differences between KV cache for 4K v.s. 100K context is: 100K context: 100000×60×8×128×2×2 bytes = 22.8 4K context: 4000×60×8×128×2×2 bytes = 0.91 Here we use the Yi-34B 200K <cit.> configuration (60 layers, 8 kv heads and 128 hidden dimension). Suppose we use 2 × 80G A100 tensor parallelism to serve this model in bf16, then we have 2×80 - 34×2 = 122 GB spare space for storing the KV cache. From this first glance, we immediately realize that under this setting, we can achieve about 100+ users concurrency of 4K context, but only 5 users of 100K context. 
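The arithmetic behind these numbers is simple enough to script. The sketch below is ours; it reuses the byte-count formula above (keys and values, bf16, 2 bytes per element), and the "GB" bookkeeping follows the loose convention of the text, so the printed values differ from the quoted 22.8 GB / 0.91 GB only by rounding.

```python
def kv_cache_bytes(ctx_len, n_layers=60, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    # K and V caches: ctx_len x n_layers x n_kv_heads x head_dim elements each
    return ctx_len * n_layers * n_kv_heads * head_dim * 2 * bytes_per_elem

GB = 1 << 30
spare_gb = 2 * 80 - 34 * 2          # ~122 GB left on 2 x A100-80G after bf16 weights
for ctx in (4_000, 100_000):
    kv_gb = kv_cache_bytes(ctx) / GB
    print(f"{ctx:>7,} tokens: {kv_gb:5.2f} GB of KV cache per user, "
          f"~{int(spare_gb // kv_gb)} concurrent users")
```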
If we were able to deploy 1M context models as cheap as 4K, we can substantially democratize long-context and multimodal generative models and foster an emerging application ecosystem. This work gives a full-stack quantitative analysis of challenges in deploying long-context transformers, from hardware to system, from model architecture to user preference. We first describe a concurrent programming framework (Fig <ref>) for serving multiple long-context user requests under limited GPU HBM size. We define session-based throughput, the rounds of user interactions in a given period, as the end-to-end evaluation, and decompose it into four key metrics: the level of concurrency, the prefilling latency, the decoding latency, and the context switching overhead. We discuss how these four metrics are bounded by the size of HBM, the GPU flops, the HBM bandwidth, and the PCIE bandwidth respectively, and how these challenges eventually trace back to the size of the KV cache, leading to the core research problem about lossless compression of the KV cache. We further discuss how hardware architectures change the performance factors, how existing works compress the KV cache from the layer, head, token, and hidden state dimension, and how the relative cost between prefilling and decoding changes. We hope this work serves as the fundamental framework towards reducing the inference cost of 1M context to be as cheap as 4K. § CHALLENGES IN DEPLOYING LONG-CONTEXT TRANSFORMERS Given a production-level long-context transformer, our objective is to reduce the deployment cost of 1 million token context to be as cheap as 4K. We consider 30+B models of 50K context and group query attention (GQA) on 80G A100 NVLink as a running example because they are typically of GPT-3.5 capability, specifically we use Yi-34B 200K configuration because, by the time of writing this paper, it is the only open-source 30+B model supporting long-context. We first describe a concurrent programming framework about the workflow for serving multiple long-context user queries (Fig. <ref>) and identify four key performance metrics: concurrency, prefilling, decoding, and context-switching. We focus on theoretical peak performance analysis. This is to say, we assume we already have an efficient implementation that can squeeze out all hardware performance (which typically requires sophisticated engineering efforts like cuda kernel programming, memory management, model parallelism .etc), and study under an efficient enough implementation, what are the key challenges and limits we should tackle. Although we do not have an actual implementation (whose engineering effort is quite nontrivial, e.g., see vLLM <cit.> and InfiniteLLM <cit.>, thus significantly beyond the scope of our current resource), theoretical peak performance (which is widely used in analyzing LLM serving systems) is an important tool to identify system bottlenecks and provides guidance to further advance efficiency (e.g., most existing serving systems focus on decoding yet we may want to focus more on prefilling when the context is longer than 200K, as we discuss later in Fig. <ref>). §.§ A Concurrent Programming Framework under Limited GPU HBM Size Concurrent User Interaction Sessions and Preferences As is shown in Table <ref> and Fig. <ref>, in a typical interaction session, a user starts from a prompt of a long document followed by a question, and feeds it to the model. The model receives the initial prompt, prefill it to become the KV cache. 
The user waits for the prefilling stage until the first token starts to generate and prefers the waiting time not to be so long. After prefilling, the model starts autoregressive decoding. The user reads the output simultaneously with the decoding process and prefers the decoding to be faster than the human reading speed. After the model finishes decoding, the user continues to read the response, think, maybe take a sip of coffee, and then start to type the next question. The follow-up prompts are usually not as long as the first prompt because the first prompt typically contains the long context (book or video) while the follow-up prompts are typically question-only. When the first user is reading the model response and thinking about what to ask next, the model is essentially idle, so at the same time if another user asks another question, the model could do context switching by offloading the first user's KV cache to the CPU DDR memory to make HBM space to store the second user's KV cache. The two users ask follow-up questions interactively until their session ends. Session-Based Throughput as an End-to-End Objective We consider the situation where multiple users simultaneously interact with the model. Assume on average, a user session consists of the document/ video of 50K tokens and 5 rounds of questions. After receiving the answer to the previous question, the user spends about 1 minute reading the answer and thinking about the next question. Our objective is to maximize a session-based throughput defined as: = / Note that our session-based throughput objective is different from existing token-based throughput <cit.> (i.e., number of prefilled or decoded tokens in a given period). As we will see soon, token-based throughput is only part of the problem. Our session-based throughput, i.e., the number of concurrent user interactions in a given period, is an end-to-end objective, because we not only consider prefilling and decoding, but also consider memory restrictions and context switching which significantly influence concurrency. Compute v.s. Memory Boundedness, Arithmetic Intensity and Critical Batch Size One important observation of transformer inference is that prefilling is usually bounded by the GPU compute power, i.e., the flops, while decoding is bounded by the HBM memory bandwidth. We say an operator is compute bound if most of the time of finishing this operator is computing it on GPU's streaming multiprocessor (SMs, where GPU performs block-wise parallel computation). We say an operator is memory bound if most of the time for finishing this operator is moving the data from the memory to the SMs (instead of actually computing it on the SMs). Whether an operator is compute or memory bound depends on its arithmetic intensity, defined as how many floating point operation (FLOP) is performed per memory access operation (IO): = / The higher level of parallelism, the higher flop per memory access, the more likely an operator is compute bound, the better we utilize the hardware. On a given GPU, the critical arithmetic intensity, i.e., the level of parallelism, for an operator to change from memory to compute bound is the ratio of its flop / memory bandwidth. For A100 it is: = 312T flop per sec / 2T byte per sec = 156 For transformers, the level of parallelism is approximately how many tokens we feed into it, i.e., the batch size. 
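In code, the critical batch size and the resulting regime classification look as follows (a small sketch using the A100 numbers quoted above; the constant and function names are ours).

```python
A100_PEAK_FLOPS = 312e12        # bf16 tensor-core peak, FLOP per second
A100_HBM_BW     = 2e12          # HBM bandwidth, bytes per second

critical_batch = A100_PEAK_FLOPS / A100_HBM_BW      # ~156 tokens

def regime(tokens_per_forward):
    # arithmetic intensity grows with the number of tokens in the forward pass
    return "compute bound" if tokens_per_forward > critical_batch else "memory bound"

print(f"critical batch size ~ {critical_batch:.0f} tokens")
print("prefilling a 50K-token prompt :", regime(50_000))
print("decoding one token per step   :", regime(1))
```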
This is to say, for A100, when our batch size is larger than 156 tokens, e.g., during prefilling, the user's prompt is 50K tokens, we are compute bound and fully utilizing A100's compute power. When our batch size is smaller than 156 tokens, e.g., during decoding we only decode one token at a forward pass, we are memory bound and not fully utilizing A100's compute power. Prefilling Now we analyze how long prefilling on A100 takes exactly. Since prefilling is compute bound, i.e., context length longer than 156 on A100, its theoretical peak latency is = / If compute bound. For a prompt of 50K context length it is (see <cit.> for a detailed derivation) 4.33P Flop = 50 K× (2 ×34B + 2 ×60×50K×4096) 14.1 seconds = 4.33P Flop / 312T Flop per sec Since 14.1 seconds is the theoretical peak, in Fig. <ref> we round it to 20 seconds to account for the implementation overhead. This means the actual implementation may achieve 14.1 / 20 ≈ 70% of the theoretical peak performance, which is a common experience for cuda programming on A100. If the context length is 4K instead of 50K, then repeating the above computation we get the latency 0.89 seconds. The difference here is = 0.89 seconds = 14.1 seconds The 13-second gap, rooted from the additional flop from the long context, is what we eventually want to close. Decoding Now we analyze how long decoding takes exactly. Since decoding is memory bound, i.e., batch size less than 156 on A100, the theoretical peak latency is = / If memory bound. For decoding, one forward pass means = + We assume on average the model generates one screen tokens (typically the user prefers the generation length right about one screen), i.e., about 250 tokens, then the peak latency is 250×(68GB + 11GB) / 2T Bytes per sec = 9.8 seconds Since 9.8 seconds is theoretical peak, in Fig. <ref> we round it to 12 seconds to account for the implementation overhead. If the sequence length is 4K, then its corresponding KV cache is only 0.91GB and the decoding latency reduces to 8.5 seconds. Yet if the sequence length increases to 200K, the KV cache becomes 44GB, the latency increases to 14 seconds. The relative latency increase is correlated with the relative size between the KV cache and the model size, and we eventually want to close it. Concurrency Control and Context Switching Another important consideration is that when the KV cache becomes large, the number of concurrent users' cache that the GPU HBM can hold is = - / This means that concurrency is bounded by the size of the HBM. Continuing with our 34B 50K model example, if we deploy it on one 80GB A100 we can only serve one user (Fig. <ref>). But if the context is 4K, the KV cache is only about 1GB, and we can concurrently serve about 20 users. When the second user comes to ask a question about a long document, to make room for their KV cache, we need to do context switching: offloading the first user's KV cache to the CPU memory, and loading the second user's KV cache (Fig. <ref>). This induces the context switching overhead: = / This is to say, the context switching overhead is bounded by the PCIE bandwidth, i.e., how fast the GPU HBM is connected to the CPU DDR. Suppose we use PCIE gen 4 of 20G bytes per second, then the context switching overhead of 2 users of 50K context is: 1.1 seconds = (11G bytes + 11G bytes) / 20G bytes per sec In Fig. <ref> we round the 1.1 seconds to 2 seconds to account for the engineering overhead. 
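Putting the four formulas together, the running example can be reproduced in a few lines. The sketch below is ours; the default arguments encode the Yi-34B / single A100-80G setting used above, and the small differences from the quoted figures come only from rounding and GB-versus-GiB conventions.

```python
def peak_metrics(ctx_len, out_tokens=250, n_params=34e9, n_layers=60,
                 n_kv_heads=8, head_dim=128, hidden=4096,
                 gpu_flops=312e12, hbm_bw=2e12, hbm_bytes=80 * 2**30, pcie_bw=20e9):
    """Theoretical peak estimates of the four metrics, following the formulas above."""
    model_bytes  = 2 * n_params                                       # bf16 weights
    kv_bytes     = ctx_len * n_layers * n_kv_heads * head_dim * 2 * 2
    prefill_flop = ctx_len * (2 * n_params + 2 * n_layers * ctx_len * hidden)
    return {
        "prefill_s":    prefill_flop / gpu_flops,                     # compute bound
        "decode_s":     out_tokens * (model_bytes + kv_bytes) / hbm_bw,  # memory bound
        "concurrency":  int((hbm_bytes - model_bytes) // kv_bytes),
        "ctx_switch_s": 2 * kv_bytes / pcie_bw,                       # offload + reload one user
    }

for ctx in (4_000, 50_000):
    print(ctx, {k: round(v, 2) for k, v in peak_metrics(ctx).items()})
```

Changing a single default (e.g., `n_kv_heads`, `hbm_bytes`, or `pcie_bw`) shows how each hardware or architecture factor moves exactly one or two of the four metrics, which is the comparison carried out in the next subsection.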
As mentioned earlier, in our setting we can serve 20 users of 4K context length without context switching because the HBM is enough to hold their KV cache. If we like to increase our 50K concurrency to 20, then the overall context switching overhead also increases with concurrency: 22 seconds = 20 ×1.1 seconds Note that this 22 seconds overhead does not exist in the 4K context regime. Summary so far we have discussed most of the details when deploying long-context transformers using the 34B model 50K context as the running example. We see that the overall performance, measured by user interaction throughput, decomposes to four metrics: (1) prefilling latency bounded by the GPU flops; (2) decoding latency bounded by the HBM bandwidth; (3) level of concurrency bounded by the size of the HBM; (4) context switching overhead bounded by the GPU-CPU connection bandwidth, i.e., the PCIE. In the next sections, we will discuss how these metrics change with common factors, and identify the bottleneck eventually trace back to the size of the KV cache. §.§ Factors that Strongly Influence the Performance Metrics We start from two basic factors: context length and hardware architecture. When increased from 4K to 50K, we show the four metrics (prefilling, decoding, concurrency and context switching) changes with different rate (linear, inverse, and quadratic). We further show that tensor parallelism improves concurrency, prefilling and decoding, but does not improve context switching. Sparse upcycling to a larger mixture of experts reduces concurrency, prefilling and decoding, but does not change context switching. The type of attention, i.e., multi-head, multi-query or group query attention, influence performance significantly because they directly influence the size of the KV cache. Context Length In the first row of Fig. <ref>, we compute the theoretical peak performance of the four metrics w.r.t the context length for our Yi 34B running example using the equations in Sec. <ref>. We observe: (1) concurrency inversely decreases with longer context length; (2) prefilling latency quadratically increases with longer context length. In comparison, decoding latency and context switching overhead only linearly increases with longer context length, and the decoding latency is the least influenced factor because 50K context KV cache is still relatively smaller than model parameters (11GB v.s. 68GB). Concurrency and prefilling are the two most severely influenced metrics. Hardware Architecture Can we improve the performance by simply using more advanced hardware? In Fig. <ref> second row, we show the performance improvement tendency with hardware advancements. We observe: (1) concurrency linearly increases with the size of the HBM; (2) prefilling latency inversely reduces with the increased flops when upgrading the device from 4090 to A100 to H100; (3) decoding latency inversely reduces with the increased memory bandwidth; (4) context switching overhead inversely reduces with the increased PCIE bandwidth. Note that numbers are based on the newest advances by May 2024, and even if we use the newest hardware, the cost gap between 50K and 4K are not closing. This is to say, we cannot count on hardware advances for reducing the cost of serving long-context models, and we have to make algorithmic innovations. Tensor Parallelism utilizes multiple devices for accelerating inference with negligible communication overhead. 
Linearly increasing the number of devices to 2, 4, and 8 introduces more HBM space, thus linearly increasing concurrency. Since we equally divide the computation on multiple devices, the prefilling and decoding latency also reduces inversely with the number of GPUs. However, tensor parallelism cannot reduce the context switching overhead because the PCIE bandwidth between the DDR to the HBM is shared by all devices. Upcycling to Mixture of Experts suppose we upcycle our 34B model to be 8×34B mixture of experts model, how would the metrics change? The first observation is the model size: since MoE models are much larger than dense models, they will take up more HBM spaces thus reducing concurrency. The prefilling and decoding latency changes with the number of activated experts, which is usually 2, thus these two latency will increase approximately by 2 times. Since MoE does not change attention, the size of the KV cache does not change, meaning that the context switching overhead does not change. In summary, upcycling to MoE changes concurrency, prefilling and decoding latency, but does not change context switching. Types of Attention The last but probably so far the most important factor is the type of attention. Specifically, for the 34B Yi model we consider, it uses group-query attention (GQA) with 8 kv heads but 32 query heads. If we increase its kv heads to 32 (i.e., the original multi-head attention, MHA), its corresponding 50K tokens KV cache are: 8 kv heads (GQA) 50000×60×8×128×2×2 bytes = 11.4 32 kv heads (MHA) 50000×60×4×128×2×2 bytes = 45.6 In practice people do not often use multi-query attention (MQA) due to its suboptimal performance. But compared to the original MHA, GQA directly gives 4x KV cache reduction, translating to 4x improvements of concurrency and 4x context-switching. As for decoding latency, we have: GQA Decoding: 250×(68GB + 11GB) / 2T Bytes per sec = 9.8 seconds MHA Decoding: 250×(68GB + 46GB) / 2T Bytes per sec = 14.3 seconds which is about 1.5x decoding latency improvements. Combining the improvements in concurrency, decoding and context switching, it is clear that GQA significantly improves long-context efficiency. §.§ Most Challenges Trace Back to the Size of the KV Cache We have the following important observations when comparing 50K to 4K in Sec. <ref>: (1) to prefill the long input and produce the KV cache, the prefilling latency increases from 0.89 to 14.1 seconds (Eq. <ref>); (2) because the large KV cache residing on the GPU memory, the concurrency reduces from about 20 to 1; (3) during decoding, repeated loading the KV cache increases the latency from 8.5 to 9.8 seconds (Eq. <ref>); (4) the large KV cache induces expensive context-switching overhead, for 20 users it takes about additional 22 seconds (Eq. <ref>). These four factors collectively induces significant cost in terms of the end-to-end session-based throughput. Our eventual goal is to make the deployment of 1M context as cheap as 4K, and 4K tokens are about 1GB KV cache. Then our observations point to one key research problem: How to efficiently compress the 1M token KV cache to 1G bytes in a lossless way? § COMPRESSIBILITY ANALYSIS AND DIRECTIONS OF OPTIMIZATION In this section we discuss the potential dimensions to compress the KV cache. We first note that without any compression, storing 1M token into bytes only takes about 3 - 10MB disk storage (depending on the size of the tokenizer's vocabulary), so 1GB is more than enough to store the full information of the input tokens. 
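To put the target in perspective, a short calculation (ours, reusing the same Yi-34B-style layout as before) compares the raw token payload, the uncompressed KV cache at 1M context, and the 1 GB budget.

```python
ctx = 1_000_000
raw_bytes = ctx * 3                              # ~3 bytes per token id, cf. the 3-10 MB above
kv_bytes  = ctx * 60 * 8 * 128 * 2 * 2           # 1M-token KV cache, bf16

print(f"raw tokens         : {raw_bytes / 1e6:.0f} MB")
print(f"uncompressed KV    : {kv_bytes / 1e9:.0f} GB")
print(f"compression needed : ~{kv_bytes / 1e9:.0f}x to fit a 1 GB budget")
```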
The problem is how to make their compressed representations usable by large transformers. We start from analysizing the compressibility from layer, head, token, and hidden dimensions, then we discuss the relative cost between prefilling and decoding. §.§ Lossless Compressibility by Layer, Head, Token and Hidden We first discuss the notion of “lossless”. Practitioners usually test the model on a variety of long-context tasks to check if the compression is lossy, amoung which the Needle-in-a-Haystack test, which asks the model to precisely retrieve the given information put at arbitrary location of a long context, serves as an entry barrier: if a model cannot pass this test, we do not trust it can do harder tasks. Unfortunately, it seems that two important model families, state-space models (e.g., Mamba <cit.>) and linear attention (e.g., LongT5 <cit.>), cannot pass the needle test, so we do not include them into our discussion. Recent work from <cit.> shows that there exists a set of special attention heads responsible for retrieving imporant information from the context. Their discovery indicates that at least for some layer and some head, the full attention over most of the input tokens should be retained. Below we discuss the compressibility of the KV cache from its four dimensions: layer, head, token and hidden. Table <ref> lists some great existing efforts and what metrics they improve. Layer Here the basic hypothesis is that some tasks may not require the full-depth computation <cit.>. Skipping some layers during prefilling could be beneficial to all four metrics because it simultaneously reduces prefilling flops and the size of the KV cache. In fact, the layer dimension may be radically reduced from the results of existing works like <cit.>, and it might be possible to only keep one layer KV cache for long-context tasks, which is a 1/60 compression ratio. Head Here the basic hypothesis is that some heads are specialized for retrieval and long-context related capabilities <cit.>, so it may be possible to retain retrieval heads and prune others. Note that head pruning typically happens after prefilling, meaning that they only improve decoding, concurrency and context-switching, but prefilling remains the same expensive. In general, it seems that at the head dimension is of very high level sparsity, and the number of heads might be possible to radically removed, e.g., <cit.> show the number of strongest retrieval heads is less than 20, which could potentially translate to a 20 / 1024 compression ratio. Token For the token dimension, the basic hypothesis is that if information of a token can be inferred from its context, we can compress this token by either dropping it <cit.> or merging it with its neighbors <cit.>. Most of the compression at the token dimension does not much improve prefilling, but they typically improves concurrency, decoding and context switching. Currently, it seems that the token dimension might not be as sparse as the layer and head dimension because most of the tokens have to be retained for precise retrieval. We have not yet seen any work showing the potential of more than 50% compression ratio on the token dimension. Hidden There is not much work on reducing hidden dimension except for quantization <cit.>, presumably because the hidden size is already 128, too small to be reduce further. 
Yet it may still be worth trying applying dimension reduction like LoRA <cit.> on the KV cache, particularly given recent progress from DeepSeek V2 <cit.> which introduces LoRA-like idea that effectively reduces KV head. An important caveat here is that many existing works may only emphasize one aspect of the problem. For example, TriForce <cit.> only considers the decoding latency using speculative decoding. It does not make the KV cache smaller and even has a tradeoff of increased GPU memory consumption from the additional KV cache from the draft model. Many existing works are also orthogonal, such that their advantages from different aspects may join force. For example, if we could reduce the KV cache to be only 1 layer or 10 heads and keep only 50% of the tokens, we will have about 1000x performance improvements. This naturall leads to one call for research: Can we integrate existing efforts into an end-to-end system and push full-stack optimization? §.§ Relative Cost Between Prefilling and Decoding To end-to-end optimize a serving system, one important consideration is the relative cost between prefilling and decoding. Figure <ref> shows the relative latency under two model sizes (Yi 34B and Command R+ are open-source GPT3.5 and GPT-4 level, respectively). We see that as the model becomes large and as the context length becomes longer, the cost gradually shifts from decoding to prefilling. Also for tasks that require multiple rounds of user interactions (e.g., coding agent and AI companion), their decoding cost is generally higher than tasks of less rounds of conversation (e.g., document QA). Given that most of the existing inference optimization such as speculative decoding <cit.> and vLLM <cit.> targets the short-context regime, their improvements may not be significant enough for long-context deployment. Below we discuss further optimization directions. Optimizing Prefilling Since prefilling is compute bound, the space for optimization is not actually very large, and mostly reduce to decreased flops. One straightforward flop reduction is local or linear attention (e.g., the sliding-window attention used by Mistral <cit.>), which removes the quadratic attention term in Eq. <ref>. However, Fig. <ref> shows that for context length of less than 50K, the gain of linear attention is quite limited. Eventually, to reduce the prefilling latency one many still fall back to smaller and shallower models, like the practice of YOCO <cit.>. Optimizing Decoding Compared to prefilling, the space for optimizing decoding is much larger, not only because there exist a large level of sparsity within the KV cache as we discussed above, but also because of existing techniques like speculative decoding and their long-context variants like TriForce <cit.>. So in general it might be relatively easier to optimize decoding, but it is important to keep in mind that for large models of long context, it is the prefilling stage that takes most of the time (see Fig. <ref> the Command R+ 200K case). § CONCLUSIONS In this work, we give a detailed analysis of the challenges in deploying long-context transformers. Our eventual objective is to reduce the serving cost of 1M context to be as cheap as 4K, such that we can democratize emerging AI applications like video understanding and generative agents. 
We describe a concurrent programming framework to illustrate the end-to-end user interaction session based throughput, and decompose it into four key performance metrics: concurrency, prefilling, decoding, and context switching. We discuss how common factors influence the four metrics and how existing works focus on different metrics. We believe there are great research opportunities to integrate existing efforts to build a strong end-to-end long-context serving system and believe that this work can serve as the foundation for full-stack long-context inference optimization. plainnat
http://arxiv.org/abs/2405.09417v1
20240515151242
Study of charged Lepton Flavor Violation in electron muon interactions
[ "Ran Ding", "Jingshu Li", "Meng Lu", "Zhengyun You", "Zijian Wang", "Qiang Li" ]
hep-ex
[ "hep-ex", "hep-ph" ]
Highly Tunable Ru-dimer Molecular Orbital State in 6H-perovskite Ba_3MRu_2O_9 J.P. Clancy May 20, 2024 ============================================================================= § INTRODUCTION Collider experiment serves as a crucial tool for precision measurement of the standard model (SM) and search for new physics beyond the SM (BSM), with the experimental technology constantly being improved and evolved. In the near future, the High-Luminosity Large Hadron Collider (HL-LHC) <cit.>, the Future Circular Collider (FCC) <cit.> or the Circular Electron Positron Collider (CEPC) <cit.> may become important instruments for the next generation high energy frontier research. While recently with the continuous development of muon acceleration technology, the muon collider has also become an increasingly popular consideration. Since it integrates the advantages of electron colliders and hadron colliders, the muon collider may become a golden factory for studying various new physics processes <cit.>. On the other hand, high energy muon beams can also be used to create electron muon collisions. As early as ten to thirty years ago, numerous research efforts have already focused on the potential of electron muon collider <cit.>. In recent years, as the construction of high energy lepton collider gradually became feasible from the engineering standpoint, interest in electron-muon collision has been reignited <cit.>. Recently, a new collider proposal, μTRISTAN, has been proposed based on an ultra-cold muon technology developed for the muon (g-2) experiment at J-PARC <cit.>. It includes a μ ^+ μ ^+ collider and a μ ^+ e^- collider, in which we are interested in the later. The main parameters from the μTRISTAN eμ collider proposal <cit.> are listed in Tab.<ref>. According to several phenomenological study, μTRISTAN may have certain potentials on measurements related to Higgs and new physics searches <cit.>. Meanwhile, utilizing high-density muon beams to strike fixed targets can also provide a possibility to search for new physics. Many such attempts have been proposed, including the Muon Missing Momentum (M^3) proposal at Fermilab <cit.>, and a recent idea of searching for muonic force carriers by using ATLAS detector as a fixed target <cit.>. In this study we also investigate a fixed electron-target experiment with a muon beam in addition to the eμ collider. Since lepton flavor in the initial state is non-zero, electron muon interaction can strongly avoid many potential background processes which would occur at different-sign muon colliders or electron-positron colliders, thus possessing higher sensitivity to new physics signals, typically the charged Lepton Flavor Violation (cLFV) processes. In the SM framework, the cLFV processes are strongly suppressed due to the tiny mass of neutrinos, hence unobservable in the current experiments yet. However, it may be much enhanced in various BSM models, such as super-symmetry (SUSY) <cit.>, leptoquark <cit.>, two-Higgs-doublet <cit.>, and the heavy neutral gauge bosons Z' <cit.> studied in this paper. In the past decades, searches for the cLFV process were performed in different channels with several approaches, typically the high intensity muon-based experiments including μ ^+ → e ^+ γ (MEG) <cit.>, μ ^+ → e^+ e^+ e^- (SINDRUM) <cit.> and μ ^- N → e^- N (SINDRUM 2) <cit.>, as well as the collider-based searches for cLFV decays of Z <cit.>, Higgs <cit.> and several hadron resonances <cit.>. 
Meanwhile, there will be continuous new experiments conducted in the near future to constantly improve the existing limits, such as MEG2 <cit.>, Mu3e <cit.>, COMET <cit.> and Mu2e <cit.>. In this study, we consider the cLFV processes based on the interactions of electron and muon in two scenarios: asymmetric electron muon collision at the eμ collider and fixed electron-target experiment striking with the muon beam. For the former case, the center-of-mass energy includes the energy point of μTRISTAN and even higher. While for the latter case, we investigate the muon energy around several tens of GeV to test the Z' couplings at the low energy bound. § PHYSICS PROCESSES AND MONTE CARLO SIMULATION §.§ cLFV in Z' model By introducing an additional U(1) gauge symmetry into the SM framework, it will correspond to a neutral gauge boson Z'. Since the heavy neutral gauge bosons are predicted in many BSM models, it may be one of the most motivated extensions of the SM <cit.>. In this study, Z' is considered to have the same coupling and chiral structure as the standard model Z^0, but allows for the lepton flavor violation, similarly as Ref. <cit.>. The coupling strength of the Z' and leptons can be described by a matrix λ as Eq. <ref> λ =([ λ_e e λ_e μ λ_e τ; λ_μ e λ_μμ λ_μτ; λ_τ e λ_τμ λ_ττ ]). Generally, it represents the strength of the cLFV couplings relative to the SM couplings, assuming that the diagonal elements are 1, while the off-diagonal elements are usually a higher order of magnitude. Therefore, the cases that break lepton flavor conservation twice would not be considered in the further study. After introducing the Z' boson, the cLFV processes mentioned in Sec.<ref> can be enhanced by the diagrams shown in Fig.<ref>. The branching ratio limits would be transformed to the coupling λ _ij <cit.> and compared with our results based on eμ interaction. §.§ Physics processes The signal cLFV processes studied in this paper are listed in Tab.<ref>. There are two diagrams for the Z' mediated cLFV process μ ^+ e^- → l^+l^-, as shown in Fig.<ref> (taking μ ^+ e^- → e^+ e^- as an example). In particular, the s-channel is not included in the processes μ ^+ e^- →μ^+ τ^-. In the simulation, all coupling strengths are considered as 1. And the mass of Z' floats from 0.2 GeV to 5 TeV in the electron muon collision experiment, and within 0.50 GeV in the electron-target experiment with a muon beam. While several background processes may occur on the collider and affect the signal that we are interested in, these processes include the standard model backgrounds divided by the number of final state particles and accidental background caused by particle mis-identification. In conclusion, the specific signals and their background processes are shown in Tab.<ref>. §.§ Event generation and simulation Both signal and background events are simulated by MadGraph5_aMC@NLO(MG) <cit.> version 3.1.1, which is one of the key tools for Monte Carlo event generation in high energy physics, then showered and hadronized by Pythia8 <cit.>. Next, Delphes <cit.> version 3.5.1 is utilized to simulate detector effects with the default configuration card for the detector at the muon collider. In the Monte Carlo generation, some preliminary requirements are applied to remove the physically unreasonable events. In eμ collisions, the transverse momentum of charged leptons is required to satisfy p_T > 10 GeV and the absolute pseudo-rapidity of charged leptons |η| > 2.5. 
While in the muon-beam electron-target simulation, the filtering criteria of p_T and |η| would be relaxed. Then in detector simulation, parameters such as efficiency of particle detection are set according to the Delphes cards. There is some difference between the τ simulation and e/μ. For those final states with τ in Tab.<ref>, although they can go through any decay chains in reality, in this study we only consider the hadronic decay channels (about 60% of the total decay) and reconstruct it by the Jet information. § STATISTICAL ANALYSIS AND SENSITIVITY RESULT §.§ Asymmetric collision In this scenario we consider two kinds of asymmetric collision with electron and anti-muon beam: E_e = 30 GeV and E_μ = 1 TeV (√(s) = 346 GeV), or E_e = 200 GeV and E_μ = 3 TeV (√(s) = 1.55 TeV). The former is based on the proposal of μTRISTAN (the polarization of each beam is not considered), and the latter is a higher energy assumption according to other current beam designs. §.§.§ Background study After setting the preliminary requirements as mentioned in Sec.<ref>, signal candidate events should be with the same charged leptons corresponding to the signal in the final state. Since the initial flavor in eμ collision is non-zero, vast majority of the SM background are forbidden, while the remaining portion also exhibits significant kinematic differences from the signal processes, especially in ee or μμ channel. The invariant mass distributions of the final state dileptons of these two channels are shown in Fig.<ref>. Only considering the interval near the center-of-mass energy, the SM background values are extremely low. And compared with a similar study on the different-signs election or muon collider <cit.>, eμ collision has a cleaner signal window. Due to the low level of the SM background, we also investigated the accidental background caused by eμ mis-identification, where a final state muon is assigned the mass of the electron, or vice versa. Typically it would let the eμ scattering process coming into the background of μ ^+ e^- → e^+ e^- and μ ^+ e^- →μ^+ μ^-. The probability of eμ mis-identification is set as 10^-6. The invariant mass distribution of this process is extremely close to the signal. while for μ ^+ e^- →μ ^+ τ ^- since the reconstruction of τ would inevitably result in a certain loss of energy, there is a considerable overlap in the signal and background distributions. It will be optimized in the next section. §.§.§ Sensitivity result Based on the distributions of the signal and background, we truncate the invariant mass to remove the events with significantly deviating from center-of-mass energy. The specific truncation point in τ channels will be determined by scanning and selecting the maximum value of S/(a/2+√(B)), where S is the number of signal processes, B is the total number of the weighted background and a is the significant value which is considered as 3 <cit.>. The weight is defined by n_x = σ _x L/N, where σ _x is the cross section of each process, L is the luminosity and N is the generated number. Then the binned histograms of leptons p_T distributions are utilized for the statistic analysis. The test statistics Z_i is calculated by Z_i := 2[n_i - b_i +b_i ln(n_i / b_i)] for 90% confidence level (C.L.) exclusion, where i is the index of each bin, n is the weighted number of observed events including signal and background, and b is the weighted number of background <cit.>. Then the total Z=Σ _i Z_i would be subject to a χ ^2 distribution. 
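A schematic version of this binned exclusion scan is sketched below. It is our own toy illustration: it transcribes the Z_i formula as written above, uses placeholder bin contents rather than the actual Delphes-level p_T histograms, and the cross-section units and signal shape are invented for the example; only the luminosity of 300 fb^-1 corresponds to the 0.3 ab^-1 benchmark.

```python
import numpy as np
from scipy.stats import chi2

def total_Z(sig_counts, bkg_counts):
    # Z_i = 2*[n_i - b_i + b_i*ln(n_i/b_i)], with n_i = s_i + b_i, summed over p_T bins
    n, b = sig_counts + bkg_counts, bkg_counts
    return np.sum(2.0 * (n - b + b * np.log(n / b)))

def excluded_xsec(sig_shape, bkg_counts, lumi_fb, cl=0.90):
    """Scan the signal cross section (fb) until total Z crosses the chi^2 quantile
    with dof = number of bins."""
    threshold = chi2.ppf(cl, df=len(bkg_counts))
    for xsec in np.logspace(-4, 3, 2000):
        if total_Z(xsec * lumi_fb * sig_shape, bkg_counts) > threshold:
            return xsec
    return None

# toy placeholders: 20 p_T bins, a flat background and a unit-normalised signal shape
bkg   = np.full(20, 5.0)
shape = np.exp(-0.5 * ((np.arange(20) - 12) / 3.0) ** 2)
shape /= shape.sum()
print("toy 90% C.L. excluded cross section [fb]:", excluded_xsec(shape, bkg, lumi_fb=300.0))
```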
The number of degrees of freedom is given by the number of bins. By iteration, we obtain the signal cross section excluded at 90% C.L., shown in Fig.<ref>. Then, based on the MG calculation, we obtain the corresponding value of λ _ll'×λ _ SM, as shown in Fig.<ref>. The λ _eμ and λ _eτ results are calculated at the two energy points, and the luminosity is taken as 0.3 ab^-1 or 5 ab^-1. Several current and prospective limits are also included for comparison with our results. The branching-ratio constraints from other experiments are summarized in Tab.<ref>. As expected, the limits given by other experiments in the τ channels are much weaker than those in the eμ channel. In our collider study, however, the two are similar, since the main influencing factor here is the signal cross section, and the cross sections of the different signals are quite similar. In the λ _eμ channel, the strictest constraint comes from the results of μ - e conversion, and our results are less competitive than those of the high-intensity muon-based experiments. In the λ _eτ channel, the current limits from τ→ e e e and τ→ e μμ perform best across the entire interval, but they are orders of magnitude weaker than the corresponding limits on λ _eμ. Among our results, the constraints from the 1.55 TeV eμ collider are more stringent than other existing results when M_Z' > 0.5 TeV, and those from the 346 GeV collider over essentially the entire M_Z' interval. Even compared with the prospective limits of those processes at Belle II, our results still have certain advantages around the resonance region. On the other hand, comparing the results of eμ collisions with those of e^+e^- or μ ^+ μ ^- collisions <cit.>, for the same luminosity the eμ results are better than the μ ^+ μ ^- ones by an order of magnitude at their respective resonance points. It cannot be ignored, however, that the center-of-mass energy of the eμ collider is relatively low, and the other experiments are more advantageous in the low-energy region. Moreover, in practice it is difficult for an eμ collider to reach the same luminosity as a μ ^+ μ ^- collider. Therefore, the relative advantages of these two collision scenarios still need to be assessed on the basis of more practical factors. To be precise, what we are comparing in Fig.<ref> and Fig.<ref> is the coupling product λ _eμ (λ _eτ) ×λ _ SM. Based on the assumption that the Z' has the same coupling structures and strengths as the standard model Z^0, as mentioned in Sec.<ref>, that is, λ _ SM = 1, we naturally obtain an estimate of λ _eμ or λ _eτ. Strictly speaking, however, this requirement may not hold in more general models, and deviations of the standard model couplings must be considered. In that case, each process gives a product of different coupling strengths, for example λ _eμ×λ _ee (μ→ eee) and λ _eμ×λ _ll (μ→ eγ), where l represents e or μ. The high-intensity muon-based experiments, however, are unable to measure the coupling λ _eμ×λ _μμ. This coupling requires the existence of a Z'-e-μ vertex and a Z'-μ-μ vertex, which is almost impossible to probe except through direct lepton interactions. At the eμ collider, we can measure this coupling through μ ^+ e^- →μ ^+ μ ^-, as shown in Fig.<ref>, with a sensitivity similar to that of μ ^+ e^- → e^+ e^-. §.§ Electron-target experiment with a muon beam We now focus on the low-energy region, where we conduct Monte Carlo simulations of the muon-beam electron-target processes μ ^+ e^- → e^+ e^- and μ ^+ e^- →μ ^+ μ ^-.
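For orientation only, the following short sketch (with our own helper names, not part of the simulation chain) evaluates the fixed-target center-of-mass energy as a function of the muon beam energy, and the beam energy needed to sit on a given Z' resonance; the Z' masses chosen below are purely illustrative and all masses are in GeV.

import numpy as np

m_e, m_mu = 0.000511, 0.105658      # GeV

def e_cm(E_mu):
    """Center-of-mass energy for a muon beam on an electron target."""
    return np.sqrt(2.0 * E_mu * m_e + m_mu**2 + m_e**2)

def resonant_beam_energy(M_Zp):
    """Muon beam energy at which e_cm(E_mu) equals a given Z' mass."""
    return (M_Zp**2 - m_mu**2 - m_e**2) / (2.0 * m_e)

for M in (0.25, 0.35, 0.50):        # illustrative Z' masses in GeV
    print(M, round(resonant_beam_energy(M), 1))   # beam energies of tens of GeV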
It is essential to highlight that the masses of the muon and electron cannot be disregarded, imposing a lower limit on the incident muon beam energy, equal to its mass. Additionally, the center-of-mass energy E_cm=√(2E_μm_e + m_μ^2 +m_e^2) possesses its own lower threshold. Consequently, in our simulations we vary M_Z^' over three distinct sets of values for each process, ensuring that the muon energy is scanned as close to the lower energy limit as feasible. The outcomes are shown in Fig.<ref> and Tab.<ref>. Remarkably, a pronounced resonance is observed when E_cm is in the proximity of M_Z^'. For the processes we are studying, taking μ ^+ e^- →μ ^+ μ ^- as an example, the main corresponding background process is μ ^+ + e^- →μ ^+ + μ ^- + γ <cit.>. Its relative rate is about 10^-8, so we can perform an essentially background-free estimation. Since σ∝ (λ _e μ×λ _ll)^2 and the event rate is R = L ·σ = dN/dt· n_2 · dx ·σ, limits on the couplings can be estimated from the reaction cross section. As a rough estimate, assuming a 10 cm thick lead target, the incident muon rate is about dN/dt∼ 10^6 s^-1, the electron number density of lead is about n_2 ∼ 10^24 cm^-3, and one year corresponds to about 10^7 s. From this, 90% C.L. exclusion lines can be obtained on the products of the diagonal couplings λ_ee and λ_μμ with the off-diagonal coupling λ_e μ. The current and prospective limits from low-energy experiments are converted into limits on λ_e μ×λ_SM to compare with our results. As shown in Fig.<ref>, in comparison with current results and the prospective MEG2 experiment, our ee-channel simulation gives more stringent upper limits on the coupling λ_μ e. Furthermore, as mentioned in Sec.<ref>, we can also set new limits on the coupling λ_μμ×λ_μ e, as shown in Fig.<ref>, which has not been constrained by experiments so far. § CONCLUSION With the continuous development of muon technology, in addition to building high-energy muon colliders, the eμ interaction also offers promising research prospects. In this work, we investigated the cLFV processes mediated by a massive neutral gauge boson (Z') in eμ collisions and in an electron-target experiment with a muon beam, in order to explore the potential of eμ interactions in new physics searches. We conducted simulation studies of the cLFV processes μ ^+ e^- → e^+ e^-, μ ^+ e^- →μ^+ μ^- and μ ^+ e^- →μ^+ τ^-, using MadGraph5_aMC@NLO, Pythia8 and Delphes. We then provide 90% C.L. limits on the coupling strengths λ _eμ and λ _eτ for different M_Z'. By comparing the sensitivity results with current and prospective limits, it is shown that eμ interactions have certain advantages in the τ channel for a heavy Z'. Furthermore, through the direct interaction of leptons, the otherwise inaccessible coupling product λ_eμ×λ_μμ can be measured, which is unmatched by most other experiments. This work is supported in part by the National Natural Science Foundation of China under Grant No. 12150005, 12075004, 12175321, 12061141003; the National Key Research and Development Program of China under Grant No. 2018YFA0403900; National College Students Innovation and Entrepreneurship Training Program, Sun Yat-sen University; State Key Laboratory of Nuclear Physics and Technology, Peking University under Grant No. NPT2020KFY04, NPT2020KFY05.
http://arxiv.org/abs/2405.09455v1
20240515155043
Efficient pooling designs and screening performance in group testing for two type defectives
[ "Hiroyasu Matsushima", "Yusuke Tajima", "Xiao-Nan Lu", "Masakazu Jimbo" ]
stat.CO
[ "stat.CO", "cs.IT", "math.IT" ]
Efficient pooling designs and screening performance in group testing for two type defectives This work was supported by JSPS KAKENHI Grant Number JP22K11943. 1st Hiroyasu Matsushima Data Science and AI Innovation Research Promotion Center Shiga University, Hikone, Japan 0000-0001-7301-1956 2nd Yusuke Tajima Data Science and AI Innovation Research Promotion Center Shiga University, Hikone, Japan yusuke-tajima@biwako.shiga-u.ac.jp 3rd Xiao-Nan Lu Department of Electrical, Electronic and Computer Engineering Gifu University, Gifu, Japan 0000-0001-7881-8505 4th Masakazu Jimbo Center for Training Professors in Statistics The Institute of Statistical Mathematics Tokyo, Japan jimbo@ism.ac.jp May 20, 2024 ==================== Group testing is utilized when we want to find a few defectives among a large number of items. Testing n items one by one requires n tests, but if the ratio of defectives is small, group testing is an efficient way to reduce the number of tests. Much research has been devoted to group testing for a single type of defective. In this paper, we consider the case where two types of defectives, A and B, exist. For the two types of defectives, we develop a belief propagation algorithm to compute the marginal posterior probabilities of defectives. Furthermore, we construct several kinds of collections of pools in order to test for A and B, and by utilizing our belief propagation algorithm we evaluate the performance of group testing through simulations. group testing, pooling design, belief propagation § INTRODUCTION Group testing is utilized when we want to find a few defectives among a large number of items. For example, group testing can be applied to PCR tests for finding defective specimens, to water quality tests for the presence of harmful substances in wastewater from multiple locations, etc. Testing n items one by one requires n tests, but the ratio of defective items is often small (0.0001 to 0.01). In such cases, a test can be performed on a mixed pool of multiple items, and if the result is negative, it can be determined in a single test that all items in the pool are negative. If the result is positive, at least one of the items in the pool is defective. By testing various combinations of pools, the marginal posterior probability that each item is defective can be computed from the test results of a much smaller number of pools than the total number of items. However, the false positive/negative (FP/FN) probability of each test must be taken into account when making a positive/negative decision. Research on group testing originates from the syphilis testing by Robert Dorfman <cit.>.
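As a quick quantitative illustration of this saving, the sketch below evaluates the expected number of tests per item for classical two-stage (Dorfman) group testing with pool size k and defect probability p, assuming error-free tests; the function name is ours and the numbers are only illustrative.

def dorfman_tests_per_item(p, k):
    """Expected tests per item for two-stage Dorfman group testing:
    1/k pooled tests per item, plus k individual retests when the pool is positive."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

for p in (0.0001, 0.001, 0.01):
    best_k = min(range(2, 101), key=lambda k: dorfman_tests_per_item(p, k))
    print(p, best_k, round(dorfman_tests_per_item(p, best_k), 4))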
Group testing is classified into two types: noiseless testing (with no FN/FP errors) and noisy testing. It is also classified into adaptive testing, in which the next pool is determined based on the results of the previous tests, and nonadaptive testing, in which all pools are tested at once. Especially in the case when a single test is time-consuming, a large number of pools can be tested at once to identify defective items (<cit.>, <cit.>). Algorithms such as Belief Propagation (BP) and MCMC are used for this purpose (<cit.><cit.><cit.>). In this paper, we consider the case where two types of defectives, A and B, exist. If group testing is conducted separately for each of A and B, the number of tests is twice that for a single type. However, as shown in Fig.<ref>, it is expected that we can reduce the number of tests by mixing pools that perform tests for each of A and B with pools that perform a `test against AB' (i.e., a test whose result is positive if either A or B is defective). For two types of defectives, we first construct a belief propagation algorithm to compute the marginal posterior probabilities of defectives. Second, we construct several kinds of collections of pools for the tests for A, B, and AB by utilizing finite affine geometry. Then, by utilizing our belief propagation algorithm, we evaluate the performance of group testing through simulations. § GROUP TESTING INCLUDING TWO TYPES OF DEFECTIVES In the case of two types of defectives, we construct three kinds of pools: (i) a test for A, (ii) a test for B, and (iii) a test which reacts to either A or B. A set of various pools is called a pooling design. The combinatorial structure of a pooling design determines the efficiency of group testing. Let C={c_1, …, c_n} be the set of items. Let X_j^A=1 if item c_j is defective for type A and X_j^A=0 otherwise. X_j^B is defined similarly. Also, X_j^AB=X_j^A ∨ X_j^B. A subset G={c_j_1, …, c_j_k} of C is called a pool. We simply write 1, …, n to identify the elements of C by their subscripts, and sometimes write C={1, …, n} or G={j_1, …, j_k}. Let 𝒢^A, 𝒢^B, and 𝒢^AB denote the sets of pools that test for A, B, and AB, respectively, and let Z_i^A=⋁_j ∈ G_i X_j^A for a pool G_i∈𝒢^A that tests for A. Let Z^A=(Z_i^A | G_i∈𝒢^A). Define Z^B and Z^AB in the same way. The observation S_i^A for a pool G_i∈𝒢^A takes the binary values 0 or 1, and we let S^A=(S_i^A | G_i∈𝒢^A). The same applies to S^B and S^AB. Sensitivity and specificity are defined by p(1|1) =Pr(S_i^A=1 | Z_i^A=1)=Pr(S_i^B=1 | Z_i^B=1) =Pr(S_i^AB=1 | Z_i^AB=1), p(0|0) =Pr(S_i^A=0 | Z_i^A=0)=Pr(S_i^B=0 | Z_i^B=0) =Pr(S_i^AB=0 | Z_i^AB=0). That is, the probabilities p(1|0) and p(0|1) of FP and FN are assumed to be constant regardless of A, B and AB. Let p_A, p_B be the defective rates for types A and B. The marginal posterior probability that each item c_j is positive/negative for A, B under the observations s^A, s^B, and s^AB is given by Pr(X_j^A=a, X_j^B=b | s^A, s^B, s^AB) for each case of a,b =0, 1. Let X^A=(X_j^A | j ∈ C), X^B=(X_j^B | j ∈ C), and let X_-j^A=(X_j'^A | j' ∈ C∖{j}) be the vector of indicators for the items other than j. Also, the event that X_j^A=a for item j and that X_-j^A=x_-j^A for the rest is written as X^A=(x_-j^A, a). Similar notation is used for B. The marginal posterior probability can be written as follows: Pr( X_j^A=a, X_j^B=b | s^A, s^B, s^AB) = K ∑_x_-j^A, x_-j^B Pr(X^A=(x_-j^A, a), X^B=(x_-j^B, b)) × Pr(S=s | X^A=(x_-j^A, a), X^B=(x_-j^B, b)), where S=(S^A,S^B,S^AB) and K=Pr(S=s)^-1.
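To make the expression concrete, a direct (exponential-time) evaluation of this marginal can be sketched as follows; pool memberships, observations and error rates are passed as plain Python lists, the function name is ours, and the routine is only practical for toy sizes.

from itertools import product

def exact_marginals(n, pools_A, pools_B, pools_AB, s_A, s_B, s_AB,
                    p_A, p_B, sens, spec):
    """Brute-force posteriors Pr(X_j^A=a, X_j^B=b | s) by summing over all
    2^(2n) assignments; pools_* are lists of item-index lists."""
    def lik(z, s):                      # Pr(S=s | Z=z) from sensitivity/specificity
        return (sens if s == 1 else 1 - sens) if z == 1 else \
               (1 - spec if s == 1 else spec)

    post = [[[0.0] * 2 for _ in range(2)] for _ in range(n)]
    for xA in product((0, 1), repeat=n):
        for xB in product((0, 1), repeat=n):
            w = 1.0
            for j in range(n):          # independent Bernoulli priors
                w *= (p_A if xA[j] else 1 - p_A) * (p_B if xB[j] else 1 - p_B)
            for pools, x, s in ((pools_A, xA, s_A), (pools_B, xB, s_B)):
                for G, si in zip(pools, s):
                    w *= lik(int(any(x[j] for j in G)), si)
            for G, si in zip(pools_AB, s_AB):
                w *= lik(int(any(xA[j] or xB[j] for j in G)), si)
            for j in range(n):
                post[j][xA[j]][xB[j]] += w
    for j in range(n):                  # normalize by Pr(S=s)
        tot = sum(sum(row) for row in post[j])
        post[j] = [[v / tot for v in row] for row in post[j]]
    return post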
This calculation involves a sum over 2^(2n-2) terms, and when n is large the computational complexity is O(2^(2n)). § BELIEF PROPAGATION ALGORITHM Pooling designs can be represented by three bipartite graphs (C, 𝒢^A, E^A), (C, 𝒢^B, E^B), (C, 𝒢^AB, E^AB), whose vertices are connected if a pool G contains an item c, as shown in Fig. <ref>, where E^A, E^B, E^AB are the sets of edges of these bipartite graphs. Let E=E^A∪ E^B∪ E^AB, and let 𝒢(c) denote the set of pools containing item c. In order to calculate the marginal posterior probability, we develop the following algorithm based on belief propagation for screening two types of defectives. Step 1 (Initialization of Q): For each edge (c, G) ∈ E, let Q̅_cG^(0)(x, y) := Pr(X_c^A=x) Pr(X_c^B=y), (x, y) ∈{0, 1}^2. Let t:=1 and let ε > 0. Step 2 (Computation of R): When G∈𝒢^A, for each edge (c, G)∈ E^A and for y=0, 1, let R_Gc^(t) (0, y):= p(s_G | 1) + (p(s_G | 0) - p(s_G | 1)) ×∏_c' ∈ G∖{c}(Q̅_c'G^(t-1)(0, 0)+Q̅_c'G^(t-1)(0, 1)), R_Gc^(t) (1, y):=p(s_G | 1). When G∈𝒢^B, for each edge (c, G)∈ E^B and for x=0, 1, R_Gc^(t) (x, 0):= p(s_G | 1) + (p(s_G | 0) - p(s_G | 1)) ×∏_c' ∈ G∖{c}(Q̅_c'G^(t-1)(0, 0)+Q̅_c'G^(t-1)(1, 0)), R_Gc^(t) (x, 1):=p(s_G | 1). When G∈𝒢^AB, for each edge (c, G)∈ E^AB, let R_Gc^(t) (0, 0):= p(s_G | 1) + (p(s_G | 0) - p(s_G | 1)) ∏_c' ∈ G∖{c}Q̅_c'G^(t-1)(0, 0), R_Gc^(t) (x, y):=p(s_G | 1), (x, y) ≠ (0, 0). Step 3 (Computation of Q): For each edge (c, G)∈ E, let Q_cG^(t)(x, y) := Pr(X_c^A=x) Pr(X_c^B=y) ∏_G' ∈𝒢(c)∖{G} R_G'c^(t)(x, y). Let K_cG^(t):=∑_x, y Q_cG^(t)(x, y), and normalize Q̅_cG^(t)(x, y):=Q_cG^(t)(x, y)/K_cG^(t). Step 4 (Repeat over t): If max_(c,G)∈ E max_x, y |Q̅_cG^(t-1)(x, y) -Q̅_cG^(t)(x, y) | < ε, then go to Step 5; otherwise, let t:=t+1 and go to Step 2. Step 5 (Computation of the marginal posterior probability): For each c ∈ C, let Q_c (x, y) := Pr(X_c^A=x, X_c^B=y) ×∏_G∈𝒢(c) R_Gc^(t)(x, y), (x, y) ∈{0, 1}^2. Finally, let K_c:=∑_x, y Q_c(x, y) and normalize Q̅_c(x, y):=Q_c(x, y)/K_c. In this study, we use the above BP algorithm. § POOLING DESIGN In group testing, the `goodness' of the pooling design also affects the screening efficiency, in addition to the effectiveness of the screening algorithm. Combinatorial properties such as d̅-separability and d-disjunctness have been studied in the case of noiseless testing with a single kind of defective (see, for example, Du et al. <cit.>). When a single type of defective is taken into account, for a set of pools 𝒢={G_1, …, G_m}, the pair (C, 𝒢) is called a pooling design. A pooling design is represented by an m × n matrix M=(m_ij), where m_ij= 1 if item c_j is included in pool G_i and m_ij= 0 otherwise. Each row of M corresponds to a pool and each column corresponds to an item. For an item c_j, T_j={i | m_ij=1 } is called the support of c_j. Let 𝒯={T_1, …, T_n} be the set of supports. Given a positive integer d, if for any 0 ≤ d_i ≤ d (i=1, 2) and any two distinct subfamilies T_1, …, T_d_1∈𝒯 and T_1^', …, T_d_2^'∈𝒯 we have T_1 ∪⋯∪ T_d_1 ≠ T_1^'∪⋯∪ T_d_2^', then the pooling design is said to be d̅-separable. In the case when we consider a single type of defective and no FP/FN errors exist, that is, noiseless testing, if the pooling design is d̅-separable, it is known that up to d defective items can be accurately identified (see, for example, <cit.>). Also, given a positive integer d, if for any d distinct supports T_1, …, T_d ∈𝒯 and any T_0∈𝒯 such that T_0 ≠ T_i (i=1, … ,d ) we have T_0 ⊄T_1 ∪⋯∪ T_d, then the pooling design (C, 𝒢) is said to be d-disjunct. It is known that if a pooling design is d-disjunct, then it is d̅-separable. For noisy testing with FP/FN, a pooling design with large d̅-separability is expected to identify or screen more defective items.
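As a small aid for checking this property on concrete matrices, the following sketch tests d-disjunctness directly from the definition (assuming, for simplicity, that all column supports are distinct); it enumerates all d-subsets of the other columns and is therefore usable only for small designs. The function name and the toy matrix are ours.

from itertools import combinations

def is_d_disjunct(M, d):
    """d-disjunct check for a 0/1 pooling matrix M (rows = pools, columns = items):
    no column support may be contained in the union of any d other column supports.
    Exponential in d -- for small designs only."""
    n = len(M[0])
    supports = [frozenset(i for i, row in enumerate(M) if row[j]) for j in range(n)]
    for j0, T0 in enumerate(supports):
        for idx in combinations([j for j in range(n) if j != j0], d):
            if T0 <= set().union(*(supports[j] for j in idx)):
                return False
    return True

M = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]              # toy design: 3 pools, 3 items
print(is_d_disjunct(M, 1), is_d_disjunct(M, 2))    # True False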
A similar combinatorial structure is defined in the case of two types A and B. Let M_A, M_B, M_AB be the matrices of the pooling designs corresponding to the sets of pools 𝒢^A, 𝒢^B, 𝒢^AB that test for A, B, and AB, respectively. Let M̃_A=[ M_A; M_AB ] and M̃_B=[ M_B; M_AB ] be the matrices obtained by stacking M_AB below M_A and M_B, respectively. Lu et al. <cit.> showed that if both M_A and M_B are (d-1)-disjunct and both M̃_A and M̃_B are d̅-separable, then all defective items can be correctly identified when the total number of defective items of A and B is less than d. This property is called (2, d̅)-separability. In a combinatorial sense, large separability is desired for identifying defectives. However, if non-separable configurations occur in a pooling design only with small probability, the design may be able to identify more defectives than its nominal separability guarantees. Using the BP algorithm, we wish to investigate the relationship between the combinatorial structure of the pooling design and the identifiability or discriminability of defective items when two types of defectives are included and FP/FN errors are present. § CONSTRUCTION OF POOLING DESIGN It is known that BP algorithms converge to the exact marginal posterior probabilities if there are no cycles in the bipartite graphs of a pooling design. However, for pooling designs in group testing, we cannot avoid cycles in the bipartite graphs if we want to construct efficient designs. If there are short cycles in the bipartite graphs of a pooling design, the BP algorithm converges, at best, to approximate values containing some error, and often it does not converge at all. Hence, a pooling design without short cycles in its bipartite graphs is desired. The shortest possible cycle in a bipartite graph has length four. We want to avoid cycles of length four in the bipartite graphs (C, 𝒢^A, E^A), (C, 𝒢^B, E^B), (C, 𝒢^AB, E^AB). In other words, it is desired that any two rows (pools) in each of M_A, M_B, M_AB have at most one column (item) in which both contain a 1. This property is called the unique collinearity condition (see Uehara et al. <cit.>). In addition to the property of separability or disjunctness, it is thus required that a pooling design satisfy the unique collinearity condition. A combinatorial design called a `packing design' has this property. The following construction of a pooling design satisfies the unique collinearity condition. For a prime or a prime power q, let AG(3, q) be the 3-dimensional affine geometry over the finite field 𝔽_q={f_0, …, f_q-1}. Each point of AG(3, q) is represented as (y_0, y_1, y_2) (y_i ∈𝔽_q), and AG(3, q) consists of q^3 points. Let P_i be the plane defined by y_0=f_i (f_i ∈𝔽_q); each plane consists of q^2 points, and the planes P_i (f_i ∈𝔽_q) form q parallel planes. Let ℒ be the set of lines that intersect each P_i in exactly one point; then |ℒ|=q^4=n. Let the points be the set of pools and let C be the set of lines, corresponding to the items. Then the incidence (or adjacency) matrix M of the pools and items is determined. Partitioning the q^3 points into the q planes P_i (f_i ∈𝔽_q), we obtain q^2× n incidence submatrices M_i of the points on P_i versus the lines. Then M consists of the M_i stacked vertically, M=[ M_0; ⋮; M_q-1 ]. There are q^2 1's in each row of M_i and exactly one 1 in each column. Let K ⊂𝔽_q with |K|=k. By stacking the k (≤ q) matrices M_i (i ∈ K), we obtain a kq^2× n incidence matrix M_K. Each column of M_K has k ones, and thus M_K is (k-1)-disjunct. Assume that the probabilities of occurrence of the two types of defectives A and B are equal, that is, p_A=p_B.
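The construction can be prototyped in a few lines. The sketch below (restricted to prime q, with our own function names) builds the blocks M_i by parameterizing the admissible lines as {(t, c+a t, d+b t) : t ∈ 𝔽_q} with direction (1, a, b), and then verifies the column weight and the unique collinearity condition for a stacked M_K.

import numpy as np
from itertools import combinations

def affine_blocks(q):
    """Blocks M_i (q^2 x q^4) from AG(3, q) for prime q: rows are the points
    of the plane y0 = i, columns are the lines with direction (1, a, b),
    which meet every such plane in exactly one point."""
    lines = [(a, b, c, d) for a in range(q) for b in range(q)
             for c in range(q) for d in range(q)]
    blocks = []
    for i in range(q):
        M = np.zeros((q * q, len(lines)), dtype=int)
        for col, (a, b, c, d) in enumerate(lines):
            M[((c + a * i) % q) * q + (d + b * i) % q, col] = 1
        blocks.append(M)
    return blocks

q, k = 5, 3                           # small q for a quick check (q = 7 in the paper)
M_K = np.vstack(affine_blocks(q)[:k])
assert (M_K.sum(axis=0) == k).all()   # column weight k, hence (k-1)-disjunct
assert all(M_K[r] @ M_K[s] <= 1       # unique collinearity: two pools share at most one item
           for r, s in combinations(range(M_K.shape[0]), 2))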
Using the M_i from the above 3-dimensional affine geometry, we construct the incidence (or adjacency) matrices M_A, M_B, M_AB as follows. For K ⊂𝔽_q, let M_A=M_B be the kq^2× n matrix with the M_i (i ∈ K) stacked vertically, and let M_AB be the (q-k)q^2× n matrix with the M_i (i ∈ K^c) stacked vertically. In our simulations, the M_A, M_B, M_AB constructed above are used. § SIMULATION RESULTS In our simulations the FP and FN probabilities are fixed as p(1 | 0)=0.01 and p(0 | 1)= 0.03, respectively, and we assume that p_A=p_B=0.002. We use pooling designs generated from AG(3, 7), i.e., setting q=7. In each simulation, every item is tested k times for A, k times for B, and 7-k times for AB. The simulations are executed for k=1, 2, …, 6 using the corresponding matrices M_A, M_B, M_AB. We use distinct M_i's for M_A and M_AB, and also for M_B and M_AB. However, the M_i's need not be distinct between M_A and M_B, since from the separability point of view it is allowed that M_A=M_B. Each simulation is repeated 1000 times. Table <ref> shows the worst rank at which all true defectives are included with probabilities 95% and 99%, when the marginal posterior probabilities of each item being defective are sorted in descending order. In the table, note that the first row shows the number of defectives for each of A and B. Among the simulations, design (3) gives the most effective results. Even if the number of defectives is 10 for each of A and B, the screening or identification power is still high. On the other hand, designs (1), (5), and (6) have lower identifiability. In the case of (1), the number of individual tests for each item is one in each of M_A and M_B. This means that more replication is required to obtain the separability property. Designs (5) and (6) have low screening power at the 99% level in the case of 8 or 10 defectives. In these cases the low screening power may be due to the short cycles between M_A and M_B. From the separability point of view, it is allowed that M_A and M_B are identical, but when a BP algorithm is adopted it is required that M_A and M_B not be identical. § CONCLUSION This paper addresses the case where two types of defectives, A and B, exist in group testing. In order to screen for the two types of defectives, we developed a belief propagation algorithm to compute the marginal posterior probabilities of defectives. By utilizing finite affine geometry, we constructed several kinds of collections of pools for the tests for A and B, and simulations were conducted to evaluate the performance of group testing using our belief propagation algorithm. Through the simulation experiments on the adopted pooling designs, we suggest the following: the proposed BP algorithm shows high screening performance for two types of defectives when the number of defectives of each type is up to about 8, while its identification power decreases when short cycle structures are present in the adopted pooling design. Future work is needed to improve the pooling design and the BP algorithm.
http://arxiv.org/abs/2405.09978v1
20240516104504
Pedestrian evacuations with imitation of cooperative behavior
[ "Amir Zablotsky", "Marcelo N Kuperman", "Sebastián Bouzat" ]
physics.soc-ph
[ "physics.soc-ph", "nlin.AO" ]
Pedestrian evacuations with imitation of cooperative behavior Present address: Université Grenoble Alpes, CNRS, Laboratoire Interdisciplinaire de Physique, 38000 Grenoble, France. Instituto Balseiro, Bustillo 9500, (8400) Bariloche, Argentina. Instituto Balseiro, Bustillo 9500, (8400) Bariloche, Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina Gerencia de Física, Centro Atómico Bariloche (CNEA), (8400) Bariloche, Argentina. email: bouzat@cab.cnea.gov.ar. Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina Gerencia de Física, Centro Atómico Bariloche (CNEA), (8400) Bariloche, Argentina. We analyze the dynamics of room evacuation for mixed populations that include both competitive and cooperative individuals through numerical simulations using the social force model. Cooperative agents represent well-trained individuals who know how to behave in order to reduce risks within high-density crowds. We consider that competitive agents can imitate cooperative behavior when they are in close proximity to cooperators. We study the effects of the imitation of cooperative behavior on the duration and safety of evacuations, analyzing evacuation time and other quantities of interest for varying parameters such as the proportions of mixing, the aspect ratio of the room, and the parameters characterizing individual behaviors. Our main findings reveal that the addition of a relatively small number of cooperative agents into a crowd can reduce evacuation time and the density near the exit door, making the evacuation faster and safer despite an increase in the total number of agents. In particular, for long spaces such as corridors, a small number of added cooperative agents can significantly facilitate the evacuation process. We compare our results with those of systems without imitation and also study the general role of cooperation, providing further analysis for homogeneous populations. Our main conclusions emphasize the potential relevance of training people how to behave in high-density crowds. Sebastián Bouzat May 20, 2024 ==================== § INTRODUCTION The study of crowd dynamics has recently garnered considerable attention, both in academic circles and in practical applications. The challenge lies in understanding a complex system that requires the integration of physics and computational techniques, as well as social science principles related to decision-making processes and individual and collective behavior. Several studies <cit.> have investigated this area from a fundamental science perspective. Meanwhile, in terms of practical applications, mathematical models of pedestrian dynamics and simulation software have become essential for evaluating safety conditions in building and public facility design <cit.>. This is important not only for emergency situations, where preventing fatalities is paramount, but also for regular pedestrian flow and individual comfort. One of the most critical issues in crowd dynamics is the evacuation of enclosures through small exits or bottlenecks, which poses a significant risk to people and has been responsible for several tragic events. To understand the fundamental aspects of evacuations, numerous controlled experiments have been conducted <cit.>. However, due to the inherent difficulties and risks associated with experiments, mathematical models are crucial in unveiling the dynamics of evacuation processes. Numerical simulations enable researchers to analyze a wide range of scenarios and conditions.
Various modeling frameworks have been employed to analyze pedestrian dynamics, including fluid-dynamics models <cit.>, cellular automata <cit.>, agent-based models <cit.>, and particle-like descriptions of pedestrians, such as the social force model <cit.>. These models are vital in gaining a deeper understanding of evacuation dynamics and devising strategies to minimize the risks involved. Models that assume the same behavior for all the individuals in a crowd can provide useful information for understanding the generalities of pedestrian motion and evacuations. However, an attempt to describe the collective dynamics more precisely requires the consideration of the heterogeneity of people's behavior, as this can lead to significant deviations from the results obtained in homogeneous models <cit.>. A simple way of modeling heterogeneous populations is to reduce the wide variety of possible behaviors to only two categories <cit.>. One category corresponds to individuals that can be referred to as cooperative <cit.> or patient <cit.>, for instance, while the other category corresponds to pedestrians that may be called competitive, selfish or impatient. In general, a cooperative agent represents an individual who, despite being focused on leaving the enclosure, is aware of the presence of others, keeps calm, is not aggressive, and does not rush. In contrast, a competitive agent tends to move faster than normal and rush for empty spaces. It is important to note that different degrees of cooperativeness and competitiveness can be considered, depending on the specific scenario. For instance, competitive agents could simply correspond to hurried or disrespectful pedestrians in non-emergency situations, or they could represent people dominated by panic who push and run over others in an emergency. On the other hand, several studies have modeled the attitudes of the pedestrians considering a continuous variable instead of only two possible states, usually regarding panic contagion, both for cellular automata <cit.> and particle-like <cit.> descriptions. The phenomenon of panic emergence and contagion within congested crowds is considered one of the main causes of tragedies in pedestrian evacuations <cit.>. Panic tends to increase the velocity of pedestrians and can produce areas of high pressure leading to falls, injuries and fatalities. From the point of view of modeling, panic contagion can be thought of as the propagation of competitive behavior and has been the subject of many studies (see for instance <cit.> and references therein). On the other hand, some studies have analyzed the role of cooperative behaviors in evacuations <cit.>, with limited focus on propagation. As general trends, several studies suggest that while highly competitive behaviors tend to increase the risks and duration of evacuations, cooperative or patient attitudes lead to the opposite effects. In particular, it is a well-established fact that cooperative attitudes in homogeneous populations can reduce the evacuation time and risks <cit.>. In this context, we use numerical simulations to investigate how the propagation of cooperative attitudes, if achieved, may facilitate the evacuation through a narrow exit of a population of pedestrians who are in a considerable hurry to exit. It is worth noting that, while the spreading of panic (or of a selfish attitude) is a natural and explosive process related to the instinctive response of escaping from danger, the propagation of cooperative attitudes (i.e.
the keeping of calm and the imitation of gentle and patient attitudes) can involve rather anti-instinctive actions that may require training and awareness. Hence, when assuming that cooperative attitudes propagate, we will be implicitly assuming that the pedestrians have a significant (although not homogeneous) degree of education concerning the benefits of keeping calm and behaving in prescribed forms. As we will show, this assumption leads to notable advantages for the evacuation process in the simulations. Our ultimate goal is to highlight the relevance that an appropriate education on how to behave within high-density crowds may have. In our studies we consider the social force model to simulate the evacuations. In contrast to many agent based or cellular-automata models, the social force model explicitly includes the contact (mechanical) force between pedestrians, which is a desirable ingredient regarding the clogging dynamics through narrow exits. We consider a population of competitive pedestrians, to which we add a variable number of what we call cooperative agents. The competitive agents represent hurried individuals with parameters that may correspond to a situation close to the onset of panic <cit.>. However, we assume that they have a minimal degree of education or awareness, making them susceptible of being calmed, at least temporally, by the cooperative agents. The latter, on the other hand, represent better-trained pedestrians with a deeper degree of understanding of the relevance of remaining calm within congested crowds and with some capacity to influence their neighbors. Then, we assume that the competitive agents turn to adopt a cooperative behavior when they are close enough to a cooperative agent. In different scenarios, the cooperative agents are assumed to have a more patient or, alternatively, a more cautious attitude than competitive ones, as we will explain later. We analyze various quantities that characterize the evacuation process and its safety as functions of the number of added agents and of other relevant parameters. As we will show, depending on the type of scenario and conditions, even a limited spread of cooperativeness and a relatively small number of cooperative agents can lead to a considerable easing of the evacuation and to an increase in safety. It is worth remarking that our studies provide a numerical assessment of what could happen in populations with a degree of education that may need to be higher that the one we have in most societies nowadays, particularly in terms of imitation of cooperation. While there is evidence of the emergence and propagation of cooperative behaviors in emergency situations (see <cit.> and references therein), our simulations highlight the potential impact of education and awareness in promoting cooperative behaviors during pedestrian evacuations. Particularly, if such an education could enable rational (even anti-instinctive) individual decisions to prevail over instinctive reactions known to cause catastrophic events. The results could eventually be contrasted to controlled experiments with actual pedestrians that should follow the line of the experiments in Ref. <cit.> but including protocols for imitation. Further interdisciplinary studies are needed before suggesting concrete decisions on education. 
Beyond our main objective, the present work provides new insights and results concerning the dynamics of the social force model, both for inhomogeneous and homogeneous populations, that may be of interest for basic studies and applications of the model as well. The paper is organized as follows. In Section 2, we introduce the model for pedestrian dynamics and the rules for imitation. We also describe the quantities used to analyze the simulations. In Section 3, we present the results. We first discuss the evacuation of homogeneous populations, then the evacuation of mixed populations in rooms with variable geometries. After that, we consider mixed populations that combine different types of cooperative agents, and finally, for the sake of completeness, we analyze systems without imitation. In Section 4, we summarize our conclusions. § MODELS AND SIMULATIONS §.§ The Social force model We analyze the evacuation of a rectangular room with sides of length L and H, where L≤ H. The only exit has a size of D and is located at the center of one of the sides with length L (see the diagram in Fig.<ref>.a). Unless indicated otherwise, we fix D=1m, which corresponds to two pedestrian diameters. Meanwhile, L and H are varied in different simulations. To simulate the evacuation, we employ a standard version of the social force model <cit.> with radially-symmetric repulsive social forces. The dynamical equations for a population of N pedestrians with positions r⃗_i, where i=1… N, are given by dr⃗_i/dt = v⃗_i, m_i dv⃗_i/dt = F⃗^d_i+∑_j≠ iF⃗^s_ij+∑_j≠ iF⃗^f_ij. In this context, v⃗_i represents the velocity of pedestrian i, and m_i denotes their mass, which is set to 70 kg for all pedestrians in this study. Equation (<ref>) indicates that the motion of each pedestrian is influenced by three forces. First, F⃗^d_i=m_i(v_d,i ê_i-v⃗_i)/τ_i is the self-propulsion or desire force of agent i, which is proportional to the difference between the actual velocity v⃗_i and the desired velocity v_d,iê_i. Here, v_d,i is the desired speed and ê_i is the unit vector indicating the desired direction of motion, which points toward the nearest point within the exit. The parameter τ_i is the typical relaxation time needed for the agent to reach a fixed desired velocity in the absence of other forces. In this work, we set τ_i=0.5s for all i <cit.>. The second force term in Eq. (<ref>) corresponds to the repulsion felt by agent i due to the presence of agent j, given by F⃗^s_ij=A_i e^[(R_i+R_j)-‖r⃗_i-r⃗_j‖]/B_i n̂_ij. Here, R_i is the radius of agent i and n̂_ij is the unit vector pointing from r⃗_j to r⃗_i. The parameter A_i defines the repulsion felt by agent i when touching agent j (i.e., when they are separated by a distance R_i+R_j), while B_i defines the exponential decay of the force with the distance. For distances larger than R_i+R_j, F⃗^s_ij represents the social force, while when the two agents overlap it also accounts for the mechanical body force. The definition given in Eq.(<ref>), which uses a single continuous function to describe both the social force and the physical force in the normal direction, corresponds to the assumption used in <cit.>. Other versions of the social force model include a separate description of the physical force that may be relevant to reproduce particular experiments in detail. However, as the general phenomenology observed in the simulations of both types of models is essentially the same and our studies do not involve fine tuning of the parameters, here we choose the simplest version.
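To illustrate how these two terms are evaluated, the following minimal sketch (with our own function names and a placeholder value of A_i; the parameter values actually used are those listed in Table <ref>) computes the desire force and the radial social force for a pair of agents. The friction term introduced next would be added analogously.

import numpy as np

def desire_force(m, v, v_d, e_hat, tau=0.5):
    """Self-propulsion (desire) force m*(v_d*e_hat - v)/tau."""
    return m * (v_d * np.asarray(e_hat) - np.asarray(v)) / tau

def social_force(r_i, r_j, A_i=2000.0, B_i=0.08, R_i=0.25, R_j=0.25):
    """Radial repulsion A_i * exp(((R_i+R_j) - |r_i-r_j|)/B_i) * n_ij,
    with n_ij the unit vector pointing from agent j to agent i."""
    d = np.asarray(r_i, float) - np.asarray(r_j, float)
    dist = np.linalg.norm(d)
    return A_i * np.exp(((R_i + R_j) - dist) / B_i) * d / dist

# a 70 kg agent heading toward the exit along +x, and two nearly touching agents
print(desire_force(70.0, [0.5, 0.0], 3.0, [1.0, 0.0]))
print(social_force([0.0, 0.0], [0.55, 0.0]))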
The last term in Eq. (<ref>) corresponds to the tangential friction force exerted by agent j on agent i: F⃗^f_ij=κ Θ([R_i+R_j]-‖r⃗_i-r⃗_j‖)([R_i+R_j]-‖r⃗_i-r⃗_j‖)(Δv⃗_ij·t̂_ij) t̂_ij. Since Θ represents the Heaviside function, this force acts only if the two agents overlap. It points along the tangent versor t̂_ij=(-n_ij^(2),n_ij^(1)) and is proportional to the overlap, to the tangential component of the velocity difference Δv⃗_ij=v⃗_j-v⃗_i, and to the friction constant κ <cit.>. Following the proposals of previous works <cit.>, we set κ =2.4×10^5 kg m^-1 s^-1 and we consider B_i=0.08m and R_i=0.25m for all i <cit.>. In Fig. <ref> we show a diagram of the room and of the forces acting on an agent marked as cooperative. Both the social force F⃗^s and the friction force F⃗^f in Eq. 1 account not only for the interaction between pedestrians, but also for the interaction between the pedestrians and the walls. For agent i, the social force felt as a result of its proximity to each wall is described by Eq. 3, where R_j=0m and r⃗_j denotes the point of the wall closest to agent i <cit.>. Similarly, the friction force that the walls exert on agent i is given by Eq. 4, where R_j=0m, v⃗_j=0m/s and r⃗_j is the point of the wall closest to agent i <cit.>. Note that, according to Eq. <ref> with the standard parameters considered <cit.>, the initial acceleration of an isolated pedestrian starting from rest may be significantly high, of the order of the accelerations attained by high-performance athletes. However, this may only occur for a very short time window (∼ 0.2 s) after the initial condition of the simulation of a low-density system. In any other situation during simulations of evacuations, agents either move with non-vanishing velocities (of the order of v_d) or are not entirely isolated, thereby being subject to social and contact forces that decrease their accelerations. The simple linear model for the desired force is primarily intended to describe the relaxation of the velocity to the desired one for an agent within a crowd, rather than to reproduce in detail the whole process of acceleration of an isolated pedestrian. The dynamical equations are solved using the velocity Verlet algorithm with a time-step of 1ms <cit.>. The initial positions of the agents, regardless of their type, are set at random within the room, taking care to leave at least 0.5m (one pedestrian's width) of free space between any two agents and between agents and walls. For each parameter set considered we perform 50 realizations of the evacuation dynamics with different random initial conditions. §.§ Mixed populations and imitation of cooperative behavior As stated in the introduction, our study will investigate the evacuation dynamics of mixed populations composed of cooperative and competitive agents, the latter also referred to as egotistic agents. In particular, we will explore the possibility of imitation of cooperative behavior by competitive agents. We will denote the number of egotistic agents as N_e, and the number of cooperative agents as N_c. In the various examples that we analyze, we consider two different types or versions of cooperative agents. The first type corresponds to cooperative agents defined by considering a desired velocity (v_d,c) smaller than the one used for egotistic agents (v_d,e). Cooperation in this case can be associated with a patient attitude.
Note that a smaller value of desired velocity corresponds to a less hurried attitude, that would contribute to ordering the flux and decreasing the effective pressure. According to Eq. <ref>, in a situation of clogging in which the actual velocities of the agents are slower than the desired ones, the competitive pedestrians would tend to accelerate faster than the patient cooperative ones, and they would also exert a stronger pressure on the agents that are ahead of them. The second type of cooperative agents considered are referred to as cautious agents. These are defined by considering a value of the parameter A_i larger than the one used for competitive agents. Hence, in this case we set A_c>A_e while the remaining parameters are the same for both agents types. Note that the parameter A_i is used for defining the social force in Eq. <ref>. A larger value of A_i implies that the agent tries to maintain a greater distance from their neighbors, thereby avoiding physical contact and leaving free space for other agents to move. The dynamics of imitation assumes that competitive agents turn to adopt cooperative behavior if they are close enough to cooperative agents. In most of our studies, we simulate mixed populations with two types of agents. Namely, the competitive pedestrians and only one type of cooperative agents (either patient or cautious). In these cases, the implementation of the imitation dynamics is as follows: a critical distance for imitation denoted as r_c is introduced. If the distance between a competitive pedestrian and a cooperative agent is smaller than r_c, the competitive agent adopts the parameters of the cooperative agent. However, if the distance becomes larger than r_c, the competitive agent reverts back to behaving as competitive. Therefore, imitation is not permanent, but rather instantaneously conditioned by the proximity to a cooperative agent. Moreover, only the original cooperative agents can be imitated, as competitive agents who are temporarily behaving cooperatively cannot be imitated. The imitation dynamics is the same for the cases in which we consider patient or cautious cooperative agents. In the first case, the competitive pedestrians change their value of v_d when imitating cooperators, while in the second case they change A_i. At the end of the work we simulate evacuations of mixed populations with three types of agents, i.e. the two types of cooperative agents in addition to the competitive ones. In this case, the dynamics of imitations considers the same imitation radius r_c, and is defined through a majority game algorithm that will be explained later. The radius for imitation is set as r_c=1m throughout the work, as it is reasonable to expect that pedestrians within a distance of approximately 1m may communicate through speaking or physical contact. For example, cooperative agents could try to calm other pedestrians by speaking, giving instructions or touching their shoulders. This parameter is kept fixed throughout our study, as the results are expected to be robust to small changes in r_c. We also investigate the limiting case where r_c=0m, which corresponds to evacuations without imitation of cooperative behavior. The other limiting case, r_c=max{L,H}, would result in a fully-cooperative crowd with homogeneous behavior. In the diagram of Fig. <ref>.a we illustrate the three types of agents and the distance for imitation. Fig. 
<ref>.b shows an instantaneous state of a simulation of a mixed population with competitive agents, cooperative agents, and imitators. Meanwhile, Table <ref> summarizes the model parameters used throughout the work. As mentioned in the introduction, the most relevant interpretation of the imitation model is that the cooperative agents represent pedestrians with a higher degree of education and training on how to behave and how to calm other pedestrians compared to the competitive ones. The competitive pedestrians, on the other hand, represent hurried individuals who may be at the onset of panic <cit.>, but who have some level of knowledge about the benefits of remaining calm and are thus susceptible to being influenced, at least temporarily, by the cooperative agents. By considering that only the originally cooperative agents can be imitated, we are implicitly assuming that competitive agents can be temporarily induced to remain calm or not to hurry, but they cannot be induced to become able to calm other people. This is compatible with a lower degree of education, a lower conviction of the relevance of remaining calm, a weaker leadership attitude, or a higher degree of fear compared to cooperators. Note that competitive pedestrians are also unable to imitate the permanent temperance of the cooperative agents, as they resume behaving in a selfish way as soon as they get far enough from cooperative agents. On the other hand, an alternative interpretation of the model is that cooperative pedestrians may represent security agents wearing recognizable uniforms instead of ordinary individuals of the population with a deeper understanding of the appropriate behavior. Thus, they are clearly able to influence the behavior of their neighbors. It is important to note that, regardless of the interpretation considered for the cooperators, imitators are not the same as cooperators, and the conceptual significance of the model lies in the fact that the cooperative agents induce cooperativity in the competitive population, but the propagation is limited, as imitation is not permanent and imitators cannot be imitated. Our assumptions aim to demonstrate how a relatively modest degree of propagation of cooperativity can significantly improve evacuations. If these conditions were relaxed, for instance if competitive agents imitating cooperative behavior could also be imitated, then cooperativity would spread rapidly throughout the population, and most agents would start behaving like the cooperative agents. In such a case, the effects resulting from the propagation of cooperative behavior would be enhanced. It is worth noting that there are some differences between our assumptions for the cooperative agents and those concerning leader agents found in the literature. It is commonly assumed that leader agents possess superior knowledge of the room's geometry, including the location of the exit, or have a heightened capacity for exploration, as evidenced in studies like <cit.>. In contrast, our cooperative agents do not have any additional spatial information compared to the competitive agents. Instead, they possess a deeper understanding of the benefits of remaining calm, moving slowly, and reassuring others during evacuation. On the other hand, the competitive agents are equally aware of the exit's position, but may lack the understanding of the advantages of patience and cautiousness demonstrated by the cooperative agents.
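For clarity, the imitation rule just described can be written as the following per-time-step sketch (the function and variable names are ours, and implementation details such as the neighbor search are not specified in the text). It is shown for 'patient' cooperators acting on the desired speed; for 'cautious' cooperators the same rule is applied to A_i instead.

import numpy as np

def effective_desired_speeds(pos, is_coop, v_d_e, v_d_c, r_c=1.0):
    """Desired speeds at the current time step under imitation: a competitive
    agent uses v_d_c whenever at least one originally cooperative agent lies
    within r_c, and v_d_e otherwise; cooperators always keep v_d_c."""
    pos = np.asarray(pos, float)
    is_coop = np.asarray(is_coop, bool)
    v_d = np.where(is_coop, v_d_c, v_d_e).astype(float)
    coop_pos = pos[is_coop]
    for i in np.flatnonzero(~is_coop):
        if coop_pos.size and np.min(np.linalg.norm(coop_pos - pos[i], axis=1)) < r_c:
            v_d[i] = v_d_c          # temporary imitation, re-evaluated every step
    return v_d

# toy configuration: agent 0 is cooperative; agent 2 is close enough to imitate
pos = [[0.0, 0.0], [5.0, 0.0], [0.6, 0.3]]
print(effective_desired_speeds(pos, [True, False, False], v_d_e=3.0, v_d_c=1.0))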
§.§ Quantities of interest Here we introduce the quantities of interest that we calculate along our work for the different sets of simulations. As stated in the introduction, we are mainly interested in analyzing how the presence of cooperative agents and the propagation of the cooperative behavior affect the duration and safety of the evacuations. The duration of the evacuations can be directly quantified by the evacuation time, meanwhile, we will characterize the safety of the evacuation by measuring the density around the exit as we explain below. Other relevant metrics to compute are the survival function, that characterizes the intervals between escapes, and the fundamental diagram relating density with velocities. For each parameter set considered, we perform 50 realizations of the dynamics with different random initial positions of the agents. Then, for each realization of the dynamics, the evacuation time T is defined as the time it takes for 80% of the pedestrians to leave the room <cit.>. In our analysis, we focus on the median of T computed over the 50 realizations for each parameter set considered, and we also calculate the first and third quartiles to characterize the width of the distribution of T. For the sake of shortness, throughout the paper, the evacuation time T (sometimes also called exit time) refers to the median of T computed over 50 realizations. Another quantity of interest is the mean density around the door as a function of time, referred to as ρ(t). For each time t, ρ(t) is calculated as an average over realizations of the number of pedestrians located in a semicircle of radius 1 m around the center of the door, divided by the area of such semicircle. This is a measure of the crowding around the door and an indirect indication of the pressure felt by the pedestrians in that very zone. Relative small values of ρ(t) characterize safe evacuations while large values of ρ(t) indicate risks of injuries. We also analyze the survival function for the times between successive escapes denoted as P(Δ t > τ). This function, which depends on the variable τ, gives the probability that the time (Δ t) between the escapes of two pedestrians who leave the room successively is greater than τ <cit.>. For each set of parameters, the survival function is computed from all the Δ t that occurred during the 50 realizations of the evacuation. Finally, we perform the calculation of the fundamental diagram <cit.>, which illustrates the relationship between the average velocity of the agents in the exit zone and the corresponding density. To obtain this diagram, we employ the same region surrounding the door that was used to compute ρ(t). At every 100 simulation steps, we determine the mean speed and density of the agents within the area, and then we average the results across multiple realizations. As previously stated, we compute the local density in the region as the number of agents inside it divided by its surface area <cit.>. § RESULTS Our plan for the presentation of the results and analysis is as follows. First, in the next subsection we analyze the case of homogeneous populations. This is in order to fix some ideas on the role of the parameters A and v_d that we later use to define cooperative and competitive behaviors, and also to analyze the dependence on the geometry of the room. Then, we present our main studies of evacuations of mixed populations with imitation of cooperative behavior. 
We begin by analyzing the case of square rooms and, after that, we consider the case of long rooms. At the end of the section we present results for mixed systems without imitation, for the sake of comparison and completeness. §.§ Homogeneous populations: the faster-is-slower effect. We begin by analyzing the case of a homogeneous crowd in a rectangular room. Although the general phenomenology for this system is known <cit.>, here we present further analysis of the dependence on the parameters and highlight certain features as a starting point for our studies. In Figure <ref>.a, we show the dependence of the evacuation time on the desired velocity of the agents. The typical curve exhibiting the faster-is-slower effect is obtained <cit.>. Three zones can be distinguished. First, for v_d≲ 1, we have the faster-is-faster zone in which the evacuation time decreases with the desired velocity. This corresponds to relatively ordered evacuation processes with no relevant clogging events. Here, the social force is stronger than the desire force. On the other hand, for 1≲ v_d ≲ 4.5, we have the faster-is-slower zone in which the evacuation time grows with v_d due to the increase of clogging events that produce an intermittent flux <cit.>. This is a consequence of the fact that the desire force is stronger than the social force. Finally, for v_d >4.5 the evacuation time decreases with v_d again, but this is a region of high pressure for which wounded and fallen pedestrians may difficult the evacuation, so additional modeling may be needed <cit.>. If, as stated in the previous section, we associate relatively small values of v_d with cooperative behaviors and large values of v_d with competitive or egotistic ones, the curve in Fig.<ref>.a indicates that neither fully egotistic nor extremely cooperative attitudes are optimal for easing the evacuation in homogeneous systems. In contrast, intermediate behaviors provide lower evacuation times, as discussed in <cit.>. The same can be said about the parameter A. As mentioned before, large values of A can be associated with cooperative (cautious) behaviors while small values with selfish (incautious) ones, while Fig. <ref>.b shows that there is an intermediate value of this parameter that optimizes the evacuation process. The increase of the evacuation time for decreasing A at small values of A has thus a strong analogy with the faster-is-slower effect and could be named as incautious-is-slower. It is caused by the increase of clogging due to the reduction of the repulsion among agents. In general, highly competitive behaviors enhance clogging while extreme cooperative attitudes produce rather ordered but slow fluxes <cit.>. Figure <ref>. c shows a contour plot of the evacuation time as a function of v_d and A. The shape of the contours indicates that, in rather general situations, if one of the parameters is fixed, there is an optimal (non extreme) value of the other that minimizes the evacuation time. The red horizontal and vertical segments indicate the parameters scanned while plotting Figs. <ref>.a and <ref>.b, respectively. Figure <ref>.d shows the exit time as a function of v_d considering different door widths for the same parameters as in Fig. <ref>.a. It can be seen that the positive slope of the curve in the faster-is-slower zone decreases with D until the effect disappears. However, the region of v_d for which the faster-is-slower effect occurs grows slightly with D. 
Throughout this work, we focus on the case D=4 R with R=0.25m; nevertheless, the results in Fig. <ref>.d suggest that our main conclusions may be robust to relatively smooth changes of these assumptions provided the faster-is-slower effect exists. In Fig.<ref>.e, we show the exit time as a function of v_d for varying values of the room width (L), while keeping its area and the initial number of agents constant. As L decreases, the curve shifts to the right, indicating that ordered fluxes are obtained at larger velocities and higher pedestrian speeds are needed to observe the faster-is-slower regime. This is essentially due to two related effects. First, for small L, the rooms are narrow but large, so pedestrians coming from the back of the room require a large time to reach the door and contribute to the clogging around the exit. Second, the flux in narrow rooms is better directed towards the exit, and the angles of the collisions between agents may tend to be smaller. We will further discuss the role of the geometry of the room when analyzing the flux of mixed populations. §.§ Results for mixed populations with imitation of cooperative behavior Here we analyze the dynamics of mixed populations in which competitive agents adopt cooperative behavior when they are close enough to a cooperative agent, as explained in the model section. We consider a crowd consisting of N_e=250 competitive pedestrians and a variable number N_c of cooperative agents. The main objective is to investigate if the addition of cooperative agents to a competitive crowd can reduce the exit time of the original crowd. As mentioned, cooperative and egotistic agents will be distinguished by their values of the parameters v_d or A, which will be referred to as v_d,c (or A_c) for cooperative agents and v_d,e (or A_e) for egotistic agents. We begin by analyzing the case of patient cooperative pedestrians (v_d,c<v_d,e, with A_c=A_e). We choose v_d,e=3m/s, so that competitive pedestrians represent hurried people which are behaving as in the faster-is-slower regime described in Fig.<ref>.a. Meanwhile, we consider different values v_d,c to scan the region v_d,c<v_d,e form the faster-is-slower regime to the faster-is-faster one. In Figure <ref>, we show results for the evacuation time for the mixed system with imitation. Each panel corresponds to a particular value of the parameter v_d,c and exhibits three curves for T as a function of the number of added agents N_a. First, in orange, the curve for the mixed population of interest with fixed N_e=250 and varying N_c=N_a. Second, for the sake of comparison, we show the curve for the pure system of competitive agents with varying N_e=250+N_a (with N_c=0), and third, the curve for the pure system of cooperative agents with N_c=250+N_a (N_e=0). In addition, the horizontal dashed line indicates the exit time obtained for a population of N_e=250 with N_c=0. Note that, while the blue curve for pure competitive agents is the same in all the panels, the green curve for cooperative populations sinks with decreasing v_d,c from panels a to c, and then rises again in panel d for the smallest value of v_d,c analyzed. This could be expected taking into account the profile of the curve for the exit time vs. v_d shown in Fig. <ref>.a. To analyze the results for the mixed system, we begin by focusing on Fig. <ref>.a which considers the largest value of v_d,c studied, so that the difference between cooperative and egotistic agents is the smallest one. 
Starting from N_c=N_a=0, as the number of added cooperative agents grows, the evacuation time for the mixed system decreases to values smaller than that for the pure competitive population of N_e=250 (horizontal dashed line). Then, for N_a∼ 50 it begins to grow and surpasses the N_a=0 level. At large N_a the curve for the mixed system approaches that for the pure cooperative population. This latter fact could be expected since a large concentration of cooperators would make all the competitive agents behave as cooperative ones. Hence, it is worth remarking that, with imitation, a population of N_e=250 with a number N_c=N_a≲ 50 of added cooperative agents is evacuated faster than the pure competitive population with N_e=250 and N_c=0. The addition of cooperative agents reduces the evacuation time despite the increase in the total number of pedestrians. The maximal reduction obtained in the results of Fig. <ref>.a is of the order of 7% at N_a∼ 30. The effect is even more notable for smaller values of v_d,c, as those considered in Fig. <ref>.b and Fig. <ref>.c, where we observe maximal reductions of the order of 17% and 23% (indicated with vertical arrows), respectively. For an example of a simulation with the parameters of Fig. <ref>.c and N_c=75, see video 1 in <cit.>. Figure <ref>.a shows the time dependence of the number of escaped agents for the systems studied in Fig. <ref>.b for the case N_a=65 (in the region of maximal reduction). It can be seen that, after a short transient, the slope of the curve corresponding to the mixed system, i.e., the outflow flux, becomes considerably larger than that for the pure competitive one. This enhancement of the flux is naturally related to a decrease in the times between successive escapes, as shown in Fig. <ref>.b, where we plot the survival functions for the same systems. Clearly, the presence of cooperative pedestrians reduces clogging. The short transient regime mentioned before is related to the formation of a semicircular bulk of high density of pedestrians around the door (see Fig. <ref>.b and video 1 in <cit.>). The effects of clogging become relevant only once such a structure is established. The flat part of the curves in Fig. <ref>.b observed for τ≲ 0.1 corresponds to free flow of pedestrians, while the decreasing regime corresponds to clogs. It is worth noting that the survival functions for the pure egotistic systems with N_e=250 and N_e=315 are nearly the same, revealing the fact that the clogging dynamics is independent of the number of pedestrians for large enough N. This resembles the implication of the Beverloo law in vertical arrangements of granular media <cit.>. In Figs. <ref>.c and <ref>.d, we show the fundamental diagrams <cit.> for particular systems taken from Figures <ref>.b and <ref>.c, respectively. The values of N_c for mixed systems were chosen close to those producing the maximal reduction of T in each case. For densities in the range 2≲ρ≲ 5, the mixed systems show a considerable enhancement of the mean velocities compared to the pure competitive populations. The gains are larger for the system with the smallest value of the desired velocity of the cooperators considered (panel d). We can clearly link the enhancement of the velocity in the exit zone caused by mixing with the previously found decrease of clogging (panel b), increase of the flux (panel a), and decrease of evacuation time (Fig. <ref>). Interestingly, the results for mean velocity vs.
density for the pure competitive systems are nearly independent of N_e. This supports what was said in connection with the Beverloo law. The behavior of the curves for the smallest value of v_d,c studied (Fig. <ref>.d) is noteworthy. In this case, for large enough N_a, the curve for the mixed system is below that for the cooperative populations. This means that the evacuation time for the mixed system is not only smaller than for the pure competitive system with N_e=250, but also smaller than for the cooperative population with N_c=250+N_a. This is mainly because, for such slow cooperative agents, the initial transient regime is considerably longer for the pure cooperative system than for the mixed one. This effect can be observed in Fig. <ref>, where the evolution of the density of pedestrians around the door ρ(t) is compared for a pure cooperative system and a mixed system with the same total number of agents. The egotistic pedestrians reach the door much faster than the cooperative ones, and the high-density regime is formed more quickly in the mixed system than in the purely cooperative system. Meanwhile, the dynamics of both high-density regimes are rather similar, since a large fraction of the egotistic agents behave cooperatively in the mixed system. From another point of view, it is interesting to note that when comparing Fig. <ref>.c with Fig. <ref>.d, the shift of the curve for the pure cooperative populations is much larger than that of the curve for mixed systems. Clearly, the decrease in v_d,c affects a pure cooperative system more strongly than a mixed system with the same total number of agents, as could be expected. Despite the particular ordering of the curves in Fig. <ref>.d, at very large N_a, all the egotistic agents are expected to imitate the cooperative behavior and the curve of the mixed system would approach the pure cooperative one from below. In Fig. <ref>, the evolution of the density of pedestrians near the exit for mixed populations is shown. The value of the parameter v_d,c diminishes from Fig. <ref>.a to <ref>.d, in correspondence with the values considered in the panels of Fig. <ref>. Each panel analyzes various values of N_c, including the case of N_c=0. Let us focus on Figure <ref>.a, which considers the largest value of v_d,c studied. The results allow us to observe the change in the duration of the evacuation as N_c is varied, which is in agreement with the results in Fig. <ref>.a. Moreover, we can see that the addition of cooperative agents to the competitive population produces a decrease in the values of density observed along the evacuation. This decrease is on the order of 10% for the peak observed at the beginning of the evolution (t ∼ 10s). The effect is more notable for smaller values of v_d,c (panels b, c, and d), where we see a progressive splitting of the curves for the different values of N_c, and a reduction of the density on the order of 25% or even larger in some cases. This indicates that the addition of cooperative pedestrians with imitation not only speeds up the evacuation but also causes a decrease in the density near the exit, implying a smaller pressure on the pedestrians and a lower risk of accidents. We now turn to the case of cooperative agents defined through the value of A, i.e., what we have called cautious cooperative agents. We consider A_c > A_e with v_d,c = v_d,e. In Fig. <ref>.a, we show the results for the exit time, which are analogous to those obtained in Figs. <ref>.a - <ref>.c.
The reduction of the exit time for the mixed system with respect to the pure competitive system with N_e=250 is even larger (approximately 57%). However, here we are considering a relatively large value of A_c (A_c = 6000 N = 3A_e). For smaller values of A_c in the range of 2500 to 5000, the reduction of the exit time is of the same order of magnitude as that found in Fig. <ref> (results not shown). The particularly large value of A_c used in Fig. <ref>.a was selected in the search for behavior similar to that found in Fig. <ref>.d, where the curve for the mixed system falls below that for the pure cooperative one. However, such behavior was not found even for values of A_c as large as 10000 N. This supports the argument given before, which suggests that this particular phenomenon was essentially due to the small value of the velocity of the cooperators. Figure <ref>.b shows the evolution of the density near the exit for the mixed system for various values of N_c, considering the rest of the parameters as in Fig. <ref>.a. The results are analogous to those found in Fig. <ref>. Again, the presence of cooperators decreases not only the exit time but also the density of pedestrians near the door. In Figure <ref>, we summarize the results for the exit time of the systems with imitation of cooperative behavior analyzed in this section. Fig. <ref>.a considers the case of cooperators defined through v_d. A contour plot of the exit time as a function of N_c and v_d,c is shown. Meanwhile, Fig. <ref>.b shows the exit time as a function of N_c and A_c for the case of cooperators defined through the latter parameter. In both cases, we find a point of minimal exit time. Additionally, regions of relatively low exit time at moderate values of N_c can be clearly identified. §.§ The case of long rooms Here, we focus on evacuations of long rooms or corridors, where, as we will see, the dynamics may differ from that in a square room. We consider a long rectangular room with dimensions L=5m and H=180m, which has the same area as the square room considered in the previous sections. First, we study the dynamics of evacuation for pure cooperative, pure egotistic, and mixed populations with the same parameters as those considered in Fig. <ref>.c, so that the only thing that changes is the aspect ratio of the room. Figure <ref>.a shows the evacuation time for the three types of populations as a function of the number of agents added. It is worth noting that, in contrast to what happens for the square room, here the evacuation time for the pure cooperative population with N_c=250 (N_e=0) is larger than that for the pure competitive one with N_e=250 (N_c=0). This can also be seen in Fig. <ref>.e by comparing the blue and red spots denoting the parameters for cooperative and egotistic agents, respectively. (See the spots at v_d=1 m/s and v_d=3 m/s for different values of L.) In Fig. <ref>.a it is also notable that the evacuation time for the cooperative population is independent of N_c within the range studied. This latter effect is due to the fact that, in such a long room, the cooperators are in the faster-is-faster regime. Hence, they perform an essentially free walk with almost no physical contact and no clogs. The evacuation time is just the time it takes an agent located at the back of the room to reach the exit walking at v_d,c (i.e., of the order of H/v_d,c). However, the situation may change for a larger number of agents since, as the initial density increases, clogs would begin to occur and the evacuation time should grow.
An interesting point of the results in Fig. <ref>.a is that, although the cooperators have a larger evacuation time than the egotistic agents, the addition of a small number of cooperators to the competitive population considerably reduces the effective evacuation time of the latter. The maximal reduction, of more than 30%, is observed for N_a∼ 10. This optimal value of N_a is remarkably smaller than those found in the square room, and actually rather small compared to N_e=250. A noteworthy fact observed in the simulations (see video 2 in <cit.>) is that, due to the small width of the room, single isolated cooperative agents induce the formation of slow-moving clusters composed of imitators. Such clusters partially block the advance of fast egotistic pedestrians, decreasing their effective flux towards the exit and reducing clogging at the exit. This effect helps to lower the evacuation time. We say that clusters partially block the advance of egotistic agents because these fast agents are occasionally capable of overtaking the slow clusters and even of forming fast corridors at one side of a cluster. The curves for the evolution of the density near the exit depicted in Fig. <ref>.b help us to better understand these phenomena. It can be seen that when passing from N_c=0 to N_c=10, there is a strong decrease in clogging around the exit. Still, with such a low density of cooperators, a portion of the egotistic agents manage to arrive at the door rapidly and to escape with little difficulty thanks to the relatively low clogging, therefore optimizing the flux. If the number of cooperators is further increased (see curves for N_c=30, 60 in Fig. <ref>.b), clogging is reduced even more. However, in such situations, most of the competitive agents behave cooperatively and they advance slowly to the door, thus slowing down the evacuation process. The optimal value of N_c results from a competition between the reduction of clogging caused by the partial blockage of the advance of the fast-moving agents and the decrease in the number of fast-moving agents, both effects increasing with N_c. A last fact to note about the results in Fig. <ref>.b is that the maximum of the density around the door is reached at a much later time than in the square room (compare to Fig. <ref>.c). While in the latter case the initial rapid growth of the density (lasting for about 10 seconds) brings the system to the maximal density and is then followed by a slow decay, in the long room the initial growth is not that fast, and after it, the density continues to grow slowly until reaching a maximum at about 60s. Such a difference between the density profiles of the square room and those of the long room occurs even for the case N_c=0. Therefore, it should be mainly caused by the geometrical characteristics of the room. Note that in the long room, the growth of the density around the door is due to the quasi one-dimensional flow of agents that approach the exit essentially from a single direction. In contrast, in the square room, pedestrians arrive at the door from all directions (with angle of incidence ranging from 0 to π). Now we turn to the case in which the cooperative agents are distinguished by their values of A. The results for the long room are shown in Fig. <ref> and need to be compared with those in Fig. <ref>, as the systems being analyzed differ only in the aspect ratio of the room.
We observe that, similar to the square room, the exit time for pure cooperators in the long room is smaller than that for pure competitive agents. However, the two main effects of changing the geometry on the results for mixed populations are the same. Specifically, the optimal value of N_c in the long room (Fig. <ref>.a) is much smaller than in the square room (Fig. <ref>.a), and the maxima of the density profiles are reached at much later times compared to the square room (compare Fig. <ref>.a to Fig. <ref>.b). It should be mentioned, however, that in contrast to the case analyzed in Fig. <ref>, when cooperative agents are defined by the value of A, there is no formation of slow-moving clusters, and the reduction in evacuation time is due only to the lower density observed in the exit zone. §.§ Mixed populations with combined cooperative behaviors Up to now, we have considered separately the cases of patient (small v_d) and cautious (large A) cooperative agents, since each one is related to a particular model parameter. However, it is reasonable to think that cooperative pedestrians in real life would combine patience and cautiousness in diverse manners. To check the consistency of our results, we have performed two additional types of simulations in which these prototypical cooperative attitudes appear combined in different ways. On the one hand, we considered simulations of mixed populations that include the three types of agents together, i.e., competitive agents, patient cooperative agents, and cautious cooperative agents. On the other hand, we performed simulations of populations with a single type of cooperative agents, considering that each cooperative agent has both characteristics together (small v_d and large A). Except for the imitation mechanism, the assumptions for the three-species dynamics are the same as those for two species. In order to define the imitation mechanism for the three-species system we consider a majority game as follows. Let us assume that at a given time there are n_1 patient cooperative pedestrians and n_2 cautious cooperative pedestrians inside the imitation radius of a certain competitive agent. Then, if n_1>n_2 (n_2>n_1) the competitive pedestrian adopts the parameters of the patient (cautious) agents. Meanwhile, if n_1=n_2>0, there are two possibilities. First, if, before the equality begins to hold, the competitive agent was imitating a given type of cooperative pedestrian, then it keeps imitating the same type. Second, if the competitive agent was not imitating, it chooses at random the cooperative behavior to imitate. For instance, if at a given time we have n_1=1 and n_2=0 (thus the competitive agent is behaving as patient) and suddenly a cautious agent enters the circle so that n_1=n_2=1, then the competitive agent continues behaving as patient. On the other hand, if the competitive agent is not imitating because n_1=n_2=0 and, suddenly, two cooperative agents of different types enter the circle at the same time step, the competitive agent chooses at random between the two cooperative behaviors. We remark that a competitive pedestrian behaves as competitive at any time for which n_1=n_2=0 holds (a minimal sketch of this rule is given in the code below). For our simulations with three species we consider the same square room as in the previous sections with a number N_c1 of patient cooperative agents (parameters v_d,c1<v_d,e, A_c1=A_e), N_c2 cautious cooperative agents (parameters v_d,c2=v_d,e, A_c2>A_e) and N_e competitive pedestrians.
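The following Python fragment summarizes the majority-game rule just described; the function and variable names are ours and are only meant to illustrate the logic, not to reproduce the actual simulation code.

import random

def imitation_state(n1, n2, previous):
    # Majority-game imitation rule for a competitive agent.
    # n1, n2   : number of patient and cautious cooperators inside the
    #            imitation radius of the competitive agent.
    # previous : behavior adopted at the previous time step
    #            ('competitive', 'patient' or 'cautious').
    if n1 == 0 and n2 == 0:
        return 'competitive'                  # nobody to imitate
    if n1 > n2:
        return 'patient'
    if n2 > n1:
        return 'cautious'
    # Tie with n1 == n2 > 0: keep the previously imitated type, or choose
    # at random if the agent was not imitating anyone.
    if previous in ('patient', 'cautious'):
        return previous
    return random.choice(['patient', 'cautious'])

# Example from the text: n1 = 1, n2 = 0, then a cautious agent enters (n1 = n2 = 1)
state = imitation_state(1, 0, 'competitive')   # -> 'patient'
state = imitation_state(1, 1, state)           # -> 'patient' (persists)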
In all the cases we assume N_c1=N_c2 and vary the total number of cooperative agents N_c=N_c1+N_c2. In Fig. <ref>.a we show the evacuation time for the three-species system as a function of the number of added cooperative agents and compare it with the results for the pure competitive systems with the same total number of agents. The results for a pure cooperative system with the two types of cooperative agents are also shown for the sake of comparison. We see that the inclusion of a mix of the two types of cooperative agents into the competitive population leads to the same type of behavior observed in systems for which all the cooperative agents are of the same type (compare with Figs. <ref> for patient agents and Fig. <ref> for cautious ones). Figure <ref>.b shows the density near the exit as a function of time for various values of the total number of cooperators. Again, as in Figs. <ref> and <ref>.b, we find that the presence of cooperators reduces the density near the exit. Note that the minimum in the exit time for the three-species system found in Fig. <ref>.a is not as deep as in other previously shown examples, and the reduction of density is not as notable. This is because we have chosen a moderate degree of cooperation for both types of cooperative agents, as done, for instance, in Fig. <ref>.a for the case of patient cooperative behavior. Further decreasing v_d,c1 or increasing A_c2 would lead to more notable effects. The reason why we have chosen such relatively small degrees of patience and cautiousness for the cooperators in the previous example is that we want to compare with the case in which all the cooperators are at the same time patient and cautious, with the same parameters. For this we consider a system with a single type of cooperators with parameters v_d,c1<v_d,e and A_c2>A_e. The imitation mechanism in this case is the same as that considered before for a single species of cooperative agents, but now, when a competitive pedestrian imitates a cooperative agent, it modifies both its values of v_d and A. The results for the evacuation time and the density near the exit for this system are shown in Fig. <ref>.c and Fig. <ref>.d, respectively. As expected, the effects of reducing T and decreasing the density are enhanced with respect to the example analyzed in Figs. <ref>.a and <ref>.b, because now all the cooperative agents share the two characteristics. However, the main point here is that, by analyzing two different forms of combining the two types of cooperative behaviors considered, the results in this section show the consistency and robustness of our previous conclusions regarding the effect of imitation of cooperation. §.§ Mixed populations with no imitation The results in the previous sections for the social force model indicate that the evacuation of a competitive crowd can be eased by the addition of cooperative agents in the case that the cooperative behavior is imitated by ordinary (competitive) pedestrians. The maximal reduction of the median of the evacuation time found was of the order of 10% to 50%, depending on the parameters and geometries. Previous studies <cit.> that considered agent-based and cellular-automata models for analyzing the effect of the addition of cooperative agents into a crowd of competitive ones without imitation reported much smaller reductions of the exit time, of the order of 3% or smaller. For the sake of comparison and completeness, here we explore the case without imitation using the social force model. In Fig.
<ref>, we show results for the exit time and density for systems with the same parameters as those used in Fig. <ref>.c (and <ref>.c) but considering no imitation. At first glance, the results in Fig. <ref>.a indicate that the exit time for the mixed system with no imitation (orange curve) increases essentially monotonically with the number of added cooperative agents. Note, however, that due to the error associated with the calculation of the median, we cannot rule out the possible existence of a tiny reduction of order ∼ 1% in the median at small values of N_a (i.e., a minimum in the orange curve at a non-vanishing value of N_a). This would be in agreement with the findings in <cit.>. Unfortunately, verifying this would require a very large number of simulations (∼ 5000 realizations instead of 50 to reduce the error of the median by one order of magnitude). We find this unnecessary because the effect, if it exists, has limited statistical significance, since it would correspond to a decrease of ∼ 1% in the median while the interquartile distance (the width of the orange shaded area) is of the order of ∼ 10%. Still, we have some relevant things to mention. Regarding the results in Fig. <ref>.a, it is important to emphasize that, taking the system with N_e=250 and N_c=0 as the starting point, the addition of cooperative agents results in a much slower growth of the exit time compared to the addition of competitive pedestrians. In other words, at a constant total number of agents, it is always advantageous to have a fraction of them behaving as cooperative. This is in agreement with the findings in <cit.>. Regarding Fig. <ref>.b, we see that the density in the exit zone is essentially independent of the number of added agents (compare to Fig. <ref>.c), except at the end of the evacuation. Hence, no reduction of the pressure felt by the agents is expected. Again, the results are noisy and much more detailed calculations would be needed in order to determine whether there is a small reduction of density or not for small values of N_c, but in any case the effect would be of very limited significance compared to that found in the cases with imitation. As part of our studies, we have also analyzed mixed systems with no imitation with the cooperative behavior defined by the parameter A (with the parameters as in Fig. <ref>). The obtained curves (not shown) are analogous to those in Fig. <ref>. The conclusions are the same as for systems with cooperation defined through the parameter v_d. Moreover, we have studied mixed systems with no imitation in long rooms with parameters as in Figs. <ref> and <ref>, and arrived at the same main conclusion: the addition of cooperative agents that are not imitated by competitive pedestrians does not significantly reduce the exit time T of the original crowd within the social force model, but it produces a slower growth of T compared to adding competitive pedestrians. § FINAL REMARKS AND CONCLUSIONS By considering a standard version of the social force model, we have studied the effect of cooperation and imitation of cooperative behaviors in various scenarios of pedestrian evacuation. Our results show that the addition of a relatively small number of cooperative agents, whose behavior is imitated by the pedestrians of a crowd, can reduce the evacuation time of the crowd and the density near the exit door. This means faster and safer evacuations.
The results depend on the aspect ratio of the room, with notable effects observed in long rooms such as corridors when a small number of cooperative agents are added. This highlights the influence of the room shape and geometry, especially in the presence of populations with inhomogeneous behaviors. It is important to stress the fact that the reduction of the evacuation time obtained by adding cooperative agents in the system with imitation occurs despite the total number of agents being larger. This means that more pedestrians evacuate faster if some of them are cooperative. In the absence of imitation, the reduction is not significant within our model. However, the addition of cooperative agents produces an increase in the evacuation time that is smaller than that caused by increasing the number of competitive agents. Hence, at a constant total number of agents, the presence of cooperative agents is always desirable. In most of our studies, we have considered separately the cases of patient (small v_d) and cautious (large A) cooperative agents. However, as it is reasonable to think that cooperative pedestrians in real life would combine patience and cautiousness, we have verified that different combinations do not modify our main conclusions. For this we have performed simulations with three types of agents and other simulations in which cooperative agents are both patient and cautious. In all the cases we find that the addition of cooperative agents leads to faster evacuations with reduced density near the door. Our results highlight the relevance of cooperative attitudes in facilitating evacuations, particularly emphasizing the advantages of achieving the imitation of cooperative behaviors within a crowd. Interestingly, as mentioned in the introduction, there is evidence that cooperative attitudes emerge in dangerous or emergency situations (see Ref. <cit.> and references therein). It is difficult to determine how many potential tragedies in crowds have been averted by this spontaneous mechanism. Certainly not all of them, as tragedies still occur occasionally. Hence, it is reasonable to think that educating and training people on appropriate protocols for individual behavior within crowds may reinforce the spontaneous tendency to cooperate and may further contribute to avoiding tragedies. A straightforward interpretation of our results suggests that people should be taught to reduce haste and enhance cautiousness in emergency situations involving crowds, by trying, for instance, to advance more slowly and to leave more space between pedestrians, avoiding pushing. Moreover, they should also be trained to try to persuade their neighbors to adopt similar behaviors and to remember that if neighbors are behaving collaboratively in a dangerous situation, one should also strive to do the same. While this may hold true, such a perspective could be overly simplistic or naive. Things are probably not so straightforward. Since our conclusions are limited to the predictions of the social force model, further research involving numerical simulations with other models, controlled experiments, and analysis of real-life scenarios should be conducted before making concrete decisions on education. Additionally, considerations of individual and mass psychology should be taken into account. Protocols for individual behavior within congested crowds should be designed by interdisciplinary teams. Furthermore, the recommended protocol may depend on the particular type of scenario and society.
Nevertheless, even though many more studies are needed, our work aims to draw attention to the importance of designing protocols and educating people on how to behave within congested crowds. Our studies suggest that even if only a limited fraction of pedestrians remain calm and are able to induce calmness in others, the risks can be considerably reduced. It is interesting to note that protocols and instructions for different types of emergencies usually require individuals to behave in ways that go against their instincts. For instance, during depressurization events in planes, adults responsible for children must put on their own oxygen masks before assisting the kids. Most adults accustomed to flying are aware of this recommendation and would probably follow it in an emergency, even though it seems to run rather against the natural instincts of most mothers and fathers. Similarly, people who walk in wild areas and National Parks know that, in the case of being charged by a bear, they should lie on the ground and play dead if the bear is brown, but they should try to appear as big as possible and fight back if the bear is black. If people can learn and follow these instructions, we believe that they could also be trained on how to behave within a high-density enclosed crowd, even if the appropriate behavior may run counter to their instincts of running, pushing, and escaping. § ACKNOWLEDGEMENTS The authors would like to thank CNEA and CONICET, both Argentinean public institutions, for financial support.
[Helbing et al.(2005)] D. Helbing, L. Buzna, A. Johansson, and T. Werner, Transportation Science 39, 1 (2005).
[Coscia and Canavesio(2008)] V. Coscia and C. Canavesio, Mathematical Models and Methods in Applied Sciences 18, 1217 (2008).
[Moussaïd et al.(2010)] M. Moussaïd, N. Perozo, S. Garnier, D. Helbing, and G. Theraulaz, PLoS ONE 5, e10047 (2010).
[Bellomo et al.(2013)] N. Bellomo, A. Bellouquid, and D. Knopoff, Multiscale Modeling & Simulation 11, 943 (2013).
[Cristiani et al.(2014)] E. Cristiani, B. Piccoli, and A. Tosin, Multiscale Modeling of Pedestrian Dynamics, vol. 12 (Springer, 2014).
[Johansson et al.(2008)] A. Johansson, D. Helbing, H. Z. Al-Abideen, and S. Al-Bosta, Advances in Complex Systems 11, 497 (2008).
[Bellomo et al.(2016)] N. Bellomo, D. Clarke, L. Gibelli, P. Townsend, and B. Vreugdenhil, Physics of Life Reviews 18, 1 (2016).
[Huang and Guo(2008)] H.-J. Huang and R.-Y. Guo, Physical Review E 78, 021131 (2008).
[Heliövaara et al.(2012a)] S. Heliövaara, J.-M. Kuusinen, T. Rinne, T. Korhonen, and H.
Ehtamo, Safety Science 50, 221 (2012a).
[Guo et al.(2012)] R.-Y. Guo, H.-J. Huang, and S. Wong, Transportation Research Part B: Methodological 46, 669 (2012).
[Garcimartín et al.(2014)] A. Garcimartín, I. Zuriguel, M. Pastor, C. Martín-Gómez, and D. Parisi, Transportation Research Procedia 6, 760 (2014).
[Garcimartín et al.(2016)] A. Garcimartín, D. Parisi, J. Pastor, C. Martín-Gómez, and I. Zuriguel, Journal of Statistical Mechanics: Theory and Experiment 2016, 043402 (2016).
[Von Krüchten and Schadschneider(2017)] C. Von Krüchten and A. Schadschneider, Physica A: Statistical Mechanics and its Applications 475, 129 (2017).
[Nicolas et al.(2017)] A. Nicolas, S. Bouzat, and M. N. Kuperman, Transportation Research Part B: Methodological 99, 30 (2017). https://www.sciencedirect.com/science/article/pii/S0191261516307548
[Adrian et al.(2020)] J. Adrian, M. Boltes, S. Holl, A. Sieben, and A. Seyfried, Collective Dynamics 5, 189 (2020).
[Helbing(1998)] D. Helbing, Complex Systems 6 (1998).
[Bouzat and Kuperman(2014)] S. Bouzat and M. Kuperman, Physical Review E 89, 032806 (2014).
[Nicolas et al.(2016)] A. Nicolas, S. Bouzat, and M. N. Kuperman, Physical Review E 94, 022313 (2016).
[Dongme et al.(2017)] S. Dongme, Z. Wenyao, and W. Binghong, Journal of Statistical Mechanics: Theory and Experiment 2017, 043407 (2017).
[Heliövaara et al.(2012b)] S. Heliövaara, T. Korhonen, S. Hostikka, and H. Ehtamo, Building and Environment 48, 89 (2012b).
[Batty(1993)] M. Batty, Agent-based pedestrian modelling, Working Paper, CASA Working Papers (61), Centre for Advanced Spatial Analysis (UCL), London, UK.
[Dossetti et al.(2017)] V. Dossetti, S. Bouzat, and M. Kuperman, Physica A 479, 193 (2017).
[Helbing and Molnár(1995)] D. Helbing and P. Molnár, Physical Review E 51, 4282 (1995).
[Helbing et al.(2000)] D. Helbing, I. Farkas, and T. Vicsek, Nature 407, 487 (2000).
[Frank and Dorso(2011b)] G. Frank and C. Dorso, Physica A: Statistical Mechanics and its Applications 390, 2135 (2011b). https://www.sciencedirect.com/science/article/pii/S0378437111000793
[Cornes et al.(2021)] F. Cornes, G. Frank, and C.
Dorso, Physica A: Statistical Mechanics and its Applications 568, 125744 (2021).
[Nicolas et al.(2018)] A. Nicolas, S. Ibáñez, M. Kuperman, and S. Bouzat, Journal of Statistical Mechanics: Theory and Experiment 083403 (2018). https://doi.org/10.1088/1742-5468/aad6c0
[Cornes et al.(2019)] F. Cornes, G. Frank, and C. Dorso, Simulation Modelling Practice and Theory 95 (2019).
[Cao et al.(2021)] J. R. Cao, E. Lee, A. C. Y. Yuen, T. Chen, I. M. De Cachinho Cordeiro, M. Shi, W. Xie, and G. Yeoh, Simulation Modelling Practice and Theory 109, 102309 (2021).
[Cheng and Zheng(2018)] Y. Cheng and X. Zheng, Applied Mathematics and Computation 320, 485 (2018).
[Guan and Wang(2020)] J. Guan and K. Wang, Applied Mathematics and Computation 369, 124865 (2020).
[Zheng et al.(2020)] Z. Zheng, G. Zhu, Z. Sun, Z. Wang, and L. Li, IEEE Access 8, 195989 (2020).
[Xu et al.(2021)] S. Xu, J. Wang, J. Li, Y. Wang, and Z. Wang, Journal of Loss Prevention in the Process Industries 72, 104556 (2021).
[Elzie et al.(2016)] T. Elzie, E. Frydenlund, A. Collins, and R. Robinson, Transportation Research Record: Journal of the Transportation Research Board 2586, 1 (2016).
[Wang et al.(2022)] G.-N. Wang, T. Chen, J.-W. Chen, K. Deng, and R.-D. Wang, Chinese Physics B 31, 060402 (2022).
[Ma et al.(2022)] Y. Ma, X. Liu, F. Huo, and H. Li, Sustainability 14, 5278 (2022).
[Rozan et al.(2022)] E. Rozan, G. Frank, F. Cornes, I. Sticco, and C. Dorso, Physica A: Statistical Mechanics and its Applications 597, 127271 (2022).
[Cornes et al.(2023)] F. Cornes, G. Frank, and C. Dorso, Safety Science 166, 106218 (2023). https://www.sciencedirect.com/science/article/pii/S0925753523001601
[Sticco et al.(2020)] I. Sticco, G. Frank, F. Cornes, and C. Dorso, Safety Science 121, 42 (2020). https://www.sciencedirect.com/science/article/pii/S0925753519305661
[Swope et al.(1982)] W. Swope, H. Andersen, P. Berens, and K. Wilson, The Journal of Chemical Physics 76 (1982).
[Hou et al.(2014)] L. Hou, J.-G. Liu, X. Pan, and B.-H. Wang, Physica A: Statistical Mechanics and its Applications 400, 93 (2014).
[Lopez-Carmona and Paricio Garcia(2022)] M. A. Lopez-Carmona and A.
Paricio Garcia, Transportation Research Part C: Emerging Technologies 140, 103699 (2022). https://www.sciencedirect.com/science/article/pii/S0968090X22001383
[Pelechano and Badler(2006)] N. Pelechano and N. Badler, IEEE Computer Graphics and Applications 26, 80 (2006).
[Sticco et al.(2017)] I. Sticco, F. Cornes, G. Frank, and C. Dorso, Physical Review E 96 (2017).
[Vanumu et al.(2017)] L. D. Vanumu, K. R. Rao, and G. Tiwari, European Transport Research Review 9, 1 (2017).
[Seyfried et al.(2005)] A. Seyfried, B. Steffen, W. Klingsch, and M. Boltes, Journal of Statistical Mechanics: Theory and Experiment 10 (2005).
[Cao et al.(2017)] S. Cao, A. Seyfried, J. Zhang, S. Holl, and W. Song, Journal of Statistical Mechanics: Theory and Experiment 2017, 033404 (2017).
[Cornes et al.(2017)] F. Cornes, G. Frank, and C. Dorso, Physica A: Statistical Mechanics and its Applications 484 (2017).
[Supplemental Material] See Supplemental Material at <URL_will_be_inserted_by_publisher> for videos of the simulations.
[Beverloo et al.(1961)] W. Beverloo, H. Leniger, and J. van de Velde, Chemical Engineering Science 15, 260 (1961).
http://arxiv.org/abs/2405.09438v1
20240515152935
Perturbed Integrators Chain Control via Barrier Function Adaptation and Lyapunov Redesign
[ "Manuel A. Estrada", "Claudia A. Pérez-Pinacho", "Christopher D. Cruz-Ancona", "Leonid Fridman" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Manuel A. Estrada^a, Claudia A. Pérez-Pinacho^b, Christopher D. Cruz-Ancona^b (corresponding author), Leonid Fridman^a. [a] Facultad de Ingeniería, Universidad Nacional Autónoma de México (UNAM), Ciudad de México 04510, México (manuelestrada@comunidad.unam.mx; lfridman@unam.mx). [b] Tecnologico de Monterrey, School of Engineering and Sciences, Ave Eugenio Garza Sada 2501, Monterrey, Nuevo Leon, 64849 (christopher.cruz.ancona@tec.mx; caperezpinacho@tec.mx). Keywords: Lyapunov redesign; barrier function-based sliding mode control; adaptive control; reaching phase. Lyapunov redesign is a classical technique that uses a nominal control and its corresponding nominal Lyapunov function to design a discontinuous control such that it compensates the uncertainties and disturbances. In this paper, the idea of Lyapunov redesign is used to propose an adaptive time-varying gain controller to stabilize a class of perturbed chains of integrators with an unknown control coefficient. It is assumed that the upper bound of the perturbation exists but is unknown. A proportional navigation feedback type gain is used to drive the system's trajectories into a prescribed vicinity of the origin in a predefined time, measured using a quadratic Lyapunov function. Once this neighborhood is reached, a barrier function-based gain is used, ensuring that the system's trajectories never leave this neighborhood despite uncertainties and perturbations. Experimental validation of the proposed controller on the Furuta pendulum is presented. § INTRODUCTION Lyapunov redesign is a classical robustification technique in which an additional discontinuous control is designed to compensate matched uncertainties and external disturbances. Such an approach uses the knowledge of a nominal control and its corresponding Lyapunov function to design an appropriate switching manifold. The discontinuous control using such a manifold ensures the negativity of the derivative of the nominal Lyapunov function if the upper bound of the perturbations is known <cit.><cit.>. Since the upper bound of the perturbation is usually unknown or overestimated, adaptive gains in controllers are needed to stabilize the system <cit.><cit.>. In adaptive strategies for systems with an unknown upper bound of the perturbation and unknown control gain, three different issues should be solved simultaneously: (i) to confine the trajectories of the system into a prescribed neighborhood of the origin before a predefined time moment; (ii) to update the controller's gain so that, once a prescribed neighborhood is reached, the system's trajectories are confined in this region while the control signal compensates for the perturbation; (iii) to produce a bounded continuous control signal. Recently, explicit time-varying controllers have been proposed in <cit.> that can be used to solve problem (i) for the stabilization of a perturbed chain of integrators. The first approach of this type <cit.>, the so-called Proportional Navigation Feedback (PNF), ensures convergence in prescribed time with a proportional controller whose time-varying gain becomes unbounded as the solution tends to zero at the prescribed time; for this reason, an absolutely continuous solution might not be defined at the convergence time, but only in a generalized sense <cit.>. Consequently, adding measurement noise worsens the regulation accuracy at the prescribed time, see <cit.>.
To solve problem (i), the works <cit.> take advantage of time-varying gains to redesign fixed-time controllers, ensuring convergence to the origin before the predefined convergence time moment and maintaining the solutions therein for all future times. However, these controllers do not consider the case when the control coefficient is uncertain and, to maintain the solution at the origin, they require the knowledge of the upper bound of the perturbations. Another strategy is to drive the system's trajectories to an arbitrary neighborhood of the origin before the predefined time with time-varying feedback <cit.> and stay therein for all future times. The paper <cit.> uses a perturbation estimator in combination with a performance function-based controller to render a relative degree two system practically stabilizable within an assigned reaching time. The paper <cit.> considers parametric uncertainty; its effectiveness relies on an Artificial Neural Network, which can be computationally expensive, and this approach is developed only for systems of order two. Thus, solving problem (i) for perturbed systems of arbitrary relative degree is still an open problem. There are many sliding-mode approaches <cit.> to solve problem (ii), keeping the trajectories in a neighborhood of the sliding set and then fixing <cit.> or reducing <cit.> the gains of the controller. However, in such approaches the control law is discontinuous, which produces the chattering effect: high-frequency oscillations that may damage the system. Other approaches ensure a prescribed neighborhood of a sliding set via barrier functions <cit.>, or monitoring functions <cit.>. An important drawback of the previous works is that, due to the absence of knowledge of the upper bound of the perturbations, the reaching phase for arrival into a neighborhood of the origin of the state space cannot be predefined. In the paper <cit.>, it is shown that a barrier function-based adaptation of Lyapunov redesign is not straightforward because, to ensure property (i), it is necessary to show that the system's trajectories converge uniformly into an a priori predefined vicinity of the origin, for all the system's states, in a predefined time. Although convergence to a neighborhood of a sliding set can be ensured by using a uniform reaching phase strategy as in <cit.>, the uniform convergence from the point of arrival on the sliding set to the predefined vicinity of the origin cannot be ensured. On the other hand, <cit.> presents a Higher-Order Sliding Mode Control using barrier functions for systems of arbitrary order. However, the homogeneous topology induced by the Lyapunov function makes it difficult to prescribe the behavior of the system's states, and the time of convergence to the homogeneous vicinity is not predefined; see the simulation example in <cit.>. The authors of <cit.> present an adaptive strategy based on the combination of two different controllers, monitoring and barrier functions, providing a predefined upper bound of the settling time to a vicinity of the sliding surface with relative degree one. Moreover, in <cit.>, the uncertainties in the control coefficient are not considered. This paper introduces an adaptive control combining a time-varying feedback gain with a barrier function-based adaptive gain for a perturbed chain of integrators with unknown upper bounds of the perturbations and an unknown control coefficient, based on Lyapunov redesign <cit.>,<cit.>.
This paper contributes to solving (i)-(iii) with just one controller with a single switch in the controller's gain: * A PNF-type gain is used to reach a prescribed neighborhood of the origin in a predefined time (in the sense of <cit.>), measured in terms of a quadratic Lyapunov function. The singularity of the time-varying feedback control is avoided because the adaptive barrier function-based control is switched on when the system's solutions reach the interior of the prescribed neighborhood of the origin. * A barrier function-based gain is presented that allows the designer to prescribe, in terms of the same Lyapunov function, the desired neighborhood where the trajectories are kept. Once this neighborhood is reached, the provided gains ensure that the control signal follows the perturbations. * The usage of the topology generated by the same Lyapunov function simplifies the switch between the gains, allowing it to be made in a single step. Moreover, the control signal is continuous and bounded, except at the switching moment. * Finally, the proposed results are validated experimentally on the Furuta pendulum system. The organization of the paper is as follows: In Section <ref>, the problem statement is presented. Section <ref> contains the control law design and the main result of the paper. Numerical simulations are shown in Section <ref>. An experimental result on the Furuta pendulum is given in Section <ref>. The paper closes with some concluding remarks in Section <ref>. Technical proofs are given in Appendix <ref>. Notation. The symbols ‖·‖ and |·| denote the Euclidean norm of a vector and the absolute value of a scalar, respectively. The sign function on the real line is defined by sign(z) = z/|z| for z ≠ 0 and sign(0) = [-1, 1]. For a real square matrix G, λ_min(G) (resp. λ_max(G)) denotes the minimum (resp. maximum) eigenvalue of G. § PROBLEM STATEMENT We address the robust stabilization of the perturbed chain of integrators of the form ẋ(t) = J_nx(t) + e_n[ b(t)(1+δ_b(t)) u(t) + f(t) ], x(0) = x_0, where x(t)∈ℝ^n denotes the state of the system, u(t) ∈ℝ is the control input, (e_i)_1≤ i ≤ n denotes the canonical basis of ℝ^n, J_n denotes the n-th Jordan block (i.e., J_ne_i=e_i-1 for 1≤ i ≤ n with e_0=0), and b(t) is the known part of the control coefficient; without loss of generality, assume that b(t)>b>0. The perturbations f:ℝ_+→ℝ and δ_b:ℝ_+ →ℝ are measurable functions of t, where f(t) accounts for external perturbations to the system and δ_b(t) is the uncertainty in the control coefficient. In the sense of quadratic stabilizability <cit.>, without the presence of external perturbations, the stabilization problem of (<ref>) is readily solved by using a linear state feedback. In fact, exponential stabilization of the closed-loop system is ensured and its trajectories can be confined inside any neighborhood of the origin in finite time. We assume the following: For all t∈ℝ_+, there exist unknown constants M>0 and 0≤ε_b<1 such that | f(t) |≤ M and |δ_b(t)|≤ε_b. Notice, however, that under the above assumption, a linear control law for system (<ref>) can only ensure ultimate boundedness of the trajectories. Consider the control law u(t)= -1/2( 1/γ+1) b^-1(t)e_n^TPx(t), where γ∈(0,(1+√(5))/2] and P is the solution of the Algebraic Riccati Equation (ARE) J_n^TP + PJ_n -γ Pe_ne_n^TP + Q=0, Q>0. There exists a value γ^* such that the solution of (<ref>) exists for all γ∈(0, γ^*); see <cit.>. Suppose that Assumption <ref> is fulfilled.
Let μ := 2Mλ_max(P)/(θλ_min(Q)) for some 0<θ<1. Then there is T^*:=T^*(μ,x_0) such that for all t≥ T^*, the trajectories of system (<ref>) in closed loop with (<ref>) satisfy ‖ x(t)‖≤√(λ_min(P)/λ_max(P))μ. The stabilization problem solved in Proposition <ref> has two main disadvantages: * The ultimate bound depends on the unknown perturbation's upper bound. * The time to reach such an ultimate bound grows without bound with the initial conditions and the perturbation's upper bound. To remove the dependency of the ultimate bound on the perturbation's upper bound, one can redesign the control law such that it compensates the perturbations while ensuring exponential stabilization. Following the Lyapunov redesign methodology <cit.>, one can add a robustifying term to the control law in Proposition <ref>. Consider the redesigned control law u = -1/2( 1/γ+1)b(t)^-1e_n^TPx - ρ b(t)^-1sign(1/2( 1/γ+1) e_n^TPx), where P is the solution of the ARE (<ref>) and 0<γ≤(1+√(5))/2. The following result can be obtained. Suppose that Assumption <ref> is fulfilled. Let ρ≥ M/(1-ε_b). Then, given 0≤μ^*<‖ x_0 ‖, there is T̅^*=T̅^*(x_0,μ^*) such that the trajectories of the closed-loop system (<ref>)-(<ref>) satisfy ‖ x(t)‖≤μ^* for all t≥T̅^*. However, there are two main concerns that immediately arise: * The gain of the controller depends on the knowledge of the upper bounds of the perturbations, which are unknown. * The time to reach the predefined neighborhood of the origin grows without bound with the initial condition. In this paper, we adjust the redesigned controller in (<ref>) by introducing adaptive gains to deal with the lack of knowledge of the perturbations' upper bound and incorporating time-varying feedback gains to reach the predefined neighborhood within a predefined reaching time (a bounded finite time with a prescribed upper bound). Specifically, for any initial condition, this paper proposes a u(t) that drives the system's trajectories into a prescribed neighborhood of the origin in a predefined time, and after that, the control law ensures that the system's trajectories will never leave that neighborhood despite the presence of perturbations. § CONTROL DESIGN AND MAIN RESULT In this section, the control methodology is presented. The proposed approach consists of two phases: * First, a control strategy that drives every system trajectory to a prescribed vicinity of the origin before a predefined time, in the presence of perturbations with an unknown upper bound. * Second, a control gain that ensures the ultimate boundedness of the trajectories in the prescribed vicinity of the origin without requiring knowledge of the upper bound of the perturbations. It is important to note that in both phases the same quadratic Lyapunov function is used. This Lyapunov function, obtained via the ARE, simplifies the switching between both phases and allows one to deal with the uncertain control coefficient. §.§ Predefined time reaching phase This subsection provides a predefined reaching phase controller based on a combination of PNF and adaptive gains for the redesigned controller in (<ref>). Consider the reaching phase control law u(t) = -b(t)^-1[ 1/2( 1/γ+1) κ(t)^n e_n^TPΩ^-1(t)x(t) + Γ(t,x(t)) sign( 1/2( 1/γ+1) e_n^TPΩ^-1(t)x(t) ) ], where 0<γ≤(1+√(5))/2, P is the solution of the ARE (<ref>), Ω(t) = diag(1,κ(t), …, κ(t)^n-1) with κ(t) = 1/(α(T-t)) a continuous time-varying gain, and Γ(t,x) is the adaptive gain given by Γ̇(t,x) = | e_n^TP Ω^-1(t)x(t)|κ(t)^1-n.
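As a purely illustrative aside (not part of the original design procedure), the ARE (<ref>) can be solved with standard numerical tools by noting that it is the continuous-time algebraic Riccati equation with A = J_n, B = e_n and R = 1/γ. The following Python sketch, under the assumptions n = 2, Q = 𝕀, b(t) = 1 and illustrative values of γ, α and T, computes P and evaluates the reaching-phase control law above:

import numpy as np
from scipy.linalg import solve_continuous_are

n, gamma, alpha, T = 2, 0.45, 0.002, 1.0        # illustrative values only
Jn = np.diag(np.ones(n - 1), k=1)               # n-th Jordan block J_n
en = np.zeros((n, 1)); en[-1, 0] = 1.0          # last canonical basis vector e_n
Q = np.eye(n)

# J_n^T P + P J_n - gamma P e_n e_n^T P + Q = 0 is a CARE with B = e_n, R = 1/gamma
P = solve_continuous_are(Jn, en, Q, np.array([[1.0 / gamma]]))
residual = Jn.T @ P + P @ Jn - gamma * (P @ en) @ (en.T @ P) + Q
print(np.max(np.abs(residual)))                 # ~ machine precision

def reaching_gain(t, x, Gamma):
    # Reaching-phase control for t < T (kappa blows up as t -> T), with b(t) = 1.
    kappa = 1.0 / (alpha * (T - t))
    Omega_inv = np.diag(kappa ** (-np.arange(n, dtype=float)))  # inverse of diag(1, kappa, ..., kappa^{n-1})
    s = 0.5 * (1.0 / gamma + 1.0) * float(en.T @ P @ Omega_inv @ x)
    u = -(kappa ** n) * s - Gamma * np.sign(s)
    Gamma_dot = abs(float(en.T @ P @ Omega_inv @ x)) * kappa ** (1 - n)
    return u, Gamma_dot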
It is important to mention that V(t) = x^T(t)Px(t) is a Lyapunov function for the closed loop of the PNF and the nominal system (i.e., u = γ b(t)^-1κ(t)^n e_n^TPΩ^-1(t)x(t) and f(t) = 0). Then, following <cit.>, one can add a robustifying term to compensate the perturbation f(t). Additionally, the knowledge of the upper bound of the uncertain control coefficient is not needed for the design of the nominal PNF. For the closed loop (<ref>)-(<ref>), fix T>0 and ϵ>0. If Assumption <ref> is satisfied and α < λ_min(Q)/(2(n-1)λ_max(P)), then there exists a first time t̅(x_0)<T such that the solution of the closed-loop system (<ref>)-(<ref>) reaches the boundary of the set 𝒮_ϵ/2:= { x∈ℝ^n | V(x)≤ϵ/2 }. During the so-called reaching phase, the PNF-type law increases the proportional feedback gain until it reaches a value that allows compensation of the uncertainties and disturbances; consequently, the system's trajectories converge to 𝒮_ϵ/2 in a time t̅<T uniformly for any initial condition and size of the perturbations. Moreover, the control law is uniformly bounded, as shown in Appendix <ref>. Note that for PJ_n + J_n^TP -γ Pe_ne_n^TP = -Q, the ratio (<ref>) is maximized if Q = 𝕀; see <cit.>. §.§ Barrier function phase Once the trajectories are in a predefined neighborhood of the origin, this subsection provides a barrier function phase controller obtained by choosing a positive semidefinite barrier function as the gain in the robustifying part of the redesigned controller in (<ref>). Consider the control law u(t) = -b(t)^-1[ 1/2( 1/γ+1) e_n^TPx(t) + V(t)/(ϵ-V(t)) sign( 1/2( 1/γ+1) e_n^TPx(t) ) ], where V(t) = x^T(t)Px(t) with P=P^T>0 the solution of (<ref>). Suppose that Assumption <ref> is satisfied. Consider the control law (<ref>) in closed loop with system (<ref>). For any given ϵ>0 such that V(x(t_0)) ≤ϵ/2, the trajectories of the closed-loop system (<ref>)-(<ref>) satisfy V(x(t)) < ϵ for all t≥ t_0. Once the system's trajectories are within the barrier width, the barrier function-based law ensures that the system's solutions are confined to the set 𝒮_ϵ= { x∈ℝ^n | V(x)< ϵ} for all t>t_0 without the knowledge of the upper bound of the perturbation. Note that the function V(t) = x^TPx is in fact a known Lyapunov function for the nominal system (with only u = -1/2( 1/γ+1) b(t)^-1e_n^TPx(t) and f(t)=0); one can then add a robustifying term to compensate f(t) ≠ 0 in the sense of <cit.>. Although the idea is similar in <cit.>, the dependency of the barrier function on a homogeneous function V for a homogeneous controller introduces a notable complexity. The ultimate bound is given in terms of a homogeneous norm, which can be difficult to prescribe and compute. For the linear framework given in this work, prescribing a desired vicinity of the origin results in a straightforward and easy-to-compute methodology. By the boundedness V(t)< ϵ, there exists R>0, depending on ϵ, such that ‖ x ‖ < R. Then one can design ϵ so as to obtain a desired vicinity of the origin. Considering that R(ϵ) = √(ϵ/λ_min(P)), for a desired R>0, choosing ϵ < R^2λ_min(P) will ensure the prescribed region. §.§ Main Result The proposed control approach can be summarized as follows: u(t) = -b(t)^-1[κ(t)^n u_0(t) + Λ(t,x(t))sign(u_0(t)) ], where u_0(t)= 1/2( 1/γ+1) e_n^TPΩ^-1(t)x(t), the switching function κ(t) is given by κ(t) = 1/(α(T-t)) if t< T_1 and κ(t) = 1 if t ≥ T_1, and the adaptive gain Λ(t,x) is given by Λ(t,x) = Γ(t,x) if t< T_1 and Λ(t,x) = V/(ϵ - V) if t ≥ T_1. The time T_1>0 denotes the first moment such that x(T_1)∈𝒮_ϵ/2.
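Continuing the illustrative Python sketch above (same assumptions: b(t) = 1, n = 2, and the objects P, en, gamma, alpha, T and reaching_gain defined there), the single gain switch between the PNF and barrier-function phases can be written as follows; the barrier width eps and the online detection of T_1 are, again, assumptions made for illustration only.

eps = 0.5                 # prescribed barrier width (illustrative)
T1 = None                 # first time V(x) <= eps/2 is reached, detected online

# Design condition on alpha from the reaching-phase lemma (Q = I, n = 2 here)
assert alpha < np.linalg.eigvalsh(Q)[0] / (2 * (n - 1) * np.linalg.eigvalsh(P)[-1])

def switched_control(t, x, Gamma):
    # One evaluation of the switched controller: PNF reaching phase for t < T1,
    # barrier-function phase afterwards.  Returns (u, Gamma_dot).
    global T1
    V = float(x.T @ P @ x)
    if T1 is None and V <= eps / 2.0:
        T1 = t                               # the single switch of the gains
    if T1 is None:                           # reaching phase (t < T1 < T)
        return reaching_gain(t, x, Gamma)
    # Barrier-function phase: kappa = 1, Omega = I, Lambda = V / (eps - V)
    s = 0.5 * (1.0 / gamma + 1.0) * float(en.T @ P @ x)
    u = -s - (V / (eps - V)) * np.sign(s)
    return u, 0.0                            # Gamma is no longer updated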
Suppose that Assumption <ref> is satisfied for the closed-loop system (<ref>)-(<ref>) and let P be the solution of (<ref>). Given the predefined time T > 0 and a prescribed barrier width ϵ >0, if α is designed as in (<ref>), then the trajectories of the system reach the set 𝒮_ϵ/2 in a time T_1 < T for all x_0 ∈ℝ^n, and will be confined in 𝒮_ϵ for all t≥ T_1. The proof of this result proceeds in two steps: A) First, it is proved that there exists a first moment 0<T_1=t̅(x_0)<T such that x(t)∈∂𝒮_ϵ/2 with 𝒮_ϵ/2={ x(t)∈ℝ^n | V(x(t))≤ε/2 }, where T>0 is given a priori. B) Secondly, the ultimate boundedness of the trajectories in that region is achieved through the barrier function for all t > T_1, independently of the perturbation's upper bound. Items (A) and (B) are consequences of Lemmas <ref> and <ref>, respectively. □ Notice that the structure of (<ref>) is maintained for both stages and only the gains are switched. However, this switching only causes a discontinuity in the control signal. Therefore, the control law remains essentially bounded: it is bounded during the reaching phase (see <ref>) and during the barrier function phase, where Λ(t,x)=V/(ϵ-V)≤σ_1/(ϵ-σ_1)< ∞ (see the proof of Lemma <ref>), except at t=t̅. § SIMULATION EXAMPLE Consider the torsional spring-damper system presented in <cit.>: j(t) θ̈(t) + b θ̇(t) + kθ = v(t) +φ(t) where j(t) is a time-varying inertia, v(t) is an input torque and φ(t) is an external disturbance. In this section, the tracking problem of a desired signal θ_d will be addressed. For that purpose, define the tracking errors x_1 = θ(t) - θ_d(t) and x_2 = θ̇(t) - θ̇_d(t) and the nominal control input v(t)= u(t) + k(x_1 + θ_d) + b(x_2 + θ̇_d). Thus, the error dynamics has the following form: ẋ_1 = x_2 ẋ_2 = 1/j_m (1 + δ_j(t))u(t) + 1/j(t)φ(t) - θ̈_d(t) , where 1/j(t) = (1 + δ_j(t))/j_m , j_m is the nominal part of the inertia, and δ_j is the uncertainty in the inertia. For the simulation, the desired trajectory θ_d(t) is designed as in <cit.>, considering a polynomial trajectory of degree 5 from t = 0 to t = 10. The desired trajectory satisfies θ_d(0) = 0 and θ_d(10) = 10, with θ̇_d(0)=θ̇_d(10)=θ̈_d(0) = θ̈_d(10)=0 . Simulation parameters are presented in Table <ref>. Three scenarios are tested in order to show the feasibility of the proposed approach: * In the first one, the barrier width is set to ϵ = 1. * The second one shows the results with ϵ = 0.01. * Finally, the barrier width is decreased to ϵ = 1× 10^-4. All simulations were made with δ_j(t) = 3/4sign(sin(t)) and the external disturbance φ(t) = cos(5t). The sampling step is set to 1× 10^-3 using the Euler integration method and the initial conditions were set to x(0) = [ 5 0 ]^T. For the control law in Theorem <ref>, the matrix P is obtained as the solution of Equation (<ref>) with Q = 𝕀. The value of T used for the three scenarios is fixed, as presented in Table <ref>. §.§ First scenario In Figure <ref> the results of the control law with ϵ = 1 are presented. It can be seen that the tracking stays close to the desired angle θ_d, but does not match it exactly. As the upper bound of the settling time is designed as T = 2, the trajectory converges to a neighborhood of the tracked signal and remains nearby. §.§ Second scenario For ϵ = 0.01 in Figure <ref>, the angle θ is very close to the tracked desired signal. The control signal u(t) follows the negative of the perturbation. Moreover, the gain Λ(t,x) increases considerably with respect to the first scenario. This is an intuitive consequence of the vicinity being smaller.
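Before turning to the third scenario, the following sketch shows how a closed loop of this type can be reproduced numerically with the Euler scheme, the inertia uncertainty δ_j(t) = (3/4)sign(sin t), and a bounded matched perturbation, reusing the switched_control sketch given after the Main Result. The nominal inertia, α, and the lumped perturbation are placeholders, since the actual values are those of Table <ref> and of the quintic reference trajectory.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def simulate(T=2.0, eps=1.0, gamma=0.45, alpha=0.01, j_m=1.0, dt=1e-3, t_end=10.0):
    """Euler simulation of the perturbed double-integrator error dynamics (illustrative values)."""
    n = 2
    J_n = np.diag(np.ones(n - 1), 1)                       # chain-of-integrators matrix
    e_n = np.zeros((n, 1)); e_n[-1, 0] = 1.0
    # P solves P J_n + J_n' P - gamma P e_n e_n' P = -I, i.e. the CARE with Q = I and R = 1/gamma.
    P = solve_continuous_are(J_n, e_n, np.eye(n), np.array([[1.0 / gamma]]))
    x = np.array([5.0, 0.0])                               # initial tracking error
    Gamma, switched = 0.0, False
    for k in range(int(t_end / dt)):
        t = k * dt
        u, dGamma, switched = switched_control(t, x, Gamma, switched,
                                               P, 1.0 / j_m, gamma, alpha, T, eps)
        delta_j = 0.75 * np.sign(np.sin(t))                # inertia uncertainty delta_j(t)
        f = np.cos(5.0 * t)                                # lumped matched perturbation (placeholder scaling)
        dx = np.array([x[1], (1.0 + delta_j) / j_m * u + f])
        x = x + dt * dx                                    # explicit Euler step
        Gamma = Gamma + dt * dGamma
    return x, float(x @ P @ x)
```

The returned pair is the final tracking error and the final value of V(x), which can be compared against the prescribed barrier width ϵ.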
§.§ Third scenario Finally, Figure <ref> presents the tracking selecting ϵ = 1× 10^-4. This small neighborhood of the system's origin is achieved by means of a small barrier width. Nonetheless, a small choice of the barrier width may induce noise in the control signal (see Figure <ref>). This illustrates that, although ϵ can be as small as desired, the sampling step must be taken into account in the choice of ϵ. § EXPERIMENTAL RESULT Consider the Furuta pendulum presented in <cit.> as: ( m_pL_r^2 + 1/4m_pL_p^2cos^2(θ_p) + J_r)θ̈_r - ( 1/2m_pL_pL_rcos(θ_p))θ̈_p + (1/2m_pL_p^2sin(θ_p)cos(θ_p))θ̇_rθ̇_p + (1/2m_pL_pL_rsin(θ_p))θ̇_p^2 = τ -1/2m_pL_pL_rcos(θ_p)θ̈_r + (J_p + 1/4m_pL_p^2) θ̈_p - 1/4m_pL_p^2cos(θ_p)sin(θ_p) θ̇_r^2 -1/2m_pL_pgsin(θ_p) = 0 where θ_r and θ_p are the angles of the arm and the pendulum, respectively, the parameter m_p is the mass of the pendulum, L_p and L_r are the lengths of the pendulum and the arm, and J_r is the inertia of the arm. One can compute the torque-voltage (input to the motor V_m) conversion as: τ = η_gK_gη_mk_t(V_m - K_gk_mθ̇_r )/R_m . where η_g, K_g, η_m, k_t, k_m, and R_m are motor parameters. For z_1=θ_r, z_2=θ_p, z_3=θ̇_r and z_4 =θ̇_p, linearising around the point z^T = [ 0 0 0 0 ], with z the vector of the elements z_i, yields a system of the form ż = Az + Bτ with matrices A = 1/J_T[ 0 0 1 0; 0 0 0 1; 0 1/4m_pL_p^2L_rg 0 0; 0 1/2m_pL_pg(J_r + m_pL_r^2) 0 0 ], B = 1/J_T[ 0; 0; J_p + 1/4m_pL_p^2; 1/2m_pL_pL_r ] and J_T = J_pm_pL_r^2 + J_rJ_p + 1/4J_rm_pL_p^2. One can then find a matrix transformation W = [ B AB A^2B A^3B ]H_k, where H_k is a Hankel matrix built from the coefficients of the characteristic polynomial of A, such that the system is transformed into the controller form. Then, taking x = Wz, the system will be in the required form if τ = -τ_n x_3 + u, where τ_n = 0.1112 is a constant depending on the parameters of the plant. The parameters proposed for the experiment are as follows: P is the solution of (<ref>) with γ = 0.45 and Q=𝕀 and, to satisfy condition (<ref>), α = 0.002. With the integrator initial condition Γ(0) = 0 and a sampling step of 1 ms, the following two scenarios were tested on the Furuta pendulum system by Quanser Inc®. Note that the input of the pendulum system is saturated to u∈[-10,10] volts at the motor, with the relation given by (<ref>). §.§ For different upper bounds of the settling time In this subsection, the initial condition is the same for both cases, θ_p(0) = 0.5, the barrier width is set to ϵ = 10, and the upper bound of the settling time is selected as T=0.5 and T= 0.2. It can be seen from Figure <ref>, by means of the function V(t), that both settling-time bounds are respected and the trajectories reach the desired vicinity of the origin. §.§ For different prescribed neighborhoods Additionally, starting from θ_p(0) = 0.3 and setting T = 1, the next demonstration shows the capability of the approach to prescribe different values for the ϵ-vicinity of the origin of V =0. It can be seen from Figure <ref> that for ϵ = 0.5 the vicinity of the origin for the pendulum angle θ_p is smaller, near 5× 10^-3 rad, than for ϵ = 1, where it is around 0.015 rad. As in the previous subsection, it can be seen from Figure <ref> that the states converge to the prescribed ϵ-vicinity of the origin of V(t) = 0 before the predefined time T = 1, and the moment of the switching is represented by the yellow line, where V = ϵ/2. The video of the experiments can be found at the following link: <https://www.youtube.com/shorts/1mOcnIYYLLs>.
Note also that from Figure <ref>, it can be seen that the transitory behavior from the experiment with ϵ = 0.5 in the sense of the Lyapunov function is considerably better than the one with ϵ =1, where the overshoot in V(t) is near 6. § CONCLUSION A Lyapunov redesign methodology is proposed to confine a trajectory of the system modeled by the perturbed chain of integrators in the prescribed vicinity of origin in a predefined time, even for the case when the upper bound of perturbation exists but is unknown. The efficacy of the proposed approach is illustrated through the simulations for the spring-mass model and experiments with the Furuta pendulum. § ACKNOWLEDGMENTS The authors are grateful for the financial support of Programa de Becas Posdoctorales DGAPA-UNAM, CONACyT (Consejo Nacional de Ciencia y Tecnología): Project 282013, and CVU 833748; PAPIIT-UNAM (Programa de Apoyo a Proyectos de Investigación e Innovación Tecnológica): Project IN106622. plain § TECHNICAL PROOFS §.§ Proof of Proposition <ref> For the closed-loop system (<ref>)-(<ref>) of the form ẋ(t) = J_n x(t) -1/2( 1/γ+1)(1+δ_b(t))e_ne_n^TPx(t)+e_nf(t) , consider the Lyapunov function candidate V(x) = x(t)^TPx(t), its time derivative along the trajectories of (<ref>) yields to V̇(x) = x(t)^T[ J_n^TP + PJ_n]x(t) + 2f(t)e^TPx(t) -( 1/γ+1)(1+δ_b(t))x^T(t)Pe_ne_n^TP x(t) Using the bound |δ_b(t) |≤ε_b <1 from Assumption 1 and 3, the following upper bound of V̇(x) is found: V̇(x) ≤ x(t)^T[PJ_n+J_n^TP-( 1/γ+1)( 1-ε_b) Pe_ne_n^TP]x(t) +2x(t)^TPe_nf(t) . Let us analyze when the following will happen: ( 1/γ+1)( 1-ε_b) = γ , ⇒ ε_b = 1- γ^2/1+γ where we can parameterize the bound ε_b by choosing γ in order to compensate the effect of the uncertain coefficient. Note that this bound will hold as long as ε_b<1. On the other hand, if there does not exist the effect of the perturbation, γ can be chosen as the solution of 1+γ-γ^2=0, which is γ = 1+√(5)/2. If that is the case, then for γ∈(0,1+√(5)/2], the following upper bound is fulfilled V̇(x) ≤ x(t)^T(PJ_n+J_n^TP- γ Pe_ne_n^TP)x(t) +2x(t)^TPe_nf(t) =-x(t)^TQx(t)+2x(t)^TPe_nf(t) where we used the equality in (<ref>). By using the upper bound of | f(t) |≤ M in Assumption <ref>, the following bound can be obtained by applying the Cauchy-Schwarz and Rayleigh-Ritz inequalities V̇(x)≤-λ_min(Q)‖ x(t) ‖^2+2M| x(t)^TPe_n | Notice that the second term in the right-hand side of the above inequality can be further amplified by using the Cholesky decomposition <cit.>. There exist R such that P=RR^T and | x(t)^TRR^Te_n|≤ x(t)^TR R^Te_n. From the fact that ‖ x(t)^TR ‖ = V(x)^1/2≤λ_max^1/2(P)‖ x(t) ‖ and ‖ R^T e_n ‖≤ V(e_n)^1/2≤λ_max^1/2(P)e_n=λ_max^1/2(P). It follows that, V̇(x)≤ -λ_min(Q)‖ x(t) ‖^2+2Mλ_max(P)‖ x(t) ‖ Hence, the foregoing inequality can be rewritten as V̇(x) ≤ -(1-θ)λ_min(Q)‖ x(t)‖^2 , ∀ ‖ x(t)‖≥μ where 0<θ<1 and μ=2Mλ_max(P)/θλ_min(Q). It follows from Theorem 4.18 in <cit.> that there is T^*:=T^*(μ,x_0) such that the solution of the closed-loop system (<ref>) is uniformly bounded, for all t≥ T^* and an initial state x(0) with ultimate bound given by (<ref>). □ §.§ Proof of Proposition <ref> For the closed-loop system (<ref>)-(<ref>) of the form ẋ(t)=J_nx(t)-1/2( 1/γ+1)(1+δ_b(t))e_ne_n^TPx(t) -e_nρ(t,x)(1+δ_b(t))sign(1/2( 1/γ+1) e_n^TPx(t)) +e_nf(t) consider the Lyapunov function candidate V(x)=x(t)^TPx(t). 
Using similar arguments as in the proof Proposition <ref>, the time derivative of V(x) along the trajectories of system (<ref>) accepts the following upper bound V̇(x)≤ -λ_min(Q) ‖ x(t)‖^2+2x^T(t)Pe_nf(t)-2x^T(t)Pe_n ×[ρ(t,x)(1+δ_b(t))sign(1/2( 1/γ+1) e_n^TPx(t))] by using the upper bounds in Assumption <ref> and choosing ρ(t,x)≥ M/(1-ε_b), the above expression can be rewritten as follows V̇(x)≤ -λ_min(Q) ‖ x(t) ‖^2-(1-ε_b)ρ(t,x) | 2e_n^TPx(t)| + M | 2e_n^TPx(t)| ≤ -λ_min(Q) ‖ x(t) ‖^2 ≤ - λ_min(Q)/λ_max(P)V(x) where we applied the Cauchy-Schwarz and Rayleigh-Ritz inequalities. The above inequality accepts the solution V(x)≤exp(-λ_min(Q)λ_max(P)t)V(x_0), equivalently, ‖ x(t) ‖≤√(λ_max(P)λ_min(P))exp(-λ_min(Q)2λ_max(P)t)‖ x_0 ‖. Hence, the time T̅^* required for a trajectory starting at x_0 to reach the value 0<μ^*<‖ x_0 ‖ is given by T̅^*=2λ_max(P)λ_min(Q)ln( √(λ_max(P)λ_min(P))‖ x_0 ‖μ^*). §.§ Proof of Lemma <ref> If x(t_0)∈𝒮_ϵ/2, set t̅(x_0)=0 and the proof is done. Assume that this is not the case, i.e., x(t_0)∈𝒮_ϵ/2^c. By using the time varying coordinate transformation in <cit.>, x=Ω(t)y, Ω(t)=diag(1,κ(t),κ(t)^2,…,κ(t)^n-1) where κ(t)=1/α(T-t), α∈ (0,1), the perturbed chain of integrators (<ref>) yields to ẏ_i =α(1-i)κ(t)y_i+κ(t)y_i+1, 1≤ i ≤ n -1 ẏ_n =α(1-n)κ(t)y_n+κ(t)^1-n[b(t)(1+δ_b(t))u(t)+f(t) ] Now take the time scaling t(τ)=T(1-e^-ατ), by using the chain rule d/dty_idt/dτ, where dt/dτ=α Te^-ατ=:κ(τ)^-1, one has y^'_i =ẏ_iκ(τ)^-1=-α(i-1)y_i+y_i+1, 1≤ i ≤ n y^'_n =-α(n-1)y_n+κ(τ)^-n[ b(τ)(1+δ_b(τ)) u(τ)+f(τ)] or equivalently in compact form y^'(τ)=Ay(τ)+e_nκ(τ)^-n[b(τ)(1+δ_b(τ))u(τ)+f(τ) ] where A=J_n+α D_α, and D_α=diag(0, -1, …,-(n-2),-(n-1)). After applying the time scaling, the first part of the control law in (<ref>) is given by u(τ)= - 1/2( 1/γ+1) b(τ)^-1 e_n^TPy(τ)κ(τ)^n -Γ(τ)sign(1/2( 1/γ+1) e_n^TPy(τ)) with the adaptive gain Γ'(τ) = | e_n^TPy(τ) |κ(τ)^-n , the closed-loop system has the form: y'(τ)=Ay(τ)-1/2( 1/γ+1) (1+δ_b(τ))e_ne_n^TPy(τ) -κ(τ)^-n( Γ(τ)(1+δ_b(τ))sign(γ̃ e_n^TPy(τ))-f(τ) )e_n where γ̃ = 1/2( 1/γ+1). Consider the Lyapunov V̅(τ)=V_1(τ)+V_2(τ) where V_1(τ)=y(τ)Py(τ), V_2(τ)=(1-ε_b)(Γ(τ)-Γ^*)^2 and Γ^*=M/(1-ε_b). * For V_1(τ), its time derivative along the trajectories of (<ref>) yields to V_1'(τ) = y^T(τ)Py'(τ) + (y(τ)')^TPy(τ) ≤ y^T(τ)(PA+A^TP )y(τ)-2κ(τ)^-n[ Γ(τ)(1-ε_b). .-M]| e_n^TPy(τ)|-( 1/γ+1) (1+δ_b(τ))y(τ)^TPe_ne_n^TPy(τ) where we used the bounds for the perturbation terms in Assumption <ref>. Similarly than in the proof of Proposition <ref>, the following upper bound is satisfied V_1'(τ) ≤ y(τ)^T(PA+A^TP-γ Pe_ne_n^TP)y(τ) -2κ(τ)^-n(1-ε_b)(Γ(τ)-Γ^*)| e_n^TPy(τ)| . Finally, since 1-δ_b(τ)^2>0 by Assumption <ref> and taking the solution of (<ref>) V_1'(τ) ≤- λ_min(Q)‖ y(τ)‖^2 +2α y^TD_αPy -2κ(τ)^-n(1-ε_b)(Γ(τ)-Γ^*)| e_n^TPy(τ)| ≤ -λ_min(Q)‖ y(τ) ‖^2 + 2α(n-1)λ_max(P)‖ y(τ) ‖^2 -2κ(τ)^-n(1-ε_b)(Γ(τ)-Γ^*)| e_n^TPy(τ)| and therefore, if α is selected as (<ref>), then there exist α̅>0 such that V_1'(τ) ≤ -α̅‖ y(τ) ‖^2 -2κ(τ)^-n(1-ε_b)(Γ(τ)-Γ^*)| e_n^TPy(τ)| * For V_2(τ), its time derivative along the solution of (<ref>) yields to V_2'(τ) ≤ 2( 1-ε_b)Γ'(τ)[ Γ(τ) - Γ^*] =2κ(τ)^-n(1-ε_b)(Γ(τ)-Γ^*)| e_n^TPy(τ)| , From the upper bounds in (<ref>) and (<ref>), then V̅≤-α̅‖ y‖^2 =-W(y(τ)) ≤ 0. This implies that V̅(τ)≤V̅(0), hence y(τ) and Γ(τ) are bounded. Noticing that ∫_0^∞W(y(s))ds≤V̅_0-lim_τ→∞V̅(τ)<∞. 
On the other hand, from the continuity of W(y(τ)), and uniform continuity (this follows by noticing that the time derivative of y(τ) is bounded, using Assumption <ref>, each term in the time derivative (<ref>) is bounded) and boundedness of y(τ), then W(y(τ) is uniformly continuous. By using Barbalat's Lemma <cit.>, then W(y(τ))→ 0 as τ→∞, this implies that y(τ) reaches the set {‖ (y(τ))‖=0 } as τ grows unbounded. Hence, by continuity, there exists a time τ̅(y(0))<∞ such that the solution reaches any neighborhood of the zero. Equivalently, after using the inverse time scale transformation τ(t)=-1αln(1-tT), the solution x(t)=Ω(t)^-1y(t) reaches any neighborhood of zero in a time t̅(x_0)=lim_τ→τ̅T_c(1-e^-τ)<T. The proof is complete after setting the time t̅ as the time when x(t) reaches the boundary of the set 𝒮_ϵ/2. §.§ Boundedness of control law during reaching phase One can proceed to prove that the control law (<ref>) is bounded by noticing that it accepts the following upper bound | u(t) |≤b^-1[κ(t)^n| u_0(t) | + Γ(t)] Then, we proceed to prove that each summand on the right hand side in the above expression is bounded in the following two steps. Step 1. For the first summand, consider the closed-loop system in the scaled-time (<ref>), and the Lyapunov-like function V_1(τ)=y^T(τ)Py(τ). The time derivative of V_1(τ) along the trajectories of (<ref>) satisfies the upper bound (<ref>), which can be further overestimated as follows V_1^'≤ -α̅‖ y(τ) ‖^2-2κ(τ)^-n(1-ε_b)(Γ(τ)-Γ^*)| e_n^TPy(τ)| Since every symmetric matrix has a unique Cholesky decomposition <cit.>, i.e., P=RR^T>0, with R a lower triangular matrix with real and positive diagonal entries, hence | y^T(τ)Pe_n | = | y^T(τ)RR^Te_n |=|y̅^T(τ) B| where the vectors y̅(τ)=R^Ty(τ) and B=R^Te_n satisfy |y̅(τ)|=y^T(τ)Py(τ)^1/2 and | B |=(e_n^T P e_n)^1/2, respectively. By using the Cauchy-Schwarz inequality, the term | e_n^TPy(τ)| accepts the following upper bound | e_n^TPy(τ)|≤|y̅(τ) || B |≤| B | V^1/2(τ) Hence, using the above expression and Rayleigh-Ritz inequality, V_1^'(τ) ≤ -α̅‖ y(τ) ‖^2 + 2κ(τ)^-n(1-ε_b)Γ̅| e_n^TPy(τ)| ≤ - c_0V_1(τ) +2c_1exp(-c_2τ) V_1^1/2(τ) where c_0=α̅/λ_max(P), c_1=α^n T^n(1-ε_b) Γ̅| B|, and c_2=nα. The existence of Γ̅>0 follows from Γ(τ) being bounded, see the arguments in proof of Lemma <ref>. Following the same arguments used in the proof of Claim 11 in <cit.> it can be concluded that there exist c_5>0 such that ‖ y(τ)‖≤ c_5. Since κ(τ)^n≤κ(τ̅)^n<∞ is bounded, then the product b^-1κ(τ)^n| u_0(τ) |≤b^-1κ(τ̅)^n‖ e_n‖‖ P ‖‖ y(τ) ‖ is also bounded. Since the time scalings are invertible, for each τ∈ [0, τ̅] the same bound for y(τ) holds for x(t) in the interval t∈ [0,t̅]. Thus, one can find c_κ>0 such that b^-1κ(t)^n| u_0(t) |≤b^-1c_κ on t∈ [0,t̅]. Step 2. The boundedness of the second summand follows from the Proof of Lemma <ref>, it is shown that there exists a constant c_6>0 such that |Γ(t)-Γ^*| <c_6, thus the following bound holds |Γ(t)|≤|Γ(t) -Γ^*| + |Γ^*| <c_6 + Γ^*. The second summand can be bounded by b^-1Λ(t,x(t))≤b^-1(c_6 + Γ^*)<∞ on t∈ [0,t̅]. From Step 1 and Step 2, one can conclude that the control law accepts the upper bound | u(t) |≤b^-1(c_κ+c_6 + Γ^*)<∞ on t∈ [0,t̅]. 
§.§ Proof of Lemma <ref> Consider the Lyapunov function candidate V along the trajectories of the system ẋ = J_n x + e_nf(t) - e_n (1+δ_b(t))(γ̃ e_n^TPx+ k(V)sign( γ̃e_n^TPx)) , where γ̃ = 12( 1γ+1 ), yields V̇ = 2x^TP[J_n x + f(t) ] -2x^TP e_n [ (1+δ_b(t))(γ̃ e_n^TPx + k(V)sign(γ̃ e_n^TPx))] Similarly that in the proof of Proposition <ref>, one can prove that the time derivative of V along the trajectories of ẋ = J_n x - e_nγ̃ (1+δ_b(t)) e_n^TPx satisfies V̇≤ -λ_min(Q)‖ x(t)‖^2 with P solution of (<ref>). Defining Φ = M / 1 - ε_b and the barrier function k(V)= V/ϵ - V, the derivative of V has the form V̇≤ -λ_min(Q)‖ x(t) ‖^2 - 2(1 - ε_b) | x^TPe_n |[ V/ϵ - V - Φ] . Since ϵ-V >0 as V tends to ϵ, then | V/ϵ - V - Φ| < R_V:= 1 + Φ, then it is implied by (<ref>) that V̇≤ -β_1 V(x) + β_2V^1/2 with β_1 = λ_min(Q)/λ_max(P) and β_2 = 2[(1-ε_b)λ_max(P)R_V]/√(λ_min(P)), which implies that from the first time t_0, the trajectories of the system does not escape of the set 𝒮_ϵ/2 in a finite time. Let us define the variable σ_1 := Φ/ 1 + Φϵ from we will prove that ϵ > V(x) > σ_1. From the definition of function k(V), it follows that there exist k(V) > k(σ) = Φ, and it is implied that V̇≤ -‖ x(t)‖^2 - (1-ε_b)ξ|2 x(t)^TPe_n |≤ -‖ x(t)‖^2 where ξ = k(V) - Φ > 0. Then, the trajectories of the closed-loop system converge to the level set V≤σ_1. This means that for all t ≥ t_0 the inequality V ≤σ_1 is satisfied. By construction of (<ref>), it follows that V < ϵ for all t ≥ t_0, this completes the proof.
http://arxiv.org/abs/2405.08750v1
20240514163345
Lepton-Pair Scattering With an Off-Shell and an On-Shell Photon at Two Loops in Massless QED
[ "Simone Zoia" ]
hep-ph
[ "hep-ph", "hep-th" ]
CERN, Theoretical Physics Department, CH-1211 Geneva 23, Switzerland Lepton-Pair Scattering With an Off-Shell and an On-Shell Photon at Two Loops in Massless QED Simone Zoia May 20, 2024 ============================================================================================ I present the computation of the two-loop amplitudes for the scattering of a lepton pair with an off-shell and an on-shell photon in massless QED.<cit.> We apply modern techniques developed to tackle QCD amplitudes with many scales: we express the Feynman integrals in terms of a basis of special functions, and reconstruct the amplitudes from numerical finite-field evaluations. Our results complete the amplitude-level ingredients for the N3LO predictions of electron-muon scattering needed to meet the precision target of the future MUonE experiment. CERN-TH-2024-059 § MOTIVATION: THE MUON ANOMALOUS MAGNETIC MOMENT The muon anomalous magnetic moment a_μ is at the heart of one of the longest-standing tensions among experimental,<cit.> Standard Model (SM) data-driven,<cit.> and lattice QCD <cit.> results. The main source of error in SM prediction is the Hadronic Vacuum Polarisation (HVP) contribution, a^ HVP_μ. It is therefore crucial to obtain independent determinations of a^ HVP_μ. The planned MUonE experiment <cit.> aims to measure the hadronic running of the electromagnetic coupling using elastic electron-muon scattering (eμ→ eμ), which will enable a new and precise determination of a_μ^ HVP.<cit.> Full NNLO QED predictions for eμ→ eμ have been completed recently,<cit.> highlighting the need for N3LO corrections to achieve MUonE's precision goal. The dominant contribution comes from the electron-line corrections, i.e., corrections to the sub-process with the muon line stripped off (e→ e γ^*, see Fig. <ref>). The main missing ingredient is the RVV (real double-virtual) matrix element (e→ e γγ^* at two loops).<cit.> The two-loop amplitudes for e→ e γγ^* play a role also to compute a^ HVP_μ from the e^+ e^- →γ^* → hadrons data. In the energy scan measurements, these amplitudes are part of the RVV corrections at N3LO. In the radiative return measurements, instead, we measure a photon in addition to the hadrons, and thus they contribute to the VV corrections at NNLO. In these cases, however, the bottleneck is in the hadronic decay. The two-loop amplitudes for e→ e γγ^* in massless QED are fairly simple by today's state of the art, and could be extracted from known results.<cit.> We decided to perform a direct computation making use of the modern analytic techniques developed to tackle processes with many scales in QCD (see S. Badger's talk about tt̅ + jet production<cit.>). This allowed us to obtain compact analytic expressions which can be evaluated numerically efficiently, and gave us an opportunity to spread this technology to the community working on QED/EW amplitudes. As a proof of the relevance of this work, another direct computation followed shortly after ours.<cit.> § TACKLING ALGEBRAIC AND ANALYTIC COMPLEXITY IN THE SCATTERING AMPLITUDES We compute the two-loop helicity amplitudes A^(2) for the process 0 →ℓ(p_1) + ℓ̅(p_2) + γ(p_3) + γ^*(p_4) , where ℓ is an on-shell massless lepton and γ (γ^*) an on-shell (off-shell) photon. The external momenta p_i satisfy the on-shell conditions p_i^2 = 0 for i=1,2,3.
We choose the independent invariants as s⃗ = ( s_12, s_23, s_4 ), with s_i… j = (p_i+…+p_j)^2. We use dimensional regularisation with D=4-2ϵ spacetime dimensions. The starting point are the Feynman diagrams, which we generate with  <cit.> (see Fig. <ref> for an example). We then manipulate them to write each helicity amplitude as a linear combination of scalar Feynman integrals of the families shown in Fig. <ref>.[The other needed two-loop families are expressed in terms of those in Fig. <ref> and products of one-loop integrals.] Their coefficients are rational functions of s⃗ and ϵ. By solving the integration-by-parts (IBP) relations,<cit.> we express the scalar integrals in terms of fewer, linearly independent `master' integrals (MIs). We generate the IBP relations using ,<cit.> and solve them with .<cit.> Finally, we expand around ϵ=0 up to order ϵ^0, obtaining A^(2)(s⃗; ϵ) = ∑_k=-4^0 ∑_i ϵ^k r_ki(s⃗) F_i(s⃗) , where r_ki (F_i) are rational (special) functions. This separation reflects the two kinds of complexity which plague this type of analytic computation: the algebraic and the analytic complexity. While the rational functions are simple from the analytic point of view, their sheer size — the algebraic complexity — can make them difficult to handle. This is particularly problematic in the intermediate stages: while the input (the Feynman diagrams) and the final result (the amplitude) are comparatively compact, the intermediate expressions may swell to the point of jeopardising the computation. Two simple yet important insights allow us to sidestep this bottleneck. First, we are not interested in knowing the intermediate results analytically. Second, the expression swell affects only the symbolic computations, and can be deflated by instead evaluating the rational functions numerically. Leveraging these ideas, Peraro <cit.> pioneered a new approach, which replaces the symbolic manipulations of rational functions with numerical evaluations over finite fields, i.e., integers modulo a prime number. Modular arithmetic allows us to avoid both the intrinsic loss of accuracy of floating-point numbers, and the computationally expensive arbitrary-precision arithmetic of exact rational numbers. The analytic expression of the final result is then obtained from sufficiently many numerical evaluations by means of functional reconstruction algorithms. We perform all the rational operations on rational functions numerically over finite fields, and only reconstruct the rational coefficients of the final result in Eq. (<ref>), thus sidestepping the intermediate expression swell. The entire workflow is implemented within FiniteFlow, adopting a number of optimisations to simplify the functional reconstruction.<cit.> The special functions arise from the loop integrations, and are instead characterised by their analytic complexity. Here, the difficulty is also conceptual other than computational. Even just understanding which class of special functions appears in a given amplitude may be challenging, let alone evaluating and manipulating them. Moreover, special functions satisfy functional relations. A toy example is log (x y) = log x + log y (x,y>0) for the logarithm. A representation of an amplitude in terms of special functions may thus be redundant. This leads to a more complicated expression and to an unstable evaluation, as the cancellations only occur numerically. In the most complicated cases, we lack the mathematical technology to overcome these issues. 
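As a toy illustration of this workflow (not the actual FiniteFlow implementation), the sketch below treats a rational coefficient as a black box that can only be evaluated modulo a prime, and then recovers its exact value by rational reconstruction; the prime, the sample function, and all names are invented for the example.

```python
from fractions import Fraction

P = 2_147_483_647  # the Mersenne prime 2^31 - 1; production codes use several 63-bit primes and CRT

def black_box_coeff(s12: int, s23: int, p: int = P) -> int:
    """A 'rational function of the invariants' evaluated purely in Z_p (stand-in for an IBP coefficient)."""
    num = (3 * s12 - 2 * s23) % p
    den = (s12 + s23) % p
    return num * pow(den, -1, p) % p        # modular inverse replaces exact division

def rational_reconstruction(a: int, p: int = P) -> Fraction:
    """Recover n/d with n = a*d mod p and |n|, |d| below sqrt(p/2), via the extended Euclidean algorithm."""
    r0, r1, s0, s1 = p, a % p, 0, 1
    while 2 * r1 * r1 > p:
        q = r0 // r1
        r0, r1, s0, s1 = r1, r0 - q * r1, s1, s0 - q * s1
    return Fraction(r1, s1)

# One numerical sample suffices to recover this (tiny) coefficient exactly:
image = black_box_coeff(91, 17)
print(rational_reconstruction(image) == Fraction(3 * 91 - 2 * 17, 91 + 17))   # True
```

In the real calculation the black box is the whole chain from diagrams through IBP reduction, and the functional dependence on the invariants is recovered from many such evaluations rather than a single one.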
In the best understood cases, the most successful approach is the method of differential equations (DEs) <cit.> in the canonical form.<cit.> The idea is to choose the MIs g⃗ such that they satisfy a system of partial DEs of the form dg⃗(s⃗; ϵ) = ϵ ∑_i a_i dlog(W_i(s⃗)) ·g⃗(s⃗;ϵ) , where the a_i's are constant rational matrices, and the lettersW_i(s⃗) are algebraic functions of the invariants s⃗ (e.g., s_12, s_23, s_12+s_23 etc.). Finding such MIs is in general difficult, but the integral families in Fig. <ref> are simple by today's standards, and indeed they had been computed already long ago.<cit.> We instead focus on how to solve Eq. (<ref>) so as to overcome the issues above. Also in this case, we follow an approach which has proven successful in the context of multi-scale QCD amplitudes.<cit.> By making use of a mathematical technique called Chen iterated integral, we construct a basis of algebraically independent and irreducible special functions in which all MIs (including all their required crossings) can be expressed. This guarantees that we have a unique and compact representation of the amplitudes, and that we evaluate the smallest possible number of special functions. In order to evaluate them numerically, we express the basis functions in terms of multiple polylogarithms (MPLs). We then apply a mathematical technique called Lyndon decomposition to further optimise the expressions. In summary, a high degree of optimisation is unlocked by a deep mathematical understanding of the relevant special functions. § CONCLUSIONS AND OUTLOOK We computed the two-loop amplitudes for 0→ℓℓ̅γγ^* in massless QED in terms of a basis of MPLs that are suitable for efficient evaluation. We used finite-field techniques to sidestep the intermediate expression swell. Our results are ready for deployment in phenomenology, and indeed have already been implemented in  <cit.> to provide the RVV (electron-line) corrections to e μ→ e μ. These results pave the way to N3LO predictions for the future MUonE experiment. § ACKNOWLEDGMENTS I am grateful to Simon Badger, Jakub Kryś and Ryan Moodie for collaboration on this project. This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772099), and Horizon research and innovation programme (grant agreement No. 101105486). § REFERENCES 99Badger:2023xtl S. Badger, J. Kryś, R. Moodie and S. Zoia. Lepton-pair scattering with an off-shell and an on-shell photon at two loops in massless QED. JHEP, 11:041, 2023. Muong-2:2021ojo B. Abi et al. Measurement of the Positive Muon Anomalous Magnetic Moment to 0.46 ppm. Phys. Rev. Lett., 126(14):141801, 2021. Aoyama:2020ynm T. Aoyama et al. The anomalous magnetic moment of the muon in the Standard Model. Phys. Rept., 887:1–166, 2020. Borsanyi:2020mff Sz. Borsanyi et al. Leading hadronic contribution to the muon magnetic moment from lattice QCD. Nature, 593(7857):51–55, 2021. Abbiendi:2016xup G. Abbiendi et al. Measuring the leading hadronic contribution to the muon g-2 via μ e scattering. Eur. Phys. J. C, 77(3):139, 2017. CarloniCalame:2015obs C. Carloni Calame, M. Passera, L. Trentadue and G. Venanzoni.A new approach to evaluate the leading hadronic corrections to the muon g-2. Phys. Lett. B, 746:325–329, 2015. Broggio:2022htr A. Broggio et al. Muon-electron scattering at NNLO. JHEP, 01:112, 2023. Fael:2023zqr M. Fael, F. Lange, K. Schönwald and M. Steinhauser. Massive three-loop form factors: Anomaly contribution. Phys. 
Rev. D, 107(9):094017, 2023. Garland:2001tf L. W. Garland, T. Gehrmann, E. W. Nigel Glover, A. Koukoutsakis and E. Remiddi. The Two loop QCD matrix element for e^+ e^- → 3 jets. Nucl. Phys. B, 627:107–188, 2002. Badger:2024fgb S. Badger, M. Becchetti, N. Giraudo and S. Zoia. Two-loop integrals for t t̅ +jet production at hadron colliders in the leading colour approximation. 4 2024. Fadin:2023phc V. S. Fadin and R. N. Lee. Two-loop radiative corrections to e^+ e^- →γγ^* cross section. JHEP, 11:148, 2023. Nogueira:1991ex P. Nogueira.Automatic Feynman graph generation. J. Comput. Phys., 105:279–289, 1993. Chetyrkin:1981qh K. G. Chetyrkin and F. V. Tkachov. Integration by Parts: The Algorithm to Calculate beta Functions in 4 Loops. Nucl. Phys., B192:159–204, 1981. Laporta:2001dd S. Laporta. High precision calculation of multiloop Feynman integrals by difference equations. Int. J. Mod. Phys., A15:5087–5159, 2000. Lee:2012cn R. N. Lee. Presenting LiteRed: a tool for the Loop InTEgrals REDuction. 12 2012. Peraro:2019svx T. Peraro.FiniteFlow: multivariate functional reconstruction using finite fields and dataflow graphs. JHEP, 07:031, 2019. Peraro:2016wsq T. Peraro. Scattering amplitudes over finite fields and multivariate functional reconstruction. JHEP, 12:030, 2016. Zoia:2023nup S. Zoia. Two-loop five-particle scattering amplitudes. PoS, RADCOR2023:032, 2024. KOTIKOV1991158 A.V. Kotikov. Differential equations method. new technique for massive feynman diagram calculation. Physics Letters B, 254(1):158–164, 1991. Gehrmann:1999as T. Gehrmann and E. Remiddi. Differential equations for two loop four point functions. Nucl. Phys. B, 580:485–518, 2000. Bern:1993kr Z. Bern, L. J. Dixon and D. A. Kosower. Dimensionally regulated pentagon integrals. Nucl. Phys. B, 412:751–816, 1994. Henn:2013pwa J. M. Henn. Multiloop integrals in dimensional regularization made simple. Phys. Rev. Lett., 110:251601, 2013. gehrmann:2000zt T. Gehrmann and E. Remiddi. Two loop master integrals for γ^* → 3 jets: The Planar topologies. Nucl. Phys. B, 601:248–286, 2001. gehrmann:2001ck T. Gehrmann and E. Remiddi. Two loop master integrals for γ^* → 3 jets: The Nonplanar topologies. Nucl. Phys. B, 601:287–317, 2001. Chicherin:2020oor D. Chicherin and V. Sotnikov. Pentagon Functions for Scattering of Five Massless Particles. JHEP, 20:167, 2020. Chicherin:2021dyp D. Chicherin, V. Sotnikov and S. Zoia. Pentagon functions for one-mass planar scattering amplitudes. JHEP, 01:096, 2022. Abreu:2023rco S. Abreu, D. Chicherin, H. Ita, B. Page, V. Sotnikov, W. Tschernow and S. Zoia.All Two-Loop Feynman Integrals for Five-Point One-Mass Scattering. Phys. Rev. Lett., 132:141601, 2024. Banerjee:2020rww P. Banerjee, T. Engel, A. Signer and Y. Ulrich. QED at NNLO with McMule. SciPost Phys., 9:027, 2020.
http://arxiv.org/abs/2405.08730v1
20240514161629
A Generalized Difference-in-Differences Estimator for Unbiased Estimation of Desired Estimands from Staggered Adoption and Stepped-Wedge Settings
[ "Lee Kennedy-Shaffer" ]
stat.ME
[ "stat.ME", "stat.AP", "62" ]
A Generalized Difference-in-Differences Estimator for Unbiased Estimation of Desired Estimands from Staggered Adoption and Stepped-Wedge Settings Lee Kennedy-Shaffer Department of Mathematics and Statistics, Vassar College May 20, 2024 ================================================================================================================================================= Staggered treatment adoption arises in the evaluation of policy impact and implementation in a variety of settings. This occurs in both randomized stepped-wedge trials and non-randomized quasi-experimental designs using causal inference methods based on difference-in-differences analysis. In both settings, it is crucial to carefully consider the target estimand and possible treatment effect heterogeneities in order to estimate the effect without bias and in an interpretable fashion. This paper proposes a novel non-parametric approach to this estimation for either setting. By constructing an estimator using two-by-two difference-in-difference comparisons as building blocks with arbitrary weights, the investigator can select weights to target the desired estimand in an unbiased manner under assumed treatment effect homogeneity, and minimize the variance under an assumed working covariance structure. This provides desirable bias properties with a relatively small sacrifice in variance and power by using the comparisons efficiently. The method is demonstrated on toy examples to show the process, as well as in the re-analysis of a stepped wedge trial on the impact of novel tuberculosis diagnostic tools. A full algorithm with R code is provided to implement this method. The proposed method allows for high flexibility and clear targeting of desired effects, providing one solution to the bias-variance-generalizability tradeoff. Keywords: Bias-variance tradeoff, causal inference, cluster-randomized trials, non-parametric estimation, permutation inference, quasi-experiments § INTRODUCTION Staggered treatment adoption occurs in a wide variety of settings, including both observational and randomized contexts. In observational and quasi-experimental studies, panel data methods are commonly used to analyze the effect of a policy implementation or an exogenous shock. Stepped-wedge cluster-randomized trials have also become a common approach for the analysis of health, education, or other social policies, especially with a phased or gradual implementation. Across settings, the analysis of the data generated from staggered treatment settings requires careful consideration of the desired estimand, assumptions about the treatment effect, consideration of heterogeneity across units, time periods, and treatment regimens, and appropriate consideration of variance and correlation. This complexity has led to the development of a wide array of methods, commonly found in the econometrics literature surrounding panel data, difference-in-differences (DID), and staggered treatment adoption and in the biostatistics literature surrounding stepped-wedge trials (SWTs). Key developments in SWTs include approaches to interpret the targeted estimand (see, e.g., <cit.>), design-based considerations (see, e.g., <cit.> and <cit.>), robust inference (see, e.g., <cit.>, <cit.>, <cit.>), and a variety of analytic approaches, discussed below.
While development in both areas is still very much ongoing, recent reviews of the staggered adoption literature can be found in, among others, <cit.>, <cit.>, <cit.>, and <cit.>. What these new developments generally share is an acknowledgment of the need for careful selection of the target estimand and modeling of the treatment effect in order to estimate that estimand without bias. In the stepped-wedge setting, this is often done using a parametric or semi-parametric model for the treatment effect that can account for heterogeneity as in <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. It can, however, also be done using appropriate weighting of non-parametric estimators like those proposed by <cit.> and <cit.>. In the staggered treatment adoption setting, approaches to address the bias that can arise in two-way fixed-effects models (see, e.g., <cit.> and <cit.>) have included adjusting the interpretation based on the identifying assumptions as in <cit.>, specifying “dynamic” (time-varying) treatment effects as in <cit.>, and restricting or combining with specified weights effect estimators targeting specific group-time effects as in <cit.> and <cit.>. <cit.> highlight the relationship between these approaches, noting a similarity between the approaches of weighting specific effects in the DID and SWT settings. This paper introduces a generalized estimator that can target a variety of estimands specified by investigators under different sets of assumptions about treatment effect homogeneity. This class of estimators is constructed by taking weighted averages of simple two-by-two DID estimators, and finding the minimum-variance such weighting that targets the desired estimand. This allows one approach to be used across different assumption settings and different target estimands; it further allows different a priori variance assumptions to be considered and, if the assumption is correct, minimizes the corresponding variance of the estimator. This approach also eases interpretability by identifying the weights on the estimators as well as the weights on the individual observations and permits sensitivity analysis by assessing the estimand and relative efficiency under different settings. Section <ref> of this paper describes the approach to constructing the estimator and its properties, with the full algorithm in Section <ref>. Section <ref> provides a toy example to illuminate the algorithm in a few simple settings. Section <ref> re-analyzes data from a SWT by <cit.> using this method. Section <ref> comments on the advantages and limitations of the method, as well as future areas for research. § METHODS §.§ Notation Consider a setting with J periods (j=1,…,J) and N units of analysis (i=1,…,N), which may be clusters or individual study units, for a total of NJ observations. Denote by Y_ij the outcome (or average outcome) in unit i in period j and by X_ij the indicator of whether unit i was treated/exposed in period j (these are used interchangeably without, with a preference for “treated” for simplicity). Let T_i = min{j: X_ij = 1} be the time period in which unit i was first treated. The staggered adoption assumption is made that once treated, a unit remains treated for the duration of the study. Note that <cit.> consider more general settings with identifying assumptions that do not require staggered adoption; generalizations of the following methods to those settings may be feasible but are not considered here. 
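To fix the notation, the following minimal sketch (in Python, purely for illustration; it is not the implementation that accompanies this paper) builds a toy staggered-adoption panel. The adoption times and the outcome-generating model are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

N, J = 4, 4
T_adopt = np.array([2, 2, 3, 4])    # T_i: first treated period, units ordered earliest to latest adopter
periods = np.arange(1, J + 1)
X = (periods[None, :] >= T_adopt[:, None]).astype(int)   # X_ij = 1 once unit i is treated (staggered adoption)

# Illustrative outcomes: unit effect + common period trend + a constant additive treatment effect + noise.
theta_true = 0.5
Y = (rng.normal(size=(N, 1)) + 0.2 * periods[None, :]
     + theta_true * X + rng.normal(scale=0.1, size=(N, J)))
y = Y.reshape(-1)                   # outcomes stacked by unit and then period, the ordering used below
```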
Note also that there may be multiple units in the same sequence, referring to a pattern of treatment (under the staggered adoption assumption, this is fully specified by T_i). Without loss of generality, the unit indices are ordered from the earliest treatment adopter to the latest. That is, if i < i', then T_i ≤ T_i'. Units with the same adoption time may be ordered in any way, as long as the ordering remains consistent throughout the notation. §.§ Two-by-two DID estimators For every pair of units i ≠ i' and pair of periods j ≠ j', there is a two-by-two DID estimator: D_i,i',j,j' = ( Y_ij' - Y_ij) - ( Y_i' j' - Y_i' j) Without loss of generality, consider only such estimators where i < i' and j < j'. Note that swapping the order of either pair of indices multiplies the estimator by -1. There are a total of N2J2 = N(N-1)J(J-1)/2 such estimators. These two-by-two DID estimators can be partitioned into six mutually exclusive categories based on the treatment pattern of both units. Under the staggered treatment assumption, since j < j', each cluster can either be untreated in both periods, treated in both periods, or untreated in period j and treated in period j'. Moreover, since i < i', T_i ≤ T_i' so if unit i is untreated in any period, unit i' must be as well. This leaves six possible combinations of treatment patterns, summarized in Table <ref>, along with the number in each category. §.§ Expected value of two-by-two DID estimators This paper focuses on additive treatment effects, and define the effect of treatment in period j on unit i, which first adopted treatment in period T_i, by θ_i,T_i,j: θ_i,T_i,j = E[ Y_ij(1) ] - E[Y_ij(0)], where Y_ij(1) is the potential outcome for unit i in period j if the unit is first treated in period T_i ≤ j and Y_ij(0) is the potential outcome for unit i in period j if the unit is first treated after period j or never treated. Note that multiplicative treatment effects can be considered by using a log transformation on the outcome. If the assumptions hold on the multiplicative scale, the estimator can then be computed using the log-transformed outcomes as the Y_ij values. This has been discussed in staggered adoption settings (see, e.g., <cit.> and <cit.>) and in SWT settings (see, e.g., <cit.> and <cit.>). To ensure consistency, the following no-spillover and no-anticipation assumptions are also needed; versions of these are common in both the staggered adoption and SWT literatures. For all i,j, Y_ij(1) and Y_ij(0) are independent of X_i'j' for any i' ≠ i and any j'. For all j < T_i, Y_ij = Y_ij(0). From the definitions, then, the expectation of each two-by-two DID estimator can be found as follows. Under Assumptions <ref> and <ref>, for all i,i',j,j': E [ D_i,i',j,j'] = (E[Y_ij'(0)] - E[Y_ij(0)] ) - (E[Y_i'j'(0)] - E[Y_i'j(0)] ) + ( θ_i,T_i,j' X_ij' - θ_i,T_i,j X_ij) - ( θ_i',T_i',j' X_i'j' - θ_i',T_i',j X_i'j). This expectation can be simplified under a further parallel trends assumption. For any i ≠ i' and j < j', E[Y_ij'(0)] - E[Y_ij(0)] = E[Y_i'j'(0)] - E[Y_i'j(0)]. Note that this aligns with one version of the parallel (or common) trends assumption used in the staggered adoption literature (see, e.g., <cit.>, p. 435), but other forms and statements of it exist as well. This can be implied by randomization of the treatment adoption sequences, as in the SWT case, as that would imply exchangeability of outcomes which necessarily implies parallel trends. 
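Continuing the toy panel above, the sketch below enumerates every two-by-two building block D_{i,i',j,j'} with i<i' and j<j', and records the treatment pattern of the two units in the two periods, which is what places each estimator into one of the six categories of Table <ref>; the helper name is illustrative.

```python
from itertools import combinations

def two_by_two_dids(Y, T_adopt):
    """List all (i, i', j, j', D, pattern) with 1-based indices and i < i', j < j'."""
    N, J = Y.shape
    rows = []
    for i, i2 in combinations(range(N), 2):       # i < i', so T_i <= T_{i'} by the ordering of units
        for j, j2 in combinations(range(J), 2):   # j < j'
            D = (Y[i, j2] - Y[i, j]) - (Y[i2, j2] - Y[i2, j])
            pattern = tuple(int(period >= T_adopt[unit])       # (X_ij, X_ij', X_i'j, X_i'j')
                            for unit in (i, i2) for period in (j + 1, j2 + 1))
            rows.append((i + 1, i2 + 1, j + 1, j2 + 1, D, pattern))
    return rows

d_list = two_by_two_dids(Y, T_adopt)
assert len(d_list) == (N * (N - 1) // 2) * (J * (J - 1) // 2)   # N(N-1)/2 * J(J-1)/2 estimators
```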
Under Assumptions <ref>–<ref>, for all i,i',j,j': E [ D_i,i',j,j'] = ( θ_i,T_i,j' X_ij' - θ_i,T_i,j X_ij) - ( θ_i',T_i',j' X_i'j' - θ_i',T_i',j X_i'j). Further simplification of this expectation depends on assumptions about the heterogeneity of the treatment effects by unit, time-on-treatment (i.e., exposure time), and period (i.e., calendar time). Many assumption settings are feasible; this paper considers the following five and defines a simplified treatment effect notation for each. No additional assumptions: the treatment effect may vary by unit, exposure time, and/or calendar time. The treatment effect is denoted by θ^(1)_i,j,a in period j for unit i in its ath treated period. That is, θ_i,T_i,j = θ^(1)_i,j,j-T_i+1 for all i,j where X_ij = 1. Unit homogeneity: the treatment effect may vary by exposure time and/or calendar time, but does not vary by units in the same treatment adoption sequence. The treatment effect is denoted by θ^(2)_j,a in period j for a unit in its ath treated period. That is, θ_i,T_i,j = θ^(2)_j,j-T_i+1 for all i,j where X_ij = 1. Unit and calendar-time homogeneity: the treatment effect may vary only by exposure time. The treatment effect is denoted by θ^(3)_a for a unit in its ath treated period. That is, θ_i,T_i,j = θ^(3)_j-T_i+1 for all i,j where X_ij = 1. Unit and exposure-time homogeneity: the treatment effect may vary only by calendar time. The treatment effect is denoted by θ^(4)_j in period j. That is, θ_i,T_i,j = θ^(4)_j for all i,j where X_ij = 1. Full homogeneity: the treatment effect does not vary by unit, exposure time, or calendar time. The treatment effect is denoted by θ^(5). That is, θ_i,T_i,j = θ^(5) for all i,j where X_ij = 1. Under these assumptions, the expected value given by Corollary <ref> can be simplified for each type of two-by-two DID estimator. The results are given in Table <ref>. §.§ Overall estimator form I propose a class of estimators constructed by a weighted sum of the two-by-two DID estimators D_i,i',j,j' as follows: θ̂ = ∑_i=1^N-1∑_i'=i+1^N ∑_j=1^J-1∑_j'=j+1^N w_i,i',j,j' D_i,i',j,j', with no general restrictions on the weights w_i,i',j,j'. Letting: d = [ D_1,2,1,2 D_1,2,1,3 ⋯ D_1,2,1,J D_1,3,1,2 ⋯ D_N-1,N,J-1,J ]^T be the N2J2-vector of two-by-two DID estimators ordered by unit i, unit i', period j, period j', respectively, and w = [ w_1,2,1,2 ⋯ w_N-1,N,J-1,J ]^T be the corresponding vector of weights, the overall estimator can be written as: θ̂ = w^T d Further, define y = [ Y_1,1 Y_1,2 ⋯ Y_1,J Y_2,1 ⋯ Y_N,J ]^T as the NJ-vector of observed outcomes ordered by unit i and period j, respectively. Finally, let A be the N2J2× NJ matrix such that d = Ay. Note that each row of A corresponds to a unique two-by-two DID estimator and includes two entries of 1, two entries of -1, with remaining entries all 0. An algorithm to generate A for any N and J is given in Appendix <ref>. If the weight vector w is independent of the outcomes y, then: E[θ̂] = w^T E[d] = ∑_i=1^N-1∑_i'=i+1^N ∑_j=1^J-1∑_j'=j+1^N w_i,i',j,j' E[D_i,i',j,j']. This can be simplified under any assumption setting using the results shown in Table <ref>. §.§ Unbiased estimation of target estimand For any assumption setting, denote the vector of all unique treatment effects by θ, with a superscript to clarify the assumption setting if desired. For concreteness, order the treatment effects by cluster i, period j, and exposure time a, respectively, when necessary. Let θ_e be the desired estimand, which must be a linear combination of the unique treatment effects in θ. 
Then θ_e = v^T θ for some vector v of weights on the unique treatment effects. For example, a vector v with all zero entries except for a 1 in one entry would pick out a single unique treatment effect, and a vector v with equal entries summing to 1 would average over all unique treatment effects. In addition to averages of certain effects, θ_e could also be a difference between two treatment effects, or a more complicated linear combination, as desired. Furthermore, define F as the matrix such that Fθ = E[d] under the specified assumption setting. Since each E[D_i,i',j,j'] under Assumptions <ref>–<ref> and a specified assumption setting is a linear combination of up to four unique treatment effects, such a matrix exists, with all entries either 0, 1, or -1. For any assumption setting and vector of unique treatment effects θ and any target estimand θ_e = v^T θ, θ̂ is an unbiased estimator of θ if F^T w = v. Proof. Let θ̂ = w^T d with w satisfying F^T w = v. Then: E[θ̂] = E [ w^T d] = w^T E[d] = w^T F θ = v^T θ = θ_e, as desired. The existence of a solution and, if one exists, the dimension of the space of solutions can be assessed through rank conditions. Let F, w, v, and d be as defined previously. Then the following are true about the set of estimators of the form θ̂ = w^T d that are unbiased for θ_e under the assumption setting: * If rank(F^T|v) > rank(F^T), then there are no estimators θ̂ of this form that are unbiased for θ_e. * If rank(F^T|v) = rank(F^T) = N2J2, then there is a unique estimator θ̂ of this form that is unbiased for θ_e, defined by the unique w that solves F^T w = v. * If rank(F^T|v) = rank(F^T) < N2J2, then there are infinitely many estimators θ̂ of this form that are unbiased for θ_e. The dimension of unique such estimators is (N-1)(J-1) - rank(F). Proof. The proof relies on the linear algebra results on non-homogeneous systems of linear equations and the rank-nullity theorem (<cit.>, pp. 41 and 89, respectively), as well as the following two lemmata. The full proofs of the theorem and lemmata, are given in Appendix <ref>. Let F and A be as defined previously. Then ker(A^T) ⊂ ker(F^T). Let A be as defined previously for a setting with N ≥ 2 units and J ≥ 2 periods. Then rank(A) = rank(A^T) = (N-1) (J-1). §.§ Minimum variance estimator For settings where there are many unbiased estimators of this form, the investigator would like to select the one with the lowest variance. Again, treating the weights as fixed (i.e., independent of the outcomes), the variance of the estimator for any w is given by: Var(θ̂|w) = Var(w^T d) = Var(w^T Ay) = w^T A Var(y) A^T w. The conversion from the vector of two-by-two DID estimators d to the outcome vector y is used here since the structure of the correlation among different two-by-two DID estimators is not trivial. In general, Var(y) will not be known a priori. For the design and selection of the weights, then, a working covariance matrix can be used, denoted by M. The “working” variance of the estimator is then estimated for a given M by: Var(θ̂|w,M) = w^T AMA^T w. Denote by w^* the weight vector that minimizes Var(θ̂|w,M) under the constraint F^T w^* = v. Note that the variance can then be estimated in the design phase a priori using M. Since w^* is selected among weights that give unbiased estimation regardless of M, misspecification of M results only in reduced efficiency, not bias. Moreover, even for variance minimization, the working covariance structure M does not need to be specified exactly, only up to a constant factor. 
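Computationally, this is an equality-constrained quadratic program: minimize w^T A M A^T w subject to F^T w = v. A minimal sketch is given below; it solves the KKT system with a least-squares solver because A M A^T is only positive semidefinite, so w^* is not unique even though, for a positive-definite working covariance, the observation weights A^T w^* (and hence the estimator) are. The function name and the consistency check are additions made for the sketch.

```python
import numpy as np

def min_variance_weights(A, F, v, M):
    """Solve min_w w' A M A' w  s.t.  F' w = v, returning (w, A' w).

    The KKT system is solved with lstsq: it is consistent whenever an unbiased estimator
    of this form exists, and the minimum-norm solution it returns is one valid minimizer.
    """
    n_d = A.shape[0]
    H = 2.0 * A @ M @ A.T                                  # PSD Hessian of the working variance
    K = np.block([[H, F], [F.T, np.zeros((F.shape[1], F.shape[1]))]])
    rhs = np.concatenate([np.zeros(n_d), v])
    sol = np.linalg.lstsq(K, rhs, rcond=None)[0]
    w = sol[:n_d]
    if not np.allclose(F.T @ w, v, atol=1e-8):             # rank condition of the theorem above violated
        raise ValueError("no unbiased estimator of this form targets the requested estimand")
    return w, A.T @ w                                      # A' w are the weights on the observations y
```

The working variance of the resulting estimator, up to the unknown scale of M, is (A^T w)^T M (A^T w), which is also how relative efficiencies under different assumption settings can be compared.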
The features of M that are necessary for the selection of the weights are the correlation within (and, if applicable, between) units across periods and the relative variances of the units and, if applicable, time periods. The working variance matrix can be written as follows: M = diag(m_v)^1/2 M_r diag(m_v)^1/2, where M_r is the working correlation matrix and m_v is the vector of (relative) variances of the individual observations. If the units are independent, then M_r—and thus M—will be block-diagonal, with non-zero entries for the within-unit covariances only. Several common structures could then be specified for the within-unit covariances: * Independence: assuming independence of observations would require M_r to be the identity matrix. This would generate a diagonal M, where the diagonal entries indicate the relative variances of the different time periods and units. * Exchangeable/compound symmetric: assuming compound symmetry in the within-unit correlation would require M_r to be a block-diagonal matrix, with the non-zero off-diagonal entries all equal to the intra-unit (intra-cluster) correlation coefficient, ρ. * Auto-regressive, AR(1): assuming auto-regressive correlation of order 1 would require M_r to be block-diagonal with non-zero off-diagonal entries equal to the first-order correlation, ρ, raised to a power equal to the difference between the two periods. If, moreover, the units are exchangeable (e.g., clusters prior to randomization in a SWT), then all of the blocks will be equivalent to one another. More complex variance structures can be implied by specific outcome models as well (see, e.g., <cit.>). §.§ Estimation and inference The estimator is then given by: θ̂^* = w^*TAy. Inference can proceed using an appropriate plug-in estimator Var(y) in Equation <ref>: Var(θ̂^*) = w^*TAVar(y) A^T w^*. Alternatively, inference can proceed using permutation inference (akin to randomization tests in the SWT case as in <cit.>, <cit.>, and <cit.>, or placebo tests in the staggered adoption case as in <cit.>, <cit.>, <cit.>, and <cit.>). §.§ Algorithm for construction of estimator In summary, the estimator θ̂ can be constructed for any setting by the following process. * Based on the number of periods J and the number of units N, identify the matrix A that converts the observations y to the two-by-two DID estimators d. Note that for consistency, the units should be numbered from the earliest treatment adopter to the latest. * Determine the assumption setting (i.e., types of treatment heterogeneity permitted) and the target estimand θ_e. * Based on the assumption setting and the treatment adoption sequences, identify the vector of unique treatment effects θ and the matrix F such that E[d] = Fθ. Identify the vector v such that θ_e = v^T θ. * Find the space of vectors w such that F^T w = v and, if relevant, restrict to the set of weights that give unique estimators. * Determine an appropriate working covariance matrix M using subject-matter knowledge, pilot study data, or an assumed outcome model. Note that this only needs to be specified up to a constant scalar factor. * Find the weight vector w^* among the unbiased solutions w that minimizes the working variance w^T AMA^T w. * Once observed data y are obtained, the estimator is then given by θ̂^* = w^*TAy. * The estimated variance can then be found by Var(θ̂^*) = w^*TAVar(y) A^T w^*, using a plug-in estimator Var(y). Alternatively, permutation-based inference can be conducted.
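For the permutation-based alternative in the last step, one simple scheme is sketched below under the assumption that units are exchangeable, as under randomization of sequences in a SWT, and for the sharp null of no treatment effect: re-assign units to adoption sequences by permuting the rows of the outcome matrix while holding the observation weights fixed to the sequence positions. This is an illustrative scheme, not the specific procedure of any of the cited references.

```python
import numpy as np

def permutation_pvalue(Y, obs_weights, n_perm=1000, seed=0):
    """Two-sided randomization-test p-value for theta_hat = sum(obs_weights * Y) (sketch).

    obs_weights is the N x J array of observation weights (A' w* reshaped by unit and period),
    attached to sequence positions; permuting the rows of Y re-assigns exchangeable units to sequences.
    """
    rng = np.random.default_rng(seed)
    stat_obs = float(np.sum(obs_weights * Y))
    stats = np.array([float(np.sum(obs_weights * Y[rng.permutation(Y.shape[0]), :]))
                      for _ in range(n_perm)])
    return (1 + np.sum(np.abs(stats) >= abs(stat_obs))) / (n_perm + 1)
```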
See <http://github.com/leekshaffer/GenDID> for code implementing this algorithm. § TOY EXAMPLE To illustrate the algorithm and the approach described herein, consider first a setting with N=2 units and J=3 periods. The first unit adopts the treatment starting in the second period and the second unit adopts the treatment starting in the third period (i.e., T_1 = 2 and T_2 = 3). There are thus three two-by-two DID estimators: D_1,2,1,2 = (Y_12 - Y_11) - (Y_22 - Y_21) D_1,2,1,3 = (Y_13 - Y_11) - (Y_23 - Y_21) D_1,2,2,3 = (Y_13 - Y_12) - (Y_23 - Y_22) The matrix relating the vector of estimators d = [ D_1,2,1,2 D_1,2,1,3 D_1,2,2,3 ]^T to the vector of outcomes y = [ Y_11 Y_12 Y_13 Y_21 Y_22 Y_23 ]^T is given by: A = [ -1 1 0 1 -1 0; -1 0 1 1 0 -1; 0 -1 1 0 1 -1 ], which has rank 2. I consider several different assumption settings. §.§ Assumption (5): Homogeneity Assuming full treatment effect homogeneity, there is a single treatment effect (θ = θ) and the expectation of the estimators is given by: E[d] = [ θ; 0; -θ ] = [ 1; 0; -1 ]θ≡Fθ. There is only one target estimand of natural interest here so v = 1, yielding θ_e = θ. The ranks of F^T and ( F^T | v) = [ 1 0 -1 1 ] are both 1. By Theorem <ref>, then, the space of unique estimators of this form is one-dimensional. The solution space to F^T w = v is the space of vectors of the form [ x y x-1 ]^T for any real values x,y. One dimension is reduced, however, when right-multiplied by A, which yields: w^T A = [ -x-y 1 x+y-1 x+y -1 -x-y+1 ], which only depends on the single free parameter x+y. A few special cases can be seen for specific values of x+y: * If x+y=1, then the estimator is only the type-2 estimator D_1,2,1,2, the “clean” comparison as described by <cit.> or the “crossover” estimator CO-1 proposed by <cit.>; * If x+y=0, then the estimator uses only the comparison of unit two adopting the treatment while unit one is always treated; and * If x+y=1/2, then the estimator is “centrosymmetrized” as described by <cit.>, using both of the switches equally, equivalent to the horizontal row-column estimator proposed by <cit.> or the “crossover” estimator CO-3 that uses always-treated units as controls in <cit.>. Note that the purely vertical estimator described by <cit.> and <cit.> cannot be constructed as a weighted average of the two-by-two DID estimators. If we assume independent and homoskedastic observations, then we can use M = I_6, the six-by-six identity matrix. The constrained minimization of the variance, then, gives the solution x+y=1/2 and the estimator θ̂^* = D_1,2,1,2 - D_1,2,2,3/2. Note that this corresponds to the “centrosymmetrized” estimator from <cit.> which, as proven therein, has the minimum variance under these assumptions. The same estimator minimizes the variance under a working correlation assumption that is compound symmetric or AR(1), regardless of the correlation value, as long as it is exchangeable and independent across units. §.§ Assumption (4): Calendar-Time Heterogeneity If there is treatment effect heterogeneity only by calendar-time, then there are two unique treatment effects and θ = [ θ^(4)_2; θ^(4)_3 ], where θ^(4)_j is the effect of treatment in period j. Now, F = [ 1 0; 0 0; -1 0 ]. Since the last column only has zero entries, it is impossible to estimate θ^(4)_3. This occurs since no DID comparisons here compare a treated and untreated unit in period 3. Formally, any v that does not have a 0 as the second entry will yield 2 = rank(F^T|v) > rank(F^T) = 1. 
There is a one-dimensional space of unbiased estimators of θ^(4)_2, however, given by weight vectors of the form [ x 0 x-1 ]^T for real values x; these are similar to those in the previous subsection, but excluding D_1,2,1,3. §.§ Assumption (3): Exposure-Time Heterogeneity If there is treatment effect only by exposure-time (or time-on-treatment), then there are two unique treatment effects and θ = [ θ^(3)_1; θ^(3)_2 ], where θ^(3)_a is the effect of treatment in the ath treatment period. Now, F = [ 1 0; -1 1; -2 1 ]. This is a full-rank matrix, so any linear combination of θ can be estimated without bias. Since rank(F^T) = 2 = rank(A^T), all solutions are unique (in terms of weights on the observations and thus the estimator itself). Letting v = [ 1/2; 1/2 ], so that θ_e = 1/2θ^(3)_1 + 1/2θ^(3)_2, the simple average of the two treatment effects, unbiased estimators have weights of the form w^T = [ 1+y 1/2 - y y ], which gives observation weights w^T A =[ - 3/2 1 1/2 3/2 -1 -1/2 ]. In other words, all unbiased estimators of this form are equivalent to θ̂ = D_1,2,1,2 + 1/2 D_1,2,1,3. Letting v = [ 1; 0 ], so that θ_e = θ^(3)_1, the first treated period effect, unbiased estimators have weights of the form w^T = [ 1+y -y y ], which gives observation weights w^T A = [ -1 1 0 1 -1 0 ]. In other words, all unbiased estimates of this form are equivalent to θ̂ = D_1,2,1,2. Again, this is equivalent to the “crossover” estimator CO-1 from <cit.>, which cannot be centrosymmetrized according to <cit.> because there is treatment effect heterogeneity. Similarly, <cit.> clarifies that the comparison of the switching unit 2 to the always-treated unit 1 between periods 2 and 3 is not unbiased for the first-period effect. Note that the results in this assumption setting do not depend on a working covariance matrix, since the space of unbiased solutions is of dimension zero. In general, this will only occur with few clusters and/or few assumed homogeneities. § DATA EXAMPLE: STEPPED WEDGE TRIAL To illustrate the uses of this approach, consider the stepped wedge trial conducted in 2012 and reported by <cit.> and <cit.>. I use the outcomes reported by <cit.>, assessing the impact on treatment outcomes for patients diagnosed with tuberculosis. Under the control condition, patients were diagnosed by sputum smear examination, while under the intervention condition patients were diagnosed with the XpertMTB/RIF test, hypothesized to be faster and more sensitive in confirming diagnosis and, in particular, diagnosing rifampicin-resistant tuberculosis. This study randomized fourteen clusters into seven treatment sequences, with observations in eight months. The first sequence, consisting of two clusters, began the intervention in the second month; each subsequent month, two more clusters began the intervention (<cit.>). The patient outcomes—unsuccessful treatment of the patient (i.e., any outcome other than “cure and treatment completion without evidence of failure” (<cit.>)—were retrieved using the Brazilian national tuberculosis information system over one year after diagnosis. The original analysis found an insignificant reduction in this composite outcome: an odds ratio of 0.93 with a 95% confidence interval of 0.79–1.08 (<cit.>). Accounting for time trends in the analyses, however, re-analysis by <cit.> using the purely vertical non-parametric within-period method found a significant reduction in these events: an odds ratio of 0.78 with a p-value of 0.02. 
Similarly significant results were found by <cit.> using the crossover method, which estimated an odds ratio of 0.703 with a permutation-based p-value of 0.046. These results indicate the possible impact that different methods of accounting for time trends and treatment heterogeneity can have. I re-analyze these data using several versions of the generalized DID estimator presented here. I first consider four estimators providing summary treatment effects:
* θ̂^(5), the estimate under treatment effect homogeneity (assumption S5);
* 1/6 ∑_j=2^7 θ̂^(4)_j, the average period-specific treatment effect across months 2–7 of the trial under calendar-time heterogeneity (assumption S4);
* 1/7 ∑_a=1^7 θ̂^(3)_a, the average time-on-treatment effect across exposure times 1–7 under exposure-time heterogeneity (assumption S3); and
* 1/21 ∑_j=2^7 ∑_a=1^j-1 θ̂^(2)_j,a, the average across all 21 identifiable calendar- and exposure-time combinations of the trial under calendar- and exposure-time heterogeneity (assumption S2).
Note that the period-specific treatment effects in month 8 (the last month of the study) are not identifiable, since all clusters were in the treated condition at that point. These clusters do contribute to estimation of θ̂^(5) and θ̂^(3)_7, however, under assumptions S5 and S3, as those assumptions allow us to borrow information across different calendar times in estimating common treatment effects. The odds ratio estimates and permutation-test p-values (using 1000 permutations) are shown in Table <ref>. All results shown were calculated using an exchangeable (within-cluster) variance structure with intracluster correlation coefficient ρ = 0.003, as estimated by <cit.>. Results with independent and auto-regressive variance structures are quite similar and are shown in Appendix <ref>. Using the (scaled) variance calculated from Equation <ref> also allows a comparison of the relative efficiency of these estimators, even without a specific variance estimate. Under this exchangeable variance structure, the relative efficiencies are 1.05, 2.76, and 1.77, comparing the S4, S3, and S2 estimators, respectively, to the S5 estimator θ̂^(5). This quantifies the variance trade-off incurred in order to gain robustness to bias and to target a specified estimand under different forms of heterogeneity. To understand the form of the estimator, we can also examine the weights it gives to each observation (note: the weights on each two-by-two DID estimator are also available, but they are less interpretable, as many different weight vectors can yield the same weights on the observations). The weights for the estimator θ̂^(5), again using an exchangeable variance structure, are shown in the heat map in Figure <ref>. Note that the clusters are ordered by the sequence of treatment adoption, and pairs of clusters (e.g., 1 and 2) have nearly identical weights because they are in the same sequence and assumed to be exchangeable. These weights can be compared to the information content of specific cells under different heterogeneity and correlation assumptions, as proposed by <cit.> and <cit.>. Note, heuristically, that in this case the largest (in magnitude) weights are near the diagonal, followed by the top-right and bottom-left cells, which are also the highest information-content cells, as shown in Figure 4 of <cit.>.
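For concreteness, the exchangeable working covariance used in these comparisons, and the scaled variance w^T A M A^T w that I take Equation <ref> to denote, can be sketched as follows (illustrative Python only, not the GenDID implementation; the function names are my own).

```python
import numpy as np
from scipy.linalg import block_diag

def exchangeable_M(N, J, rho):
    """Working covariance (up to a common scale) for the N*J cluster-period outcomes:
    compound-symmetric with correlation rho within a cluster, independent across clusters."""
    block = (1 - rho) * np.eye(J) + rho * np.ones((J, J))
    return block_diag(*([block] * N))

def scaled_variance(w, A, M):
    """Variance of the estimator w'd = w'Ay, up to the unknown scale factor sigma^2."""
    return w @ A @ M @ A.T @ w

M = exchangeable_M(N=14, J=8, rho=0.003)
# Relative efficiency of an estimator with DID weights w_alt versus the S5 estimator w_s5:
# scaled_variance(w_alt, A, M) / scaled_variance(w_s5, A, M), with A built as in Appendix A.
```

Swapping an AR(1) or identity block in place of the compound-symmetric one gives the alternative working structures reported in the appendix.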
This method provides flexibility to investigate more specific treatment effects in the heterogeneous settings as well. Considering calendar-time effects under assumption S4, for example, one can estimate the treatment effect in each of months 2 through 7, θ̂^(4)_2, …, θ̂^(4)_7. For additional robustness against exposure-time heterogeneity, one can also estimate month-specific treatment effects under assumption S2 by averaging across exposure times within each of those months: θ̂^(2)_j ≡ 1/(j-1) ∑_a=1^j-1 θ̂^(2)_j,a for all j=2,…,7. The estimation and permutation test results on both the multiplicative (odds ratio) and additive (risk difference) scales are shown in Table <ref>. These results can be compared to the results in Table 2 of <cit.>, which reports period-specific risk differences estimated using the purely vertical non-parametric within-period estimator (note that those are labelled by treatment periods, one less than the corresponding month numbers given here). Differences between the methods arise because the results presented here use horizontal information as well, but they target the same estimand. Note that no p-values are adjusted for multiple testing.

§ DISCUSSION

The proposed method provides a highly flexible and adaptable approach to analyzing stepped wedge and staggered adoption settings. In particular, it allows the investigator to specify a target estimand that is any linear combination of unique treatment effects and guarantees unbiasedness under the standard assumptions and the particular treatment effect heterogeneity assumptions specified. Among this class of unbiased estimators, the investigator can then identify the one with the lowest variance for an assumed working covariance structure. Analogous to the working correlation matrix used in generalized estimating equations (<cit.>), this working covariance structure need not be correctly specified to maintain unbiased estimation. A misspecified structure, however, may lead to suboptimal variance and reduced power to detect an effect. The composition of the estimator from two-by-two DID estimators allows both intuitive connections to methods commonly used in econometrics and quantitative social science for quasi-experimental data and the use of assumptions that are familiar to those steeped in that literature. In addition, it allows clear formulation of the treatment effect heterogeneity assumptions that are made, rather than having those assumptions arise implicitly through a linear regression formulation. By specifying the variance assumption separately, it also removes the need for a correct joint specification of the mean and variance models and allows those assumptions (for the working structure) to be formulated independently of the treatment effect assumptions. These approaches also allow for ease of sensitivity analysis, as the expectation of a specified weight vector can be found using Equation <ref> under any other assumption setting. This, along with the potential to assess relative efficiency under the working covariance structure as shown in the example, allows the bias-variance tradeoff to be made explicit, and investigators can consider the appropriate generalizability and interpretation of their target estimands. In the example re-analyzing the data collected by <cit.>, I demonstrated the use of this method for a variety of target estimands in a SWT. Results largely align with those of existing non-parametric estimators, which had already been shown to have desirable bias and inference properties (<cit.>).
In nearly comparable situations, like the assumption of a homogeneous effect compared to the inverse-variance weighted average vertical approach (<cit.>) or either crossover-type estimator (<cit.>), this approach had a lower p-value, which may indicate increased power to detect the effect. In addition, many other estimates are tractable using this approach, allowing for flexible pre-specified or exploratory analysis of treatment effect heterogeneity. Although simulation and testing in other settings are still necessary to determine overall operating characteristics of these estimators, it is reasonable that they are more powerful under homogeneity assumptions than purely vertical approaches since they use horizontal comparisons and than pure crossover approaches since they use information from other comparisons with expectation zero. In addition, the fact that the method generally highly weights cells with high information content (<cit.>) and respects centrosymmetry under appropriate assumptions (<cit.>) indicates efficient use of the observed data. However, their performance compared to regression-based approaches (e.g., <cit.>) and existing robust estimators (e.g., <cit.>) remains to be determined. In particular, future study should determine when this approach provides equivalent estimators to any of those approaches (in particular, those proposed by <cit.> and <cit.>) and under what settings it shows more or less desirable operating characteristics to the others. Additional future work should consider identifying other approaches to inference, including the possibility of closed-form variance estimators under certain variance assumptions. Identifying appropriate plug-in estimators for use in Equation <ref>, their robustness to misspecification of the working covariance structure, and the asymptotic distribution would be key, as has been done for mixed effects models with misspecification (see, e.g., <cit.>). This would allow for more targeted design of experiments when analysis will proceed using these methods. Understanding likely covariance structures, especially given planned or conducted approaches for sampling of units (see, e.g., <cit.>), would improve selection of M as well. Additionally, modifications of this method to accommodate adjustment for covariates would be useful for non-randomized staggered adoption settings where parallel trends holds only conditionally (see, e.g., <cit.>) and SWTs with restricted, stratified, or matched randomization (see, e.g., <cit.>). Both stepped wedge and staggered adoption settings are common study designs with important roles to play in assessing a variety of policies and implementation approaches; for example, both have been proposed as useful tools for rapid policy evaluation in the COVID-19 pandemic or similar settings (see, e.g., <cit.>). They each, however, have statistical intricacies that should be carefully considered in the design and analysis (see, e.g., <cit.>). The generalized DID estimator proposed here has desirable bias and variance properties and interpretability that enable its use across many such settings and desired treatment effect estimands. §.§ Data Availability Statement All code used for this manuscript is available at <https://github.com/leekshaffer/GenDID>. Included there is a simulated data set based on this data example. Access policies for the original data set can be found at <cit.>, <https://doi.org/10.1371/journal.pone.0123252>. The author wishes to thank Prof. 
Anete Trajman for making the data available to the author for re-analysis and Prof. Jennifer Thompson for helpful discussions regarding the data.

APPENDICES

§ DETERMINING THE MATRIX OF DID ESTIMATORS

For any integer j ≥ 2, define A_j as the (j-1) × j matrix
A_j = [ -1 1 0 0 ⋯ 0; -1 0 1 0 ⋯ 0; ⋮ ⋮ ⋱ ⋮; -1 0 0 0 ⋯ 1 ] = [ -1_j-1 | I_j-1 ],
where 1_k is the k × 1 column vector of 1's and I_k is the k × k identity matrix. Now define A_· for a setting with J time periods as the \binom{J}{2} × J matrix given by:
A_· = [ A_J; 0_J-2 A_J-1; 0_J-3 0_J-3 A_J-2; ⋮ ; 0_1 ⋯ 0_1 A_2 ],
where 0_k is the k × 1 column vector of 0's. Finally, define A for a setting with J time periods and N clusters as the \binom{N}{2}\binom{J}{2} × NJ matrix given by:
A = [ A_· -A_· 0_{\binom{J}{2} × (N-2)J}; A_· 0_{\binom{J}{2} × J} -A_· 0_{\binom{J}{2} × (N-3)J}; A_· 0_{\binom{J}{2} × 2J} -A_· 0_{\binom{J}{2} × (N-4)J}; ⋮ ; A_· 0_{\binom{J}{2} × (N-2)J} -A_·; 0_{\binom{J}{2} × J} A_· -A_· 0_{\binom{J}{2} × (N-3)J}; 0_{\binom{J}{2} × J} A_· 0_{\binom{J}{2} × J} -A_· 0_{\binom{J}{2} × (N-4)J}; ⋮ ; 0_{\binom{J}{2} × (N-2)J} A_· -A_· ],
where 0_{k × k'} is the k × k' matrix of 0's. Note that, in the above representation, each displayed row represents \binom{J}{2} rows. In each of these, (N-2)J columns come from 0 matrices, while the other 2J come from two copies of the A_· matrix (one with a positive and one with a negative sign). This is the matrix that, when multiplied (on the right) by the column vector of outcomes Y, gives the column vector of two-by-two DID estimators D. In particular, D_i,i',j,j' will be found on the following row number of the vector D:
1 + \binom{J}{2} (i-1)(N - i/2) + (i' - i - 1) \binom{J}{2} + (j-1)(J - j/2) + (j' - j - 1).
Conversely, row k in the vector D, for 1 ≤ k ≤ \binom{N}{2}\binom{J}{2}, corresponds to the two-by-two DID estimator D_i,i',j,j' where the indices are given by the following algorithm:
* Let c_1 = ⌊(k-1)/\binom{J}{2}⌋ + 1 and c_2 = ((k-1) mod \binom{J}{2}) + 1.
* Let n^* be the minimum n ∈ {1,2,…,N} such that c_1 ≤ ∑_ℓ=1^n (N-ℓ). Then i = n^* and i' = i + c_1 - ∑_ℓ=1^n^*-1 (N-ℓ).
* Let m^* be the minimum m ∈ {1,2,…,J} such that c_2 ≤ ∑_ℓ=1^m (J-ℓ). Then j = m^* and j' = j + c_2 - ∑_ℓ=1^m^*-1 (J-ℓ).
Each row k can then be associated with the corresponding D_i,i',j,j'. From the values of T_i and T_i', along with j and j', the type (from 1–6) of this two-by-two DID estimator can then be determined, as can its expectation under the desired assumption setting. Thus, the \binom{N}{2}\binom{J}{2} × 1 column vector E[D] and associated matrix F can be constructed in terms of the estimands possible for the assumption setting.
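The block construction above is mechanical but easy to mis-index by hand. The following Python sketch (illustrative only, not taken from the GenDID repository; the helper names are mine) builds A by enumerating cluster and period pairs in the same order as the block representation, and reproduces the 3 × 6 matrix of the toy example.

```python
import numpy as np
from itertools import combinations

def build_A(N, J):
    """Stack the weights of all two-by-two DID estimators D_{i,i',j,j'} as rows.

    Outcomes are ordered Y_11, ..., Y_1J, Y_21, ..., Y_NJ; the row for D_{i,i',j,j'}
    has +1 on Y_{i j'} and Y_{i' j}, and -1 on Y_{i j} and Y_{i' j'}.
    """
    def pos(unit, period):
        return (unit - 1) * J + (period - 1)

    rows = []
    for i, ip in combinations(range(1, N + 1), 2):       # cluster pairs, i < i'
        for j, jp in combinations(range(1, J + 1), 2):   # period pairs, j < j'
            r = np.zeros(N * J)
            r[pos(i, jp)] += 1;  r[pos(i, j)]  -= 1
            r[pos(ip, jp)] -= 1; r[pos(ip, j)] += 1
            rows.append(r)
    return np.vstack(rows)

A = build_A(N=2, J=3)
print(A)                                 # matches the toy-example matrix
print(np.linalg.matrix_rank(A))          # (N-1)(J-1) = 2, consistent with the rank lemma below
```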
§ PROOFS OF KEY LEMMATA AND THEOREMS

§.§ Lemmata for Proof of Theorem <ref>

Lemma <ref> Let F and A be as defined previously. Then ker(A^T) ⊂ ker(F^T).

Proof. Let x ∈ ker(A^T); that is, A^T x = 0. Then d^T x = (Ay)^T x = y^T A^T x = y^T 0 = 0 for any y. Since this is true for any y, it must be true in expectation, so 0 = E[d^T x] = E[d^T] x = E[d]^T x = (Fθ)^T x = θ^T F^T x. For this to be true for any vector of treatment effects, then, F^T x = 0 and x ∈ ker(F^T). So ker(A^T) ⊂ ker(F^T), as desired.

Lemma <ref> Let A be as defined previously for a setting with N ≥ 2 units and J ≥ 2 periods. Then rank(A) = rank(A^T) = (N-1)(J-1).

Proof. For any i < i', j < j', let A_i,i',j,j' be the row of A corresponding to D_i,i',j,j' (i.e., A_i,i',j,j'^T y = D_i,i',j,j'). A_i,i',j,j' has the value 1 in the columns corresponding to Y_ij' and Y_i'j, -1 in the columns corresponding to Y_ij and Y_i'j', and 0 in all other columns. Consider the matrix A^* composed of only the rows of the form A_1,i',1,j' of A, where 2 ≤ i' ≤ N and 2 ≤ j' ≤ J. This is a (N-1)(J-1) × NJ matrix. Any row A_i,i',j,j' can be expressed as a linear combination of the rows of A^* as follows: A_i,i',j,j' = (A_1,i',1,j' - A_1,i,1,j') - (A_1,i',1,j - A_1,i,1,j). Thus, rank(A) = rank(A^*). Moreover, the rows of A^* are linearly independent, since each row of the form A_1,i',1,j' is the unique row of that form to have a non-zero entry in the column corresponding to Y_i'j'. So rank(A) = rank(A^*) = (N-1)(J-1). Since A is a real matrix, rank(A^T) = rank(A) = (N-1)(J-1) as well.

§.§ Proof of Theorem <ref>

Theorem <ref> Let F, w, v, and d be as defined previously. Then the following are true about the set of estimators of the form θ̂ = w^T d that are unbiased for θ_e under the assumption setting:
* If rank(F^T|v) > rank(F^T), then there are no estimators θ̂ of this form that are unbiased for θ_e.
* If rank(F^T|v) = rank(F^T) = \binom{N}{2}\binom{J}{2}, then there is a unique estimator θ̂ of this form that is unbiased for θ_e, defined by the unique w that solves F^T w = v.
* If rank(F^T|v) = rank(F^T) < \binom{N}{2}\binom{J}{2}, then there are infinitely many estimators θ̂ of this form that are unbiased for θ_e. The dimension of the space of unique such estimators is (N-1)(J-1) - rank(F).

Proof. By results on non-homogeneous systems of linear equations (also known as the Rouché-Capelli Theorem; see <cit.>, p. 41) applied to the equation F^T w = v, there are two possibilities:
* If rank(F^T|v) > rank(F^T), then there are no solutions.
* If rank(F^T|v) = rank(F^T), then there is a solution to the system, and the affine space of solutions W has dimension \binom{N}{2}\binom{J}{2} - rank(F^T), since F^T is of dimension |θ| × \binom{N}{2}\binom{J}{2}, where |θ| is the number of unique treatment effects.
If rank(F^T) = \binom{N}{2}\binom{J}{2}, then, there is a single unique solution w, which corresponds to a single unique estimator θ̂ of the desired form that is unbiased for θ_e. If rank(F^T) < \binom{N}{2}\binom{J}{2}, there are infinitely many solutions w. Two distinct solutions w_1 ≠ w_2, however, may correspond to the same estimator θ̂, since different weightings of the two-by-two DID estimators may result in the same weightings of the underlying observations. Let w_1 ∈ W be a solution to F^T w = v. Any other solution to F^T w = v can be expressed as w_1 + x, where x ∈ ker(F^T). The vector space ker(F^T) has dimension nullity(F^T) = \binom{N}{2}\binom{J}{2} - rank(F^T), corresponding to the dimension of the affine space W as found above. By Lemma <ref>, ker(A^T) ⊂ ker(F^T), and so ker(F^T) can be written as the direct sum ker(A^T) + K, where K is defined as the orthogonal complement of ker(A^T) within ker(F^T). So we can further express W = {w_1 + x_1 + x_2 : x_1 ∈ ker(A^T), x_2 ∈ K}. However, A^T x_1 = 0 if x_1 ∈ ker(A^T). Hence, for any w ∈ W and any x_1 ∈ ker(A^T), w^T A = (w + x_1)^T A, and so the estimators θ̂_1 and θ̂_2 defined by the weight vectors w and w + x_1 are equal for all y. Thus, the only unique estimators (in terms of the underlying observations) are given by the subspace W_A = {w_1 + x_2 : x_2 ∈ K}. Thus, the affine space of unique estimators θ̂ that are unbiased for the desired θ_e under the specified assumption setting has dimension given by: dim(K) = dim(W) - dim(ker(A^T)) = nullity(F^T) - nullity(A^T). This can be further simplified using the rank-nullity theorem (see <cit.>, p. 89) and Lemma <ref>:
dim(K) = dim(W) - dim(ker(A^T)) = nullity(F^T) - nullity(A^T) = ( \binom{N}{2}\binom{J}{2} - rank(F^T) ) - ( \binom{N}{2}\binom{J}{2} - rank(A^T) ) = rank(A^T) - rank(F^T) = (N-1)(J-1) - rank(F),
as desired.

§ SUPPLEMENTARY RESULTS
http://arxiv.org/abs/2405.09649v1
20240515182715
Challenges and opportunities for digital twins in precision medicine: a complex systems perspective
[ "Manlio De Domenico", "Luca Allegri", "Guido Caldarelli", "Valeria d'Andrea", "Barbara Di Camillo", "Luis M. Rocha", "Jordan Rozum", "Riccardo Sbarbati", "Francesco Zambelli" ]
physics.bio-ph
[ "physics.bio-ph", "nlin.AO", "q-bio.QM" ]
Challenges and opportunities for digital twins in precision medicine: a complex systems perspective

Manlio De Domenico^1,2,3∗, Luca Allegri^1, Guido Caldarelli^4,5,6, Valeria d'Andrea^1, Barbara Di Camillo^2,7,8, Luis M. Rocha^9,10, Jordan Rozum^9, Riccardo Sbarbati^1, Francesco Zambelli^1

^1Department of Physics and Astronomy “Galileo Galilei”, University of Padova, Padova, Italy. ^2Padua Center for Network Medicine, University of Padua. ^3Padua Neuroscience Center, University of Padua. ^4DSMN and ECLT, Ca' Foscari University of Venice, Italy. ^5Institute of Complex Systems (ISC) CNR unit, Sapienza University, Rome, Italy. ^6London Institute for Mathematical Sciences, Royal Institution, London, UK. ^7Department of Information Engineering, University of Padua, Italy. ^8Department of Comparative Biomedicine and Food Science, University of Padua, Italy. ^9Department of Systems Science and Industrial Eng., Binghamton University, Binghamton, NY, USA. ^10Instituto Gulbenkian de Ciência, Oeiras, Portugal. ^∗Corresponding author. E-mail: manlio.dedomenico@unipd.it

The adoption of digital twins (DTs) in precision medicine is increasingly viable, propelled by extensive data collection and advancements in artificial intelligence (AI), alongside traditional biomedical methodologies. However, the reliance on black-box predictive models, which utilize large datasets, presents limitations that could impede the broader application of DTs in clinical settings. We argue that hypothesis-driven generative models, particularly multiscale modeling, are essential for boosting the clinical accuracy and relevance of DTs, thereby making a significant impact on healthcare innovation. This paper explores the transformative potential of DTs in healthcare, emphasizing their capability to simulate complex, interdependent biological processes across multiple scales. By integrating generative models with extensive datasets, we propose a scenario-based modeling approach that enables the exploration of diverse therapeutic strategies, thus supporting dynamic clinical decision-making. This method not only leverages advancements in data science and big data for improving disease treatment and prevention but also incorporates insights from complex systems and network science, quantitative biology, and digital medicine, promising substantial advancements in patient care.
§ INTRODUCTION

Precision medicine aims at delivering diagnostic, prognostic and therapeutic strategies specifically tailored to individuals by explicitly accounting for their genetic information, lifestyle and environment <cit.>, all organised in a network structure <cit.>. The success of this approach relies on at least two fundamental and non-trivial assumptions. The first assumption is that it is possible to predict, by means of computational, cellular, and organism models and to some level of accuracy, the response of a patient to a specific treatment. The second assumption is that it is possible to use heterogeneous data sources (multiomics, electronic health records, individual and social behavior, and so forth) to build massive databases with enough statistics to allow one to stratify a population with respect to characterizing and distinctive features of clinical interest <cit.>. It is not surprising that the field of precision medicine is growing <cit.>, attracting the interest of national health systems for investments <cit.> and of scholars spanning a wide range of disciplines, from molecular biology to computer science, medicine, physics, and engineering. Nevertheless, precision medicine, with its revolutionary promises, is usually associated with clinical genomics <cit.> and multiomics <cit.>, with a strong focus on the idea that combining those heterogeneous, multi-scale sources of data will lead to timely predictions about individual medical outcomes. More recently, the attention shifted to the possibility of integrating such molecular data with traditional <cit.> and non-traditional <cit.> data sources of clinical relevance into a multiscale predictive modeling methodology known as a digital twin, which allows for testing therapeutic strategies in-silico with the ultimate goal of maximizing successful treatments and outcomes. The first pioneering precursors of digital twins for personalized medicine came out in the early 2000s, proposing the idea of models of the human body for specific patients to improve clinical practices. They pointed out challenges that remain relevant today, such as the need for a framework to integrate multiple data sources <cit.> and the importance of having solid mathematical models able to describe the system at the desired level of precision <cit.>. In recent years, medical digital twins (MDTs) have attracted rapidly growing interest, with the birth of many programmes devoted to them <cit.>. Some of the most significant successes in the field include the “artificial pancreas” and the ARCHIMEDES program for diabetes <cit.>, as well as mechanistic models of the heart used for cardiovascular disease monitoring and prevention <cit.>. Recent works emphasize the potential of a comprehensive model of the human body that could help to understand the possible consequences of a perturbation, for example caused by a viral infection <cit.>, or the influence of specific drugs <cit.>, on a specific patient. Regarding the actual implementation of these models, network and complex-systems approaches are starting to be considered <cit.>, more than a decade after the first proposals <cit.>, while approaches based on AI and ML are widely adopted despite some limitations and critical aspects <cit.>.
Given the broad spectrum of definitions and applications, it is important to set the operational definition that will be adopted throughout this paper:

A digital twin is an in-silico framework which can be used to replicate a biological cell, sub-system, organ or a whole organism with a transparent predictive model of its relevant causal mechanisms, and which responds in the same manner to interventions.

Broadly speaking, a digital twin exchanges data with its real-world counterpart, synchronizing inputs and outputs, and together, they operate synergistically, with the digital twin informing, controlling, aiding, and augmenting the original system. In fact, in recent years interest in digital twins has exploded well beyond medicine, due to increasing access to memory, computational power and massive data gathering. Primarily utilized to simulate intricate infrastructural configurations, they have also been applied to cities <cit.> and products <cit.>, leveraging contemporary technologies such as data analytics, IoT-driven physical modeling <cit.>, machine learning and artificial intelligence. For instance, in the case of cities <cit.>, it has been argued that it can be far more efficient to consider the emergent behaviour arising from the intricate web of relationships, processes and correlations that characterize a complex adaptive system <cit.>, rather than reproducing a mere copy of it. The comparison with current methodological advances and challenges in other fields allows us to highlight the existing challenges in the case of precision medicine. Despite promising opportunities to create a digital copy of every individual to allow for personalized analysis and test individual-specific therapeutic strategies <cit.>, there are some caveats that must be considered. On the one hand, if digital twins must be designed to be a perfect replica of an individual, then the amount of required data vastly exceeds our present, and even future, capabilities. The gigantic number of intervening functional units, from biomolecules to cells, makes any analytical or computational approach impossible. Even in the ideal case that a perfectly functioning computational framework were technologically accessible, the nonlinear dynamics of interacting biological units leads to emergent phenomena that cannot be simply simulated or predicted, a landmark feature of complex systems <cit.>. In fact, recent advances in predictive biology are based on building models of increasing complexity to reproduce the most salient characteristics of complex biological processes in engineered and natural populations <cit.>. On the other hand, human patients have their own dynamical response to internal dysfunctions or differentiated coupling to the environment, including individual histories of host-microbiome and host-pathogen interactions, which might jeopardize any predictive model. Even more widely, the full individual exposome includes all past exposure to specific multi-scale environmental factors, such as diet and reactions to stressful biochemical or social conditions <cit.>. While the causal mechanisms in multiomic regulation can be partially reconstructed and accounted for, the full individual exposome is almost impossible to replicate or reproduce with a digital twin.
We have made great strides in capturing the exposome via the collection of new types of data, such as those from mobile devices <cit.> and social media <cit.>. However, even in the most ideal cases, unknown factors such as the level of disease progression and unmeasured lifestyle changes can lead to a broad set of distinct outcomes that make the design of digital twins very sensitive to the quantity and accuracy of input data. This technology may struggle to adapt and accurately predict these dynamic changes, leading to sub-optimal personalized treatment recommendations. The aforementioned potential issues can dramatically hinder the purpose of digital twins, suggesting that only methods based on advanced statistical data analysis, such as those based on machine learning, are viable. However, this is not the case, since such methods provide predictive models that (i) do not easily generalize to situations and conditions they have not been trained for, and (ii) might recommend solutions that are clinically sub-optimal when retrieving multiple outcomes which are similarly ranked by the algorithm (Fig. <ref>). Therefore, a more comprehensive approach based on methods capturing the essential features of complex interconnected and interdependent systems <cit.> at many scales is needed. This approach needs to: (i) reduce the dimensionality of the problem of interest, by identifying the key biological, clinical and environmental variables to use for an adequate description on short time scales; (ii) characterize under which conditions a complex adaptive system like the human body (or even a cell line) can be simulated by a digital twin in terms of separated components and/or sub-systems; (iii) provide a transparent computational framework for testing actionable intervention strategies based on what-if scenarios and clinically relevant, model-informed, data-driven and evidence-based questions. In short, this calls for a more holistic and quantitative approach based on the complex adaptive nature of every patient rather than a mere replica of their salient aspects for statistical analysis.

§ MULTISCALE MODELING IN HEALTH AND DISEASE: FROM GENES TO SYSTEMS

Accounting for the multiscale nature of biological systems is of paramount importance for designing effective digital twins. Recent progress in the study of complex systems, especially those with interconnected and interdependent structure, dynamics and function, provides a promising ground for understanding and illustrating how diverse functional units and sub-systems interact at different scales. Indeed, in addition to extracting multiscale molecular details from large omics datasets (e.g., transcriptomic, genomic, metabolomic, and microbiomic), we can now extract large-scale human behavior data of biomedical relevance from social media, mobile devices and electronic health records, including new patient-stratification principles and unknown disease correlations <cit.>. Accordingly, the holistic integration and analysis of such multiscale data sources constitutes a novel opportunity to further improve personalization by including the exposome in the study of multilevel human complexity in disease <cit.>, and can be used to inform more accurate models for predictive purposes in biomedicine <cit.>. At the lowest scale, gene regulatory networks are systems of interacting genes and their regulatory elements within a cell, controlling the level of gene expression.
Usually, in those networks the nodes represent genes and edges represent regulatory interactions between them, and they describe the timing, spatial distribution, and intensity of gene expression, thereby orchestrating various cellular processes such as development, differentiation, and response to environmental stimuli <cit.>. A protein-protein interaction (PPI) network captures distinct types of interactions (e.g., physical contacts) between proteins in a cell. In PPI networks, nodes represent individual proteins, and edges encode interactions between them: they can be transient, like in signal transduction, or more stable, such as in the formation of protein complexes <cit.>. PPI networks provide insights into cellular processes, functional associations, and the modular organization of proteins, and analyzing the structure and dynamics of PPI networks helps uncover the underlying principles of cellular organization and function <cit.>. Metabolic networks <cit.> map out the biochemical reactions occurring within an organism, detailing how individual metabolites are synthesized, degraded, and interconverted <cit.>. These networks are either composed of nodes representing metabolites and edges indicating the enzymatic reactions facilitating the transformation from one metabolite to another or bipartite networks where nodes are chemical species on one side and reactions on the other. In this latter representation, the web of metabolic interactions is more intricately woven, while in the former it is more straightforward. Beyond individual reactions, these networks highlight the interconnected nature of metabolic pathways, revealing redundancies, feedback loops, and regulatory mechanisms that maintain cellular homeostasis. Intracellular networks have time-evolving states that describe which genes are active, which proteins are present (or phosphorylated, oxidized, ubiquitinated, etc.), the concentrations of metabolites, and so on. State evolution is often studied using ODE models, which can be fit to match experimental state and kinetic data <cit.>. In many cases, the available data is insufficient to fully constrain the parameters of an ODE model, or, which is often the case, the underlying biological dynamics is of a threshold nature <cit.>. In these cases, a discrete causal model, such as Boolean networks (or multistate automata networks more generally), may be appropriate <cit.>. In a Boolean network, the state of each node in the intracellular network is binarized: a gene is either active or inactive, and the active form of a protein is either above some unspecified threshold of abundance or below it. The binarized states change in time according to logical (Boolean) update functions, i.e., each network node is an automaton <cit.>. The causal effect of various interventions (e.g. drugs) can be evaluated by manipulating the states of individual nodes and observing the resulting dynamics. Since Boolean automata can be grouped to model variables with more than two states, the approach is widely applicable to model cellular components with various levels of activation, e.g. proportion of cells that enter apoptosis in breast cancer cell lines <cit.>. A common application of these models is to study the effects of combinatorial drug interventions, particularly in the context of cancer <cit.>. To serve as a component of a digital twin, Boolean networks must reconcile their discrete time steps with physical time. 
This is often done by updating node states asynchronously according to tunable node transition rates, essentially treating the dynamics as a continuous Markov process <cit.>. This approach has been applied, for example, to suggest personalized drug therapies for prostate cancer patients using personalized Boolean network models <cit.>. This discrete dynamics approach has also been used to infer important dynamical pathways in multilayer networks, tying molecular factors (from multiomics, brain and retinal imaging data) to clinical phenotype (from patient data) in multiple sclerosis <cit.>. This is an exciting avenue that allows complex regulatory dynamics to be studied on static multilayer networks obtained from heterogeneous data sources, whereby each node can integrate incoming signals differently—going well beyond the typical analysis via spreading or information dynamics on networks. It is important to note that automata network models can typically be greatly simplified by reducing dynamically redundant interactions <cit.>, due to the ubiquity of canalized dynamics in biology <cit.>. This results in scalable causal models capable of uncovering actionable interventions, conditioned on different input assumptions, in a transparent manner <cit.>. Boolean networks are especially amenable to causal analysis because they can be converted to simplified causal representations (according to Boolean minimization criteria) <cit.>, standing in stark contrast to the black-box predictions of traditional machine learning methods or to tallying the outputs of Monte Carlo simulations of large dynamical models (including non-simplified Boolean networks). Thus, automata network models—whose parameters can be inferred and validated from perturbation experiments, multiomics, and exposome data—are ideal components to consider for the top level of digital twins, as they synthesize the large-scale underlying data into simplified, explainable, causal networks amenable to the investigation of actionable interventions. Indeed, these features show how this modeling approach directly responds to the needs of the digital twin approach identified in the introduction: dimensionality reduction, scalable modularity, and transparency. Whether discrete or continuous, the dynamics of intracellular networks can be coupled with each other and with physical processes to produce whole-cell models, which attempt to describe the whole genome, proteome, and metabolome of a cell over the course of its life cycle in a fine-grained dynamical model <cit.>, as was first demonstrated in the human pathogen Mycoplasma genitalium <cit.>. More recent efforts have been focused on identifying minimal genomes <cit.> or modeling organisms with larger genomes, such as E. coli <cit.>. Currently, the biomedical application of such detailed models is limited by the enormous effort required to construct them. Fortunately, it is often the case that only specific processes need to be incorporated to build a medically relevant digital twin. Narrowing the focus of the model at the cellular level makes model construction and personalization more feasible, lowers computational barriers, and facilitates embedding these models into multicellular models, e.g. <cit.>.
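To make the automata-network ingredient discussed above concrete, here is a minimal illustrative sketch (a toy three-node regulatory motif and made-up transition rates of my own choosing, not a model from the cited studies) of asynchronous Boolean dynamics simulated as a continuous-time Markov process:

```python
import random

# Toy regulatory motif (illustrative only): update rules map current states to targets.
rules = {
    "A": lambda s: s["C"],                 # A is activated by C
    "B": lambda s: s["A"] and not s["C"],  # B requires A and is repressed by C
    "C": lambda s: not s["B"],             # C is repressed by B
}
rates = {"A": 1.0, "B": 0.5, "C": 2.0}     # node transition rates (assumed values)

def simulate(state, t_max=20.0, seed=0):
    """Gillespie-style asynchronous updates: only nodes whose rule disagrees with
    their current state are eligible to flip, each at its own transition rate."""
    rng = random.Random(seed)
    t, trajectory = 0.0, [(0.0, dict(state))]
    while t < t_max:
        eligible = {n: r for n, r in rates.items() if rules[n](state) != state[n]}
        if not eligible:                   # fixed point (attractor) reached
            break
        t += rng.expovariate(sum(eligible.values()))          # exponential waiting time
        node = rng.choices(list(eligible), weights=list(eligible.values()))[0]
        state[node] = not state[node]
        trajectory.append((t, dict(state)))
    return trajectory

for time, s in simulate({"A": True, "B": False, "C": False}):
    print(f"t = {time:5.2f}  {s}")
```

In an actual digital twin, the update rules and rates would be inferred and validated from perturbation, multiomics and exposome data rather than written by hand; the same event-driven scheme also provides a natural per-cell building block for the single-cell and multicellular descriptions considered next.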
An interesting focus is provided by single-cell data analysis. Indeed, tissues are complex multi-agent systems made of multiple subpopulations of cells that, even when of the same type, exhibit different system states and expression profiles; these cells are spatially and temporally organized, able to communicate and interact with each other, and able to orchestrate self-assembly and response to stimuli as a whole. This is fundamental in many biological contexts, such as early embryonic development and tumor etiology, where different cells are characterized by distinctive genetic mutations and/or expression profiles. These differences are regulated by cell-to-cell communication and underlie complex dynamic responses characterizing healthy and pathological tissue development <cit.>. An example of how the interaction between cells can be modelled to describe emergent behaviors is provided by agent-based models applied to study the interaction dynamics between immune and tumor cells in human cancer. Coupling a discrete agent-based model with a continuous partial differential equations-based model, these models capture essential components of the tumor microenvironment and are able to reproduce its main characteristics. Each tumor is characterized by a specific and unique tumor microenvironment, emphasizing the need for specialized and personalized studies of each cancer scenario. Recently, a model of colon cancer has been proposed that can be informed with patient transcriptomic data <cit.>. It would be interesting to extend the model by informing it through methods that infer cellular communication <cit.>. This would have the advantage of characterizing the tumor environment more specifically by defining the probability of an agent's action in response to received communication. At the tissue scale, neural, cardiovascular, and respiratory systems are a few examples of systems that need to be considered. However, system dysfunctions, such as cancer, should also be incorporated. Several studies have focused on examining neural connections within an organism's brain, commonly referred to as the connectome <cit.>. Whether investigating the intricate networks within the human brain or the simpler wiring maps of organisms like C. elegans or Drosophila melanogaster, the objective remains consistent: elucidating the interconnections and organization of neurons and regions at the mesoscale. Functional imaging techniques are utilized to explore the relationships among activities in specific brain areas <cit.>. The analytical and computational tools from network theory make it possible to build maps of structural and functional connections, revealing characteristics of complex networks—such as small-world topology, highly connected hubs, and modularity—that manifest at both the whole-brain and cellular scales <cit.>.
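As a minimal illustration of these network diagnostics (computed here on a synthetic small-world graph standing in for a connectome, not on real imaging data), one could write:

```python
import networkx as nx
from networkx.algorithms import community

# Synthetic stand-in for a connectome: a small-world (Watts-Strogatz) graph.
G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=42)

print("average clustering:", nx.average_clustering(G))
print("average shortest path length:", nx.average_shortest_path_length(G))

hubs = sorted(dict(G.degree).items(), key=lambda kv: kv[1], reverse=True)[:5]
print("highest-degree nodes (candidate hubs):", hubs)

modules = community.greedy_modularity_communities(G)
print("number of modules:", len(modules), "modularity:", community.modularity(G, modules))
```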
The human brain is an emblematic case study for the design of ambitious computational models towards digital twins. Nevertheless, despite the aforementioned significant advancements, the explicit goal of building a realistic computer simulation of the brain within a few years has not met expectations <cit.>. Overall, the aforementioned sub-systems are part of a broader complex, adaptive, interdependent system of systems, organized in hierarchies of increasing complexity with modular organization <cit.>. This has been well recognized for at least half a century, as summarized in Jacob's statement that “every object that biology studies is a system of systems” <cit.>. Such sub-systems exchange information (e.g., in terms of electrical, chemical and electrochemical signals) to regulate each other and operate out of equilibrium <cit.>. Consequently, considering any sub-system in isolation from the other ones provides an incomplete representation of each sub-system, leading to inaccurate models and predictions of biological processes. A partial solution to this problem comes from the statistical physics of multilayer systems, allowing one to describe each scale by a level of organization, whereas each level (i.e., at the same scale) can be characterized by multiple contexts, namely layers <cit.>. Levels can be interdependent with each other <cit.> while being characterized by different contexts. In the case of biological systems <cit.>, this is reflected in the distinct types of interactions among the same set of biomolecules or the distinct channels available for cell-cell communication, as well as in the interdependence between distinct systems such as the cardiovascular and nervous ones (Fig. <ref>). This web of interconnections and interdependencies, involving diverse and heterogeneous functional biological units across scales, plays a pivotal role in human health, and it is plausible to associate its dysfunction with disease states <cit.>.

§ CHALLENGES IN MULTISCALE MODELING

Multiscale modeling of biological systems presents formidable challenges, primarily due to the intricate and redundant networks of interactions and interdependent processes taking place and unfolding across different scales, from molecular (microscopic) to organismic (macroscopic) levels. These systems are characterized by dynamic processes that operate far from equilibrium, exchanging various types of signals – e.g., chemical, electrochemical, and more – thereby creating a complex ecosystem of interlinked dynamical processes <cit.>. Such complexity poses significant difficulties in developing models that are both meaningful and coherent, avoiding extremes like reductionism, which assumes that sufficient computational power can simulate an entire organism, or oversimplification, which relies excessively on abundant data to sidestep the need for intricate modeling. Moreover, biological systems are inherently adaptive, adjusting dynamically to environmental changes <cit.>. This adaptiveness is crucial for accurately simulating the impact of external factors such as therapeutic interventions or changes in environmental conditions like pollution <cit.> or alterations in food sources <cit.>. Responses to these changes start at the cellular level, influencing gene and protein expressions (or the lack thereof, which can even trigger the onset of cancer <cit.>), and extend to higher biological structures through complex signaling pathways involving ligands and receptors. Such adaptive complexity must be integrated into models to accurately reflect the biological response to external stimuli within the spatial and temporal scale of interest. In the broader context of precision medicine, integrating digital twins that reflect these multiscale and adaptive features poses additional challenges. The models often employed are predominantly phenomenological, focusing on observed phenomena rather than the underlying mechanisms, an approach that results in a significant gap in our mechanistic understanding, which is essential for bridging various biological scales effectively.
Drawing again a parallel with digital twins of urban systems might offer new perspectives and strategies: cities face similar multiscale integration challenges <cit.> and require a similar framework for addressing the complex interplay of different components within a living system, potentially guiding the development of more effective biomedical models. By critically analyzing these challenges through the lens of complexity science, we can better understand and possibly overcome the hurdles in creating cohesive and predictive multiscale models that are crucial for the future of biomedical research and therapeutic development. In the case of interconnected systems at a given scale, one can introduce a suitable object, the multilayer adjacency tensor M^iα_jβ(t), to operationally encode all the interactions at time t between a biological unit i (e.g., a single protein or a protein complex) in a layer α (e.g., a class of biological processes or a pathway) and another biological unit j (e.g., another protein or a metabolite). The framework is general enough to also include cross-layer structural interactions, if any. In fact, due to the high number of interacting units (such as biomolecules, cells, etc), biological modeling often assumes deterministic processes, for example that reactions occur at constant rates, that compartmental interactions are fully mixed, or that mean-field approximations apply. Therefore, to a good level of approximation, the dynamics of some quantity 𝐱(t) of interest – e.g., the concentration of metabolites or the population of some species (e.g., cancer cells, bacteria, etc) – might be described by multilayer differential equations <cit.> like
∂ x_jβ(t)/∂ t = f_jβ(x_jβ,t) + ∑_i∑_α g_jβ[M^iα_jβ(t), x_iα(t), x_jβ(t), t],
where f_jβ(·) is a function only of the variable x_jβ(t) corresponding to a specific unit j in a specific context or layer β, whereas g_jβ(·) is a function accounting for the interactions between pairs of units, i.e., for the effects due to the intervening networks. It is remarkable how such a simplified deterministic framework makes it possible to model responses to clinical treatment triggered by basic chemical reactions, as well as pH levels, the actions that cells can be triggered to take, and even the production of proteins that can be stimulated by acting on specific parts of DNA or on specific mRNA targets <cit.>.
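As an illustration only – with an arbitrary two-layer toy network, a linear diffusive coupling as my assumed form of g, and made-up rates – a dynamics of the type of Eq. (<ref>) can be integrated numerically as follows:

```python
import numpy as np
from scipy.integrate import solve_ivp

n_units, n_layers = 4, 2
rng = np.random.default_rng(1)

# Toy multilayer adjacency tensor M[i, alpha, j, beta]: sparse random positive couplings.
shape = (n_units, n_layers, n_units, n_layers)
M = rng.random(shape) * (rng.random(shape) < 0.3)

decay = 0.5   # assumed local dynamics f(x) = -decay * x (e.g., degradation)

def rhs(t, x_flat):
    x = x_flat.reshape(n_units, n_layers)            # x[j, beta]
    local = -decay * x                               # f_{j beta}(x_{j beta})
    # assumed pairwise term g = M * (x_{i alpha} - x_{j beta}), summed over i and alpha
    coupling = np.einsum("iajb,ia->jb", M, x) - np.einsum("iajb->jb", M) * x
    return (local + coupling).ravel()

x0 = rng.random(n_units * n_layers)
sol = solve_ivp(rhs, (0.0, 10.0), x0, dense_output=True)
print(sol.y[:, -1].reshape(n_units, n_layers))       # state of each unit in each layer at t = 10
```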
In the light of these simple arguments, it might be tempting to rely only on such deterministic approaches – based on sets of differential equations, such as Eq. (<ref>), or on agent-based modeling – to predict the behavior of a therapeutic intervention. After all, if we have systematic cause-effect relations linking interventions to biological and clinical outcomes, it would be enough to calibrate our models on the specific features of a patient to determine their response to treatments and potentially cure a disease. However, in complex and variable environments such as a living organism, adaptiveness, randomness and biological noise might affect the model outcomes. Adaptiveness can still be reflected by such simplified models: if we indicate with u_jβ(t) some external input signal or control applied to a biological system and with Θ the set of parameters that dynamically change based on the system's states or external inputs, then a more general model at a given scale could be formalized as
∂ x_jβ(t)/∂ t = f_jβ(x_jβ,t) + ∑_i∑_α g_jβ[M^iα_jβ(t), x_iα(t), x_jβ(t), Θ, u_jβ(t), t]
∂ M^iα_jβ(t)/∂ t = ℓ(M^iα_jβ(t), Θ, u_jβ(t), t)
∂Θ(t)/∂ t = h(Θ, u_jβ(t), t)
which is much more complicated than Eq. (<ref>) but can still be managed from a computational point of view. Noise can be inherent to one or more aspects of the involved systems – e.g., biochemical and electrochemical variability – or linked to specific mechanisms altered by internal or external perturbations, such as virus-host interactions or environmental changes. Accordingly, whether to include the effects of noise depends on the scale and impact of the biological process being modeled. For instance, including DNA replication errors in the analysis of short-term effects of a therapeutic drug might not add relevant biological or clinical insight, while adding complexity to the model. Another emblematic case is the use of discretized structures, such as networks, to model processes that are manifestly continuous (e.g., in space): in such conditions, using complex networks will introduce a level of sophistication that is not necessary to gain insights about a biological process. These noise sources introduce an additional level of stochasticity that cannot be easily taken into account by statistical models, even the most sophisticated ones based on machine learning. Nevertheless, what is usually assumed to be a bug might be a feature: as for other complex systems in nature, stochasticity is indeed structured and can lead to self-organized behaviors and processes <cit.>. The theory of nonlinear dynamical systems and the statistical physics of complex networks provide suitable theoretical and computational frameworks to model such complex biological phenomena <cit.>, and should be considered essential ingredients in the design of reliable digital twins, either specialized or not, for any living organism. Nevertheless, the most important obstacle to describing realistic biological systems lies in incorporating multiple dynamic processes across the multiple intervening scales, primarily due to the diverse nature of the laws governing these processes at each scale. One significant technical challenge is effectively bridging these scales. This involves not just scaling up or down the processes, but also ensuring that interactions between scales are accurately captured. This might involve developing intermediate models or using scale-bridging techniques like homogenization or coarse-graining, which themselves can introduce approximation errors or require simplifications that might affect model accuracy. While some models are based on fundamental laws – such as reaction-diffusion processes for chemical networks – other models are genuinely phenomenological: reconciling dynamics of such different natures is challenging, since the latter class of models might not be suitable to capture novel phenomenology. This problem can only be partially solved by developing more fundamental models, since biological processes are characterized by emergent phenomena that cannot be directly deduced even from having full knowledge about their units and their interactions <cit.>.
To this aim, one needs to account, simultaneously, for the evolution of the system according to dynamics similar to the one in Eq. (<ref>) and the fact that the underlying mechanisms can change while satisfying the constraints imposed by physics and chemistry, requiring meta-dynamical models <cit.>. Additionally, multiscale models often require extensive parameterization, which can be difficult when experimental data are scarce at certain scales: therefore, validating these models across all scales can be exceptionally challenging, especially when direct observations or experiments at certain scales are not feasible or provide, at best, indirect measurements (such as correlations) about the phenomenon of interest that require an adequate inferential framework <cit.>. Furthermore, such models should be able to propagate perturbations from one scale to another to realistically mimic the behavior of a living organism. As previously discussed, the possibility that a perturbation at the lowest scale (e.g., a random mutation or a mRNA intervention) can alter biological processes at larger scales is a mandatory feature for any reliable design of a digital twin. § DISCUSSION AND OUTLOOKS Innovative approaches for model integration within digital twins have a huge transformative potential for applications to precision medicine, enabling a synergy between generative modeling, advanced AI and machine learning techniques, and traditional biomedical insights. The fusion of these techniques, rather than the choice of a specific one, is expected to facilitate the development of new frameworks for multiscale modeling, which are pivotal in capturing the intricate dynamics of pathogenesis in humans. Through these frameworks, the overarching goal is to resolve the previously identified challenges, significantly enhancing the accuracy and clinical relevance of digital twins beyond inductive modelling via advanced statistics. On the one hand, the integration of mechanistic models into digital twins also addresses the challenges of parameter indeterminacy and overfitting, which are prevalent in systems characterized by vast parameter spaces. By constraining these spaces, e.g. via the coarse-grained dynamics afforded by multiscale automata network models that synthesize large-scale data about biological mechanisms, digital twins not only gain in robustness and explainability but also offer a more reliable foundation for the simulation of therapeutic outcomes, thereby increasing their utility in clinical practice. On the other hand, it is also worth discussing what is missing in current technologies and techniques developed for the same aim. For instance, a critical advantage of digital twins over state-of-the-art non-computational models, such as organoids <cit.>, is their capability to simulate complex, interdependent processes across multiple biological scales effectively, while providing explanatory and causal understanding and control at relatively small costs. While organoids can be engineered with all the power of modern synthetic biology <cit.> to recapitulate features of the function and responses of complex biological mechanisms of the corresponding in vivo target, they have important limitations. Certainly, reproducibility is a major bottleneck <cit.>, where digital twins can excel, especially if built under an open-source framework. Additionally, organoids do not yet capture the entire physiological repertoire of cell types, even the behavior that is relevant for a particular disease. 
This means, for instance, that the response to drugs or other interventions need to be studied for organoids per se, separately from the in vivo target. Related to this problem, is the relatively limited range of heterogeneity in response, which one needs to develop true personalized twins <cit.>. Finally, while organoids are more direct analogues of biomolecular mechanisms, they cannot incorporate simultaneously the multiple scales and historical information about patients, including the microbiome and exposome, which are major factors in complex diseases such as cancer, depression and many chronic diseases. This is where the comprehensive multiscale network- and data-driven digital twin approach is particularly crucial. Many complex diseases unfold across various multiomic sub-systems and exposome history. Modular computational architectures that can synthesize and integrate multiple subsystems as separate network layers or agent-based models are well within the realm of possibility. They might require a robust non-specialized digital twin, effectively integrating different specialized ones, to accommodate complex interactions and interdependency of biological and exposome processes. while non-specialized may not allow individual patient precision, such approach could still increase precision to specific cohorts rather than the whole population. For diseases with more circumscribed features, however, specialized digital twins might offer a targeted and streamlined alternative, allowing for precise intervention strategies and outcome predictions that might perform as well or better than those based on organoids. Another remarkable advantage of digital twins is that they allow a scenario-based modeling approach for actionable interventions – akin to strategies routinely used in epidemic modeling for policy decision-making <cit.>– that enhances their applicability and safety in clinical settings. This method avoids the standard pitfalls of an oracle-like predictive model by allowing for the exploration – via direct simulation – of multiple clinical scenarios, thereby providing a robust tool for decision support in personalized medicine. The integration with massive data sets about disease or treatment progressions, providing a reliable statistical samples that can be stratified to approximate the characterizing features of a patient, will be crucial to validate the output of models. Therefore, the expected output of such a machinery, model-informed and data-driven, would not just be a yes/no decision about the adoption of a therapeutic strategy or an intervention, but a whole spectrum of alternatives where advantages and disadvantages in adopting each plausible strategy are outlined to inform human decision-making. naturemag Author contributions. M.D.D. designed the research. M.D.D., L.A., G.C, V.dA., B.dC., L.R., J.R, R.S and F.Z wrote the manuscript. Competing interests. The authors declare no competing interests. Correspondence. Correspondence should be sent to <manlio.dedomenico@unipd.it>
http://arxiv.org/abs/2405.10054v1
20240516124236
A finite-sample generalization bound for stable LPV systems
[ "Daniel Racz", "Martin Gonzalez", "Mihaly Petreczky", "Andras Benczur", "Balint Daroczy" ]
cs.LG
[ "cs.LG", "cs.SY", "eess.SY", "68", "I.2.0" ]
One of the main theoretical challenges in learning dynamical systems from data is providing upper bounds on the generalization error, that is, the difference between the expected prediction error and the empirical prediction error measured on some finite sample. In machine learning, a popular class of such bounds is the so-called Probably Approximately Correct (PAC) bounds. In this paper, we derive a PAC bound for stable continuous-time linear parameter-varying (LPV) systems. Our bound depends on the H_2 norm of the chosen class of the LPV systems, but does not depend on the time interval for which the signals are considered. § INTRODUCTION LPV <cit.> systems are a popular class of dynamical systems in system identification and control theory <cit.>. Generally, LPV systems are linear in state, input and output signals, but the coefficients of these linear relationships depend on the scheduling variables. These systems are popular due to their ability to model highly non-linear phenomena while allowing much simpler theoretical analysis. In this work, we consider continuous-time LPV systems in state-space form, where the system matrices are affine functions of the scheduling variables. Contribution. We investigate the properties of stable LPV models from the statistical learning point of view and establish a generalization bound. The proof is a novel combination of ideas from control and statistical learning. To put our result into perspective, our starting point is a parametrized family of LPV systems. Furthermore, we consider datasets consisting of triplets of signals representing the input, the scheduling signal and the corresponding output, all defined on the finite time interval [0,T]. The triplets are assumed to be sampled from an unknown distribution independently. For example, the data could be generated by an unknown, black-box LPV system, although we are agnostic regarding the origin of the dataset. In this context, the learning task refers to finding a concrete element of the parameterized family of LPV systems, which predicts as accurately as possible the true output for each input and scheduling variable coming from a dataset used for learning. However, the expected performance of a particular model w.r.t. the underlying distribution may significantly differ from the performance measured on either the training or any other finite dataset sampled from the same distribution. In statistical learning, the field of generalization theory analyzes these performance differences.
Besides providing consistency guarantees for learning algorithms, this analysis is useful for many aspects of the learning process, such as model selection, validation or regularization. In this paper, we focus on a particular bound, referred to as PAC bound in the literature <cit.>, involving a model's expected performance and its performance on a finite sample via bounding their difference. Concretely, it is based on upper bounding the Rademacher complexity of the model family on the same finite sample. This type of bounds is one of the standard ways of establishing bounds on the generalization error of a model family and the bounds are uniform for all models belonging to the given parametrized family <cit.>. Our main contribution is an above mentioned PAC bound for LPV systems, which, to our knowledge, has been missing from the literature so far. The derived bound is of the magnitude O(c/√(N)), where c is a constant that depends on the maximal H_2 norm of the elements of the model family, but not on the length of the time interval [0,T]. Here, N is the cardinality of an arbitrary finite sample used to evaluate the performance of the model. Consequently, the difference between the expected performance and the performance measured on the finite sample converges to zero as N grows to infinity. Motivation: Control Theory. While control and system identification of LPV systems are well established, the existing theoretical guarantees for learning algorithms of LPV systems from the system identification literature tend to focus on the discrete-time case and they usually provide asymptotic guarantees only <cit.>. To the best of our knowledge, there are no finite-sample bounds for learning continuous-time LPV systems. The results of this paper are a first step towards that goal. To this end, we introduce a number of simplifying assumptions, such as independently sampled signals instead of a single long time series and no sampling time. Despite these assumptions, the problem remains non-trivial. Motivation: Machine Learning. The machine learning literature is rich in the use of continuous-time dynamical systems, e.g. continuous-time Recurrent Neural Networks (RNNs, <cit.>), Neural Controlled Ordinary Differential Equations (NCDEs, <cit.>) or structured State-Space Models (SSMs, <cit.>). These models often perform well compared to the state-of-the-art architectures, while exhibiting a higher level of robustness. Since LPV systems include bilinear systems <cit.> as a special case, in principle they could be used as universal approximators for sufficiently smooth, non-linear, continuous-time dynamical systems <cit.>, including important subclasses of RNNs and NCDEs. Finally, LPV systems include linear state-space models, which are crucial ingredients of SSMs, and subclasses of RNNs, hence PAC bounds for LPV systems are expected to be useful for PAC bounds for NCDEs, RNNs and SSMs. Related work. PAC bounds for discrete-time linear systems were explored in system identification in <cit.>, but not for LPV systems in state-space form and continuous-time. There are several PAC bounds available for continuous-time RNNs <cit.> which use Vapnik-Chervonenkis (VC) dimension. These bounds do not require stability, but are exponential in the integration time T. In <cit.> bounds for discrete-time RNNs were obtained via estimating the Rademacher complexity, but the bounds grow at least linearly with the number of time-steps. 
In <cit.> PAC bounds for NCDEs were derived using Rademacher complexity as well, but the bounds grow exponentially with the integration interval. The closest result to our work is <cit.>, which considers input-affine non-linear systems and proposes a PAC bound based on Rademacher complexity. Again, as stability was not taken into account, the bound is exponential in the length of the integration interval. There is a rich literature on finite-sample bounds for learning discrete-time dynamical systems from time-series, e.g., <cit.>, but those papers consider different learning problems. Significance and novelty. The main novelty is that our error bound does not depend on the length of the integration interval T. This improvement over existing bounds, which grow with T, requires assuming a particular version of quadratic stability implying a finite H_2 norm. The result is important as dynamical systems are often used for making long-term predictions, in which case T tends to be large and the error bound becomes vacuous. Another technical advancement compared to <cit.> is the use of Volterra kernels instead of Fliess-series expansion to estimate the Rademacher complexity and to represent the output responses as scalar products in a particular Hilbert space. Structure of the paper. Next, in Section <ref> we set up notation and definitions related to the LPV system, then in Section <ref> we define the problem of generalization in the case of LPV systems when we can only measure the error of the system on a finite set of sequences. The hypothesis class in our case is a particular set of LPV systems, which meet the stability and other crucial conditions for having a time-invariant bound. We define these conditions in Section <ref>. We state our time-independent bound in Section <ref> for the generalization gap, the difference between the true error and the error measured on the finite set of sequences, based on the Rademacher complexity of the hypothesis set. We prove our bound in Section <ref> by showing that under our assumptions the LPV system has a finite H_2 norm, and thus we may upper bound the Rademacher complexity of the system. Finally, in Section <ref> we consider systems and data where we may estimate the elements of the bound and show that the bound in these cases is meaningful. § LPV SYSTEMS In this paper, we consider LPV state-space representations with affine dependence on the parameters (LPV-SSA), i.e. systems of the form Σẋ(t) = A(p(t)) x(t) + B(p(t))u(t),   x(0)=0 y_Σ(t)=C(p(t)) x(t) where x(t) ∈ℝ^n_x is the state vector, u(t) ∈ℝ^n_in is the input and y_Σ(t) ∈ℝ^n_out is the output of the system for all t ∈ [0, T], and n_x, n_in, n_out∈ℕ^+. The vector p(t) = (p_1(t), …, p_n_p(t))^T ∈ℙ⊆ℝ^n_p is the scheduling variable for n_p∈ℕ^+. The matrices of the system Σ are assumed to depend on p(t) affinely, i.e. A(p(t)) = A_0 + ∑_i=1^n_pp_i(t)A_i, B(p(t)) = B_0 + ∑_i=1^n_pp_i(t)B_i and C(p(t)) = C_0 + ∑_i=1^n_pp_i(t)C_i for matrices A_i ∈ℝ^n_x× n_x, B_i ∈ℝ^n_x× n_in and C_i ∈ℝ^n_out× n_x, i=0,…,n_p, which do not depend on time or the scheduling signal. We identify the LPV-SSA Σ with the tuple Σ=(A_i,B_i,C_i)_i=0^n_p. A solution of Σ refers to the tuple of functions (u, p, x, y_Σ), all defined on [0,T], such that x is absolutely continuous, y_Σ, u and p are piecewise continuous and they satisfy (<ref>). Note that the output y_Σ is uniquely determined by u and p, since the initial state x(0) is set to zero. To emphasize this dependence, we denote y_Σ by y_Σ(u,p).
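As an illustration of the LPV-SSA class defined above (not taken from the paper), the following minimal Python sketch builds A(p(t)), B(p(t)), C(p(t)) from user-supplied coefficient matrices and integrates the state equation with a simple forward-Euler scheme; the dimensions, matrices, input and scheduling signals in the example are illustrative assumptions.

```python
import numpy as np

def lpv_output(A, B, C, u, p, T=1.0, dt=1e-3):
    """Simulate an LPV-SSA system x' = A(p)x + B(p)u, y = C(p)x with x(0) = 0.

    A, B, C are lists [M_0, ..., M_np] of coefficient matrices, so that
    M(p) = M_0 + sum_i p_i * M_i (affine dependence on the scheduling variable).
    u(t) and p(t) are callables returning the input and scheduling values.
    Returns the time grid and the output trajectory y on {0, dt, ..., T}.
    """
    n_x = A[0].shape[0]
    x = np.zeros(n_x)
    ts = np.arange(0.0, T + dt, dt)
    ys = []
    for t in ts:
        pt, ut = np.atleast_1d(p(t)), np.atleast_1d(u(t))
        Ap = A[0] + sum(pi * Ai for pi, Ai in zip(pt, A[1:]))
        Bp = B[0] + sum(pi * Bi for pi, Bi in zip(pt, B[1:]))
        Cp = C[0] + sum(pi * Ci for pi, Ci in zip(pt, C[1:]))
        ys.append(Cp @ x)
        x = x + dt * (Ap @ x + Bp @ ut)   # forward Euler step
    return ts, np.array(ys)

if __name__ == "__main__":
    # Illustrative example with n_x = 2, n_in = 1, n_out = 1, n_p = 1.
    A = [np.array([[-2.0, 1.0], [0.0, -3.0]]), np.array([[0.0, 0.5], [-0.5, 0.0]])]
    B = [np.array([[1.0], [0.0]]), np.zeros((2, 1))]
    C = [np.array([[0.0, 1.0]]), np.zeros((1, 2))]
    ts, ys = lpv_output(A, B, C, u=lambda t: np.sin(t), p=lambda t: np.cos(2 * t))
    print("y(T) =", ys[-1])
```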
Thus, the scheduling signal p(t) behaves as an external input, too. For the sake of compactness we make a series of simplifications. First, as already stated, the initial state is set to zero. This is not a real restriction, as we consider stable systems for which the contribution of the non-zero initial state decays exponentially. Second, we work with systems with scalar output, i.e. let n_out = 1. Third, we assume that the scheduling variables take values in ℙ⊆ [-1,1]^n_p. This is a standard assumption in the literature <cit.> and it can always be achieved by an affine transformation, if the scheduling variables take values in a suitable interval. § LEARNING PROBLEM AND GENERALIZATION We now define the learning problem for LPV systems along the lines of classical statistical learning theory <cit.>. For this purpose, let us fix a time interval [0,T] and a set ℰ of LPV systems of the form (<ref>). Let 𝒰, 𝒫 and 𝒴 be sets of piecewise continuous functions defined on [0, T] and taking values in ℝ^n_in, ℙ and ℝ^n_out respectively. Hereinafter we use the standard terminology of probability theory <cit.>. Consider the probability space (𝒰×𝒫×𝒴, ℬ,𝒟), where ℬ is a suitable σ-algebra and 𝒟 is a probability measure on ℬ. For example, ℬ could be the direct product of the standard cylindrical Borel σ-algebras defined on the function spaces 𝒰,𝒫 and 𝒴. Let us denote by 𝒟^N the N-fold product measure of 𝒟 with itself. We use E_(u,p,y) ∼𝒟, P_(u,p,y) ∼𝒟, E_S ∼𝒟^N and P_S ∼𝒟^N to denote expectations and probabilities w.r.t. the measures 𝒟 and 𝒟^N respectively. The notation S ∼𝒟^N tacitly assumes that S ∈ (𝒰×𝒫×𝒴)^N, i.e. S is made of N triplets of input, scheduling and output trajectories. Intuitively, we think of S as a dataset of size N drawn randomly and independently from the distribution 𝒟. Consider a loss function ℓ: ℝ×ℝ→ [0,+∞), which measures the discrepancy between two possible output values. Some of the widespread choices are ℓ(a,b)=|a-b| or ℓ(a,b)=(a-b)^2. The learning objective is to find an LPV system Σ∈ℰ such that the true risk at time T, defined as ℒ(Σ) = 𝔼_(u, p, y) ∼𝒟[ℓ( y_Σ(u,p)(T),y(T))] is as small as possible. Since the distribution 𝒟 is unknown, minimizing the true risk is impossible. Therefore, the true risk is approximated by the empirical risk at time T w.r.t. a dataset S = {(u_i, p_i, y_i) }_1 ≤ i ≤ N, defined as ℒ_N^S(Σ) = 1/N∑_i = 1^Nℓ(y_Σ(u_i, p_i)(T),y_i(T)). In practice, selecting an appropriate model is done by minimizing the empirical risk w.r.t. a so-called training dataset, while the trained model is usually evaluated by computing the empirical risk w.r.t. a separate test dataset. In both cases, we need a bound on the difference between the true and the empirical risk. Motivated by this argument, in this paper we prove a PAC bound, i.e. a probabilistic bound on the generalization gap, defined as sup_Σ∈ℰ( ℒ(Σ) - ℒ_N^S(Σ)). In the data, the scheduling signal may depend on u or y, and the data may be generated by a quasi-LPV system. However, for the models to be learnt the scheduling signal acts as an input. This is consistent with the assumptions in system identification for LPV systems <cit.>. In contrast to system identification, here we assume the availability of independently sampled continuous-time signals. While this is not always realistic, it is still applicable in many scenarios, for instance, when training NCDEs <cit.> or structured state-space models <cit.>.
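To make the empirical risk and the generalization gap concrete, here is a small sketch under illustrative assumptions: `model(u, p)` stands in for the prediction y_Σ(u,p)(T), the data-generating process is a toy one, and the "true" risk is approximated by a much larger Monte Carlo sample from the same distribution.

```python
import numpy as np

def empirical_risk(model, sample, loss=lambda a, b: abs(a - b)):
    """L_N^S(Sigma): average loss of the model's prediction at time T over a
    finite sample S = [(u_1, p_1, y_1(T)), ..., (u_N, p_N, y_N(T))]."""
    errs = [loss(model(u, p), yT) for (u, p, yT) in sample]
    return float(np.mean(errs))

def generalization_gap(model, train_sample, big_sample):
    """Proxy for L(Sigma) - L_N^S(Sigma): the true risk is approximated by a
    large Monte Carlo sample drawn from the same distribution."""
    return empirical_risk(model, big_sample) - empirical_risk(model, train_sample)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data-generating process (illustrative): y(T) = 0.8 * u_mean + noise.
    def draw(n):
        u_means = rng.normal(size=n)
        return [(um, None, 0.8 * um + 0.05 * rng.normal()) for um in u_means]
    model = lambda u, p: 0.8 * u          # stand-in for y_Sigma(u, p)(T)
    S_train, S_big = draw(50), draw(50_000)
    print("empirical risk:", empirical_risk(model, S_train))
    print("estimated gap :", generalization_gap(model, S_train, S_big))
```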
Furthermore, deriving PAC bounds in this setting could be the first step towards deriving similar bounds for the single long time series case. § TECHNICAL PRELIMINARIES AND ASSUMPTIONS We start by presenting a Volterra-series representation of the output of an LPV system of the form (<ref>), which plays a central role in formulating and proving the main result. To this end, we introduce the following notation. Notation (Iterated integrals). Let Δ_k^t = {(τ_k,…,τ_1) | t ≥τ_k ≥…≥τ_1 ≥ 0 }⊂ℝ^k for any positive integer k. Moreover, let Δ_k^∞ = {(τ_k,…,τ_1) |τ_k ≥…≥τ_1 ≥ 0 }. Clearly, Δ_k^t⊆Δ_k^∞ for all t ∈ [0,+∞). For t ∈ℝ and τ = (τ_k,…,τ_1) we use the notation (t, τ) = (t, τ_k, …, τ_1). In addition, we use the following notation for iterated integrals (for any function f for which the integrals are well defined), for t ∈ [0,+∞] ∫_Δ_k^t f(τ) d τ≡∫_0^t∫_0^τ_k⋯∫_0^τ_2 f(τ_k,…,τ_1) d τ_1 ⋯ d τ_k. Let [n] = {1,…,n} and [n]_0 = {0,…,n} for all n ∈ℕ. Let I_k = [n_p]^k be the set of multi-indices of length k, so an element I of I_k is a tuple of the form I=(i_1,…,i_k). By slight abuse of notation, let I_0 be the singleton set {∅}. For a system Σ of the form (<ref>), for every I ∈ I_k, t ∈ [0,+∞), i_q,i_r ∈ [n_p]_0, and for every p ∈𝒫, u ∈𝒰 and λ≥ 0, we define the λ-weighted LPV Voltera-kernels w_i_q,i_r,I^λ: Δ_k+1^∞→ℝ^1 × n_in, the scheduling product p_I: Δ_k+1^T →ℝ and the λ-weighted scheduling-input product φ_i_q,i_r,I^λ: Δ_k+1^T →ℝ^n_in as follows. For k=0 and I=∅, for any τ=(τ_1) ∈Δ_1^t=[0,t], set w^λ_i_q, i_r, ∅() :=C_i_qe^A_0τ_1B_i_re^λ/2τ_1,   p_∅(τ):=1, φ_i_q,i_r,∅^λ(τ):= p_i_q(T)p_i_r(T-τ_1)u(T-τ_1)e^-λ/2τ_1. For k > 0, τ=(τ_k+1,…,τ_1) ∈Δ_k+1^t, t=∞ or t=T respectively and I=(i_1,…,i_k) set w_i_q, i_r, I^λ(τ) := C_i_q e^A_0 (τ_k+1-τ_k) A_i_ke^A_0(τ_k-τ_k-1)⋯ A_i_1e^A_0 τ_1B_i_re^λ/2τ_k+1, p_𝐈(τ):= ∏_j=1^k p_i_j(τ_j+T-τ_k+1), φ^λ_i_q,i_r,I(τ):= p_i_q(T)p_i_r(τ_k+1)_I(τ)u(T-τ_k+1) e^-λ/2τ_k+1. In the definition above, the value k is always equal to the size of the tuple I. The respective domains of these functions each contain tuples of size k+1 (denoted by τ). Intuitively, the λ-weighted LPV Volterra kernels represent the exponentially weighted Volterra kernels of certain bilinear systems, outputs of which determine the output of (<ref>). The λ-weighted scheduling-input products capture the polynomial relationship between the outputs of these bilinear systems and the scheduling and input signals. The weighting was introduced in order to make the series of the L^2 norms of these products square summable. The terms belonging to i_r and i_q are related to the effect of B_i_r and C_i_q on the output of the system (<ref>). The following Lemma captures this intuition in a rigorous way. For every (u,p) ∈𝒰×𝒫, λ≥ 0, the output of Σ at time T admits the following representation: y_Σ(u, p)(T) = ∑_i_q, i_r = 0^n_p∑_k=0^∞∑_I ∈ I_k∫_Δ_k+1^T w^λ_i_q, i_r, I(τ)φ^λ_i_q,i_r,I() dτ. The proof is based on a Volterra series expansion <cit.> and can be found in in Appendix A at The Lemma states that the output of Σ is an infinite sum of convolutions of the inputs with iterated integrals of the scheduling signal. This observation allows us to represent the output of an LPV system as a scalar product in a suitable Hilbert space which turns out to be the key for the proof of the main result. Next, we introduce the following stability assumption. There exists λ≥ n_p such that for any Σ∈ℰ of the form (<ref>) there exists Q ≻ 0 such that A_0^T Q + Q A_0 + ∑_i=1^n_p A_i^T Q A_i + ∑_i=1^n_p C_i^T C_i + C_0^T C_0 ≺ -λ Q. 
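The stability assumption above is a matrix inequality that can be checked numerically for a candidate Q (finding such a Q is a semidefinite feasibility problem, which could be handled with an LMI solver, but that is not shown here). A minimal eigenvalue-based check, with purely illustrative matrices, might look as follows.

```python
import numpy as np

def check_stability_assumption(A, C, Q, lam):
    """Check the matrix inequality
        A_0^T Q + Q A_0 + sum_i A_i^T Q A_i + sum_i C_i^T C_i + C_0^T C_0 < -lam * Q
    for a candidate Q > 0 (strict inequalities tested via eigenvalues).
    A and C are lists [M_0, ..., M_np] of coefficient matrices."""
    M = A[0].T @ Q + Q @ A[0] + C[0].T @ C[0] + lam * Q
    for Ai in A[1:]:
        M = M + Ai.T @ Q @ Ai
    for Ci in C[1:]:
        M = M + Ci.T @ Ci
    q_pos = np.min(np.linalg.eigvalsh(Q)) > 0
    lmi_neg = np.max(np.linalg.eigvalsh(M)) < 0
    return q_pos and lmi_neg

if __name__ == "__main__":
    # Illustrative single-state example with n_p = 1 and lambda = n_p = 1.
    A = [np.array([[-5.0]]), np.array([[0.1]])]
    C = [np.array([[1.0]]), np.array([[0.0]])]
    Q = np.array([[1.0]])
    print(check_stability_assumption(A, C, Q, lam=1.0))   # True for this example
```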
This assumption ensures quadratic exponential stability with Q being a quadratic Lyapunov function for any scheduling signal p. Moreover, as Lemma <ref> shows, it ensures the existence of an extension of the classical H_2 norm defined as Σ_λ,H_2^2 := ∑_i_q,i_r=0^n_p∑_k=0^∞∑_I ∈ I_k∫_Δ_k+1^∞w_i_q, i_r, I^λ(τ)_2^2 dτ. As it is customary for PAC bounds, we make some boundedness assumptions on the signals, also motivated by the previous lemma. Additionally, we make a standard assumption on the loss function <cit.>. We assume that sup_Σ∈ℰΣ_λ,H_2≤ c_ℰ. Additionally, we assume that 𝒰 and 𝒴 are such that for any u ∈𝒰 and y ∈𝒴, u_L^2([0, T],ℝ^n_in)≤ L_u and |y(T)| ≤ c_y. Lastly, we assume that the loss function ℓ is K_ℓ-Lipschitz-continuous, i.e. |ℓ(y_1,y_1')-ℓ(y_2,y_2')| ≤ K_ℓ(|y_1-y_2|+|y_1'-y_2'|) for all y_1,y_2,y_1',y_2' ∈ℝ, and ℓ(y,y)=0 for all y ∈ℝ. Let us denote by L^2([0,T],ℝ^n_in) the space of all measurable functions f:[0,T] →ℝ^n_in such that f^2_L^2([0,T],ℝ^n_in):=∫_0^T f(t)_2^2 dt is finite. Clearly, all u ∈𝒰 belong to this space. That is, we assume that the weighted H_2 norm of every element of the model class ℰ is bounded by c_ℰ, and that the L^2 norm of every input u, restricted to the interval [0, T], is bounded by L_u. The assumption on |y(T)| means the true labels are bounded. Note, that the square loss satisfies the assumption, if restricted to a bounded subset of ℝ^2. Now, we may state our crucial lemma that shows that the extended H_2 norm exists and the output of the system is bounded given that the previously stated assumptions hold. If Assumptions <ref> and <ref> hold with λ≥ n_p and Q ≻ 0, then for any Σ∈ℰ Σ_λ,H_2^2 ≤ ∑_i_r=0^n_ptrace (B_i_r^T Q B_i_r) < +∞. Additionally, for any u ∈𝒰, p ∈𝒫, we have |y_Σ(u,p)(T) | ≤ (n_p+1)Σ_λ,H_2u_L^2([0,T],ℝ^n_in). The proof can be found in Appendix B at Lemma <ref> states that any LPV system Σ∈ℰ has a finite λ-weighted H_2 norm Σ_λ,H_2. Note, that Σ_0,H_2 is just the classical H_2 norm <cit.> for LPV systems, and for bilinear systems Σ_0,H_2 coincides with the H_2 norm defined in <cit.>. Intuitively, the H_2 norm is an upper bound on the peak output for unit energy inputs <cit.>. The quantity Σ_λ,H_2 plays the same role, as shown in Lemma <ref>. By exploiting the lemma we state our main result, a PAC bound for stable LPV systems under the previous connditions in the next section. § MAIN RESULT We are ready to state our main contribution. Let c := 2 K_ℓmax{ L_u(n_p + 1)c_ℰ, c_y} and R(δ) := c (2 + 4 √(2log(4/δ)) ). Under Assumptions <ref> and <ref> , for any δ∈ (0,1), we have ℙ_𝐒∼𝒟^N(∀Σ∈ℰ: ℒ(Σ) - ℒ^S_N(Σ) ≤R(δ)/√(N)) ≥ 1 - δ. The probabilistic inequalities of the form above are referred to as PAC bounds in the literature <cit.>. Using PAC bounds for learning. Any learning algorithm maps a dataset S to a model Σ̂=Σ̂(S). As a PAC bound holds uniformly on all models, with probability at least 1-δ over S, ℒ(Σ̂) ≤ℒ_N^S(Σ̂) + R(δ)/√(N). Since the empirical error ℒ_N^S(Σ̂) can be computed from the data, we get an explicit high-probability bound on the true error ℒ(Σ̂) of the LPV system returned by any learning algorithm. Prediction error minimization vs. parameter estimation. Minimizing the prediction error is not the same as estimating a true parameter. However, for linear systems, the true error can be used to bound the parameter estimation error under suitable identifiability assumptions <cit.>. For LPV systems, this problem requires further research, but we expect similar results. Discussion on the bound: dependence on N and T. 
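A small sketch of the bounds in the lemma above: the trace expression sum_i trace(B_i^T Q B_i) upper-bounds the squared lambda-weighted H_2 norm, and the output satisfies |y_Sigma(u,p)(T)| <= (n_p+1) * ||Sigma||_{lambda,H_2} * ||u||_{L^2}. The matrices and norms used below are illustrative placeholders; in practice Q should be a certificate from the stability assumption.

```python
import numpy as np

def h2_norm_bound(B, Q):
    """Upper bound sqrt(sum_i trace(B_i^T Q B_i)) on the lambda-weighted H_2 norm,
    valid when Q certifies the stability assumption."""
    return float(np.sqrt(sum(np.trace(Bi.T @ Q @ Bi) for Bi in B)))

def output_bound(B, Q, n_p, u_l2_norm):
    """Bound |y_Sigma(u,p)(T)| <= (n_p + 1) * ||Sigma||_{lambda,H2} * ||u||_{L2}."""
    return (n_p + 1) * h2_norm_bound(B, Q) * u_l2_norm

if __name__ == "__main__":
    # Placeholders for B_0, B_1 and a stability certificate Q.
    B = [np.array([[1.0], [0.0]]), np.zeros((2, 1))]
    Q = np.eye(2)
    print("H2 bound:", h2_norm_bound(B, Q))
    print("|y(T)| bound for ||u||_L2 = 2, n_p = 1:", output_bound(B, Q, 1, 2.0))
```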
The bound in Theorem <ref> tends to zero as N grows to infinity and is also independent of the integration time T, a consequence of assuming stability of the models. The latter is a significant improvement compared to prior work <cit.>. We conjecture that some form of stability is also necessary for time-independent bounds, as intuitively in case of unstable systems small modelling errors may lead to a significant increase of the prediction error in the long run. Furthermore, the bound grows linearly with the maximal H_2 norm of the elements of ℰ, with the maximal possible value of the true outputs, and with the maximal energy of the inputs. Multiple output. Sofar we constrained our analysis to single output. At the expense of higher bound we may extend our result for multiple outputs, see Appendix D for details. Let c := 2 K_ℓmax{ L_u(n_p + 1)c_ℰ, c_y} and R(δ) := c (2 + 4 √(2log(4/δ)) ). Under Assumptions <ref> and <ref> , for any δ∈ (0,1), we have ℙ_𝐒∼𝒟^N(∀Σ∈ℰ: ℒ(Σ) - ℒ^S_N(Σ) ≤R(δ)/√(N)) ≥ 1 - δ. The proof can be found in Appendix C at In the next section we prove our main theorem. § PROOF OF THEOREM <REF> The main component of the proof is the estimation of the Rademacher complexity of the class of LPV systems. <cit.> The Rademacher complexity of a bounded set 𝒜⊂ℝ^m is defined as R(𝒜) = 𝔼_σ[sup_a ∈𝒜1/m∑_i = 1^mσ_i a_i ], where the random variables σ_i are i.i.d such that ℙ[σ_i = 1] = ℙ[σ = -1] = 0.5. The Rademacher complexity of a set of functions ℱ over a set of samples S = {s_1… s_m} is defined as R_S(ℱ) = R({(f(s_1),…,f(s_m)) | f ∈ℱ}). Intuitively, Rademacher complexity measures the richness of a set of functions, see e.g. chapter 26 in <cit.>, and can be used for deriving PAC bounds <cit.> for general models. Below we restate this result for LPV systems. Let L_0(T) denote the set of functions of the from (u,p,y) ↦ℓ((u,p)(T),y(T)) for Σ∈ℰ. Let B(T) be such that the functions from L_0(T) all take values from the interval [0,B(T)]. Then for any δ∈ (0, 1) we have ℙ_S ∼𝒟^N(∀Σ∈ℰ: ℒ(Σ) - ℒ^S_N(Σ) ≤ R_S^T, N, δ) ≥ 1 - δ where R_S^T, N, δ = 2R_S(L_0(T))+ 4B(T)√(2 log (4/δ)/N). The proof of Theorem <ref> follows from Theorem <ref>, by first bounding the Rademacher complexity of R_S(L_0(T)) and then bounding the constant B(T). Step 1: Showing R_S(L_0(T)) ≤c/√(N). Consider the class ℱ of output response functions (u,p) ↦(u,p)(T) for Σ∈ℰ and the corresponding Rademacher complexity R_𝐒(ℱ). By <cit.> and Assumption <ref>, R_S(L_0(T)) ≤ K_ℓ R_S(ℱ), hence it is enough to bound R_S(ℱ). For the latter, we need the following Lemma. There exists a Hilbert space ℋ such that for every Σ∈ℰ there exists ^T, Σ∈ℋ and for every (u,p) ∈𝒰×𝒫 there exists φ^T,u,p∈ℋ, such that y_Σ(u, p)(T) = ⟨^T, Σ,φ^T, ,⟩_ℋ, and φ^T, u, p_ℋ≤ L_u (n_p + 1) and ^T, Σ_ℋ≤Σ_λ, H_2. Proof. Let 𝒱 be the vector space consisting of sequences of the form f={f_i_q,i_r,I| I ∈ I_k; i_q, i_r ∈ [n_p]_0}_k=0^∞ such that f_i_q,i_r,I:Δ_k+1^T↦ℝ^1 × n_in is measurable. For any f,g ∈𝒱 let us define the series ⟨ f, g ⟩ = ∑_i_q,i_r=0^n_p∑_k=0^∞∑_I ∈ I_k∫_Δ_k+1^T f_i_q,i_r,I(τ) g_i_q,i_r,I(τ)^T dτ. and for any f ∈𝒱, let us denote by f^2=⟨ f,f ⟩. Let ℋ consists of those element f ∈𝒱 for which the series f^2 is convergent. Then for any f,g ∈ℋ, the series ⟨ f,g ⟩ is absolutely convergent. Let us denote its limit by ⟨ f,g ⟩_ℋ. Then ℋ is a Hilbert-space with the scalar product ⟨·, ·⟩_ℋ, and we denote by ·_ℋ the corresponding norm. Let λ≥ n_p be such that Assumption <ref> is satisfied. 
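Two quick numerical companions to this part, under illustrative constants: evaluating R(delta)/sqrt(N) from the theorem, and a Monte Carlo estimate of the Rademacher complexity of a finite set of vectors following the definition recalled in the proof. Neither computation is part of the paper itself.

```python
import numpy as np

def pac_bound(K_l, L_u, n_p, c_E, c_y, N, delta):
    """Evaluate R(delta)/sqrt(N) with
    c = 2*K_l*max(L_u*(n_p+1)*c_E, c_y) and R(delta) = c*(2 + 4*sqrt(2*log(4/delta)))."""
    c = 2.0 * K_l * max(L_u * (n_p + 1) * c_E, c_y)
    return c * (2.0 + 4.0 * np.sqrt(2.0 * np.log(4.0 / delta))) / np.sqrt(N)

def rademacher_mc(values, n_draws=20_000, seed=0):
    """Monte Carlo estimate of the Rademacher complexity of a finite set
    A = {a^(1), ..., a^(K)} of vectors in R^m (given as rows of `values`)."""
    rng = np.random.default_rng(seed)
    m = values.shape[1]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, m))
    # For each sign pattern, take the sup over the finite set A, then average.
    return float(np.mean(np.max(sigma @ values.T / m, axis=1)))

if __name__ == "__main__":
    print("PAC bound:", pac_bound(K_l=1.0, L_u=1.0, n_p=1, c_E=0.5, c_y=1.0,
                                  N=10_000, delta=0.05))
    A = np.array([[1.0, 1.0, 1.0], [1.0, -1.0, 0.0]])   # two vectors in R^3
    print("Rademacher complexity estimate:", rademacher_mc(A))
```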
Let ^T,Σ = {(_i_q,i_r,I^λ)|_Δ_k+1^T| I ∈ I_k; i_q, i_r ∈ [n_p]_0}_k=0^∞ φ^T,u, p={(φ^λ_i_q, i_r, I)^T | I ∈ I_k; i_q, i_r ∈ [n_p]_0}_k=0^∞. We show that ^T, Σ∈ℋ and φ^T, , ∈ℋ by proving that ^T, Σ_ℋ^2 and φ^T, , _ℋ^2 are finite and bounded as claimed in the Lemma. By the definition of w^T,Σ it is clear that ^T, Σ_ℋ^2 ≤Σ_λ, H_2^2. As to φ^T, u, p, due to taking values in [-1,1]^n_p, φ^λ_i_q, i_r, I(τ)_2^2 ≤u(T-τ_k+1)_2^2 e^-λτ_k+1, for any I ∈ I_k, τ=(τ_k+1,…,τ_1) ∈Δ_k+1^T, k ≥ 0. Hence, by setting ∫_Δ_k^t d=1 for k=0 and any t > 0, we obtain φ^T, u, p_ℋ^2 = ∑_i_q,i_r = 0^n_p∑_k = 0^∞∑_I ∈ I_k∫_Δ_k+1^Tφ^λ_i_q, i_r, I(τ)_2^2 d ≤ (n_p + 1)^2 ∫_0^Tu(T-t)_2^2 ^-λ t(∑_k = 0^∞∑_I ∈ I_k∫_Δ_k^t d τ) dt ≤ (n_p + 1)^2 ∫_0^T u(T-t)_2^2 e^t(n_p-λ) dt ≤ (n_p + 1)^2 u_L^2([0,T], ℝ^n_in)^2 < (n_p+1)^2 L^2_u < +∞. The last inequality follows from the well-known upper bound ∫_Δ_k^t d τ≤t^k/k ! for iterated integrals <cit.>, leading to ∑_k=0^∞∑_I ∈ I_k∫_Δ_k^t d τ ≤∑_k=0^∞n_p^k t^k/k !=e^n_p t, as well as the choice of λ≥ n_p and Assumption <ref>. Finally, by Lemma <ref>, y_Σ(u, p)(T) = ⟨^T, Σ, φ^T, u, p⟩_ℋ. Using Lemma <ref> and <cit.> we have R_S(ℱ) ≤c_ℰ L_u(n_p + 1)/√(N) yielding to R_S(L_0(T)) ≤ K_ℓR_S(ℱ) ≤c/√(N). Step 2: Bounding B(T). By Assumption <ref>, we have |ℓ((u, p)(T), y(T))| ≤ 2K_ℓmax{|(u, p)(T)|,|y(T)|} ≤ 2K_ℓmax{L_u c_ℰ, c_y}≤ c The last inequality follows from applying Lemma <ref> and Assumption <ref> along with n_p≥ 1. Finally, the proof of Theorem <ref> follows from the bounds obtained in Step 1. and Step 2. together with Theorem <ref>. § EXPERIMENTS We considered a parametrized family ℰ of LPV systems of the form (<ref>), where n_p=1 and A(p(t))=[ -1/τ a_12p(t); 1 -1/τ ], B(p(t))=[ b_1; 0 ], C(p(t))=[ 0 1 ], where τ∈ [], a_12∈ [], § CONCLUSION In this paper we examined LPV systems within the confines of statistical learning theory and derived a PAC bound on the generalization error under stability conditions. The central element of the proof is the application of Volterra series expansion in order to upper bound the Rademacher complexity of LPV systems. Further research is directed towards extending these methods to more general models, possibly exploiting the powerful approximation properties of LPV systems. plain -12cm § APPENDIX A: PROOF OF LEMMA <REF> Consider the following bilinear system for all i_q, i_r ∈ [n_p] for a fixed t ∈ [0, T]. ṡ(τ) = (A_0 + λ/2I + ∑_i=1^n_pp_i(τ+T-t) A_i) s(τ), y^s,t(τ) = C_i_q s(τ),   s(0) = B_i_r u(T-t)e^-λ/2t. From the Volterra series representation <cit.> of bilinear systems we have y^ s,t_i_q, i_r(τ) = [w_i_q,i_r,∅(τ) + ∑_k = 1^∞∑_𝐈∈ I_k∫_Δ_k^τ_i_q,i_r,𝐈^λ,τ() p_𝐈() d] u(T-t)e^- λ/2 t where p_I is as in Section <ref> and w_i_q,i_r,I^λ,τ()= w_i_q,i_r,I^λ((τ,)), for all I ∈ I_k and ∈Δ_k^τ, k >0. By <cit.> (u, p)(T)=∫_0^T C(p(T))Φ(T,T-t)B((T-t))u(T-t)dt where Φ(s,s_0) is the fundamental matrix of ż(s)=A(p(s))z(s). Since the fundamental matrix Φ_λ(s,s_0) of ż(s)=(A(p(s))+λ/2 I)z(s) satisfies Φ_λ(s,s_0)=e^λ/2(s-s_0)Φ(s,s_0), we have s(τ)=Φ_λ(T-t+τ,T-t)B_i_r u(T-t)e^-λ/2t = e^λ/2 (t-τ)Φ(T-t+τ,T-t)B_i_r u(T-t). Then from the definition of C(p(T)) and B(p(T-t)), (u, p)(T) = ∫_0^T∑_i_q, i_r = 0^n_p p_i_q(T)p_i_r(T-t) y_i_q, i_r^s,t(t) dt. Applying the Volterra expansion above together with { (t,) | t ∈ [0,T], ∈Δ_k^t}=Δ^T_k+1 yields the result. § APPENDIX B: PROOF OF LEMMA <REF> If Assumption <ref> holds, then A_0^T Q + Q A_0 ≺ -λ Q, and hence A_0+λ I is Hurwitz. Let S= A_0^T Q + Q A_0 + ∑_i=1^n_p A_i^T Q A_i + ∑_i=1^n_p C_i^T C_i + C_0^T C_0+λ Q. 
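The estimate in Step 1 uses the iterated-integral bound over the simplex, namely that the volume of Delta_k^t is at most t^k/k!. A quick Monte Carlo sanity check of this quantity (not part of the proof) is sketched below.

```python
import numpy as np
from math import factorial

def simplex_volume_mc(k, t, n_samples=200_000, seed=0):
    """Monte Carlo estimate of vol(Delta_k^t), the volume of
    {t >= tau_k >= ... >= tau_1 >= 0}, to compare with t^k / k!."""
    rng = np.random.default_rng(seed)
    taus = rng.uniform(0.0, t, size=(n_samples, k))
    ordered = np.all(np.diff(taus, axis=1) >= 0.0, axis=1)  # tau_1 <= ... <= tau_k
    return ordered.mean() * t**k

if __name__ == "__main__":
    k, t = 3, 2.0
    est = simplex_volume_mc(k, t)
    print(f"MC estimate: {est:.4f}   bound t^k/k!: {t**k / factorial(k):.4f}")
```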
Then S ≺ 0 and hence S=-VV^T for some V. Define C̃=[ C_0^T … C_n_p^T V^T ]^T, Ã=(A+0.5λ I) and N_i=A_i, i ∈ [n_p]. Then Ã^T Q + Q Ã+∑_i=1^n_p N_i^T Q N_i + C̃^TC̃=0 and hence, by <cit.>, for any choice of the matrix G=[ G_1 … G_n_p ], the bilinear system S(G) {[ ż(t) = Ãz(t)+∑_i=1^n_p(N_i z(t) p_i(t) + G_i p_i(t) ); ỹ(t) = C̃z(t),   z(0)=0, ]. has a finite H_2 norm S(G)_H_2 which satisfies S(G)_H_2^2=trace(G^T Q G), and which is defined via Volterra kernels as follows. For each i ∈ [n_p], let g_i,∅^G(t)=C̃ e^Ãt G_i and for every I ∈ I_k, k > 0, let g_i, I^G(t) =C̃ e^Ã t_k+1 N_i_k e^Ã t_k⋯ e^Ãt_2 N_i_1 e^Ã t_1G_i, t=(t_k+1,…,t_1) ∈ℝ^k, t ∈ℝ. Then S(G)_H_2^2 = ∑_i=1^n_p∑_k=0^∞∑_I ∈ I_k∫_[0,+∞)^k+1g_i,I^G(t)_2^2dt. Let G^i,j be the matrix of which the first column is the jth column of B_i and all the other elements are zero, i ∈ [n_p], j ∈ [n_in]. By choosing G=G^i,j, it follows that the jth row of w_i_q, i_r, I^λ(τ) is the (i_q+1)-th row of g^G^i,j_i_r,I(t) where τ∈Δ_k+1^∞ and t=(τ_k+1-τ_k,…,τ_2-τ_1,τ_1), i_q,i_r ∈ [n_p]_0, I ∈ I_k, k≥ 0. Hence, ∑_j=1^n_in g_i_r,I^G^i_r,j(τ)_2^2 ≥∑_i_q=0^n_pw_i_q, i_r, I^λ(τ)_2^2, and by applying a change of variables in the iterated integrals, ∑_i_q,i_r=0^n_p∑_k=0^∞∑_I ∈ I_k∫_Δ_k+1^tw^λ_i_q, i_r, I(τ)^2_2 dτ≤ ∑_i_r=0^n_p∑_j=1^n_inS(G^i_r,j)_H_2^2 = ∑_i_r=0^n_ptrace(B_i_r^T Q B_i_r). In the last step we used the fact that ∑_j=1^n_intrace((G^i_r,j)^T Q G^i_r,j)=trace(B_i_r^T Q B_i_r) Finally, from Lemma <ref> it follows that y_Σ(u,p)(T)= ⟨^T, Σ, φ^T, u, p⟩_ℋ for a suitable Hilbert space. From the definition of ^T,Σ and φ^T, u, p and the proof of Lemma <ref> it follows that ^T,Σ_ℋ≤Σ_λ,H_2, φ^T,u, p_ℋ≤ (n_p+1) u_L^2([0,T],ℝ^n_in) and hence by the Cauchy-Schwartz inequality the result follows. § APPENDIX C: PROOF OF THEOREM <REF> Let use assume that the loss functions satisfies the following condition: ℓ(y,ỹ)=∑_j=1^pℓ_i(y^j,ỹ^j), where y^j,ỹ^j denote the jth component of the vectors y^j, ỹ^j ∈ℝ^p respectively and ℓ_j are Lipschitz functions with Lipschitz constant K_ℓ. For instance, if ℓ(y,ỹ)=y-y^'_1 is the ℓ_1 loss, or ℓ(y,y^')=y-y^'_2^2 is the classical quadratic loss and y,y^' are bounded, then this assumption is satisfied. For all Σ∈ℰ, let Σ_j be the LPV system which arises from Σ by considering only the jth output, and let ℰ_j be the class of LPV systems formed by all Σ_j, Σ∈ℰ_j. For every j=1,…, p, consider the data set 𝐒_j = {(_i, _i, y_i^j) }_1 ≤ i ≤ N, which is obtained from the data set 𝐒 by taking only the jth component of the true outputs {y_i}_i=1^n. Then it follows that ℒ(Σ)=∑_j=1^p ℒ(Σ_j), where ℒ(Σ_j)=𝔼_(, , y) ∼𝒟[ℓ_j( y_Σ_j(,)(T),y^j(T))] and ℒ_N^𝐒(Σ)=∑_j=1^p ℒ_N^𝐒_j(Σ_j). By applying the main theorem of the paper to 𝐒_j and ℰ_j, j=1,…,p, it follows that ℙ_𝐒∼𝒟^N(∀Σ_j ∈ℰ_j: ℒ(Σ_j) - ℒ^𝐒_j_N(Σ_j) ≤R_jδ)/√(N)) ≥ 1 - δ. for all j=1,…, p. Then by using the union bound it follows that ℙ_𝐒∼𝒟^N(∀ j=1,…,p,  ∀Σ_j ∈ℰ_j: ℒ(Σ_j) - ℒ^𝐒_j_N(Σ_j) ≤max_1 ≤ j ≤ p R_j(δ)/√(N)) ≥ 1 - pδ. and by using ℒ(Σ)=∑_j=1^p ℒ(Σ_j) and ℒ_N^𝐒(Σ)=∑_j=1^p ℒ_N^𝐒_j(Σ_j) it then follows that ℙ_𝐒∼𝒟^N(∀Σ∈ℰ: ℒ(Σ) - ℒ^𝐒_N(Σ) ≤max_1 ≤ j ≤ p R_j(δ)/√(N)) ≥ 1 - pδ.
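The last step of the Appendix B argument uses the identity sum_j trace((G^{i_r,j})^T Q G^{i_r,j}) = trace(B_{i_r}^T Q B_{i_r}), where G^{i_r,j} has the j-th column of B_{i_r} as its first column and zeros elsewhere. A numerical spot check with random matrices (purely illustrative) is given below.

```python
import numpy as np

rng = np.random.default_rng(1)
n_x, n_in = 3, 2
B_i = rng.normal(size=(n_x, n_in))
M = rng.normal(size=(n_x, n_x))
Q = M @ M.T + np.eye(n_x)          # a generic positive definite Q

def G_matrix(B_i, j):
    """G^{i,j}: first column is the j-th column of B_i, all other entries zero."""
    G = np.zeros_like(B_i)
    G[:, 0] = B_i[:, j]
    return G

lhs = sum(np.trace(G_matrix(B_i, j).T @ Q @ G_matrix(B_i, j)) for j in range(n_in))
rhs = np.trace(B_i.T @ Q @ B_i)
print(np.isclose(lhs, rhs))   # expected: True
```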
http://arxiv.org/abs/2405.08769v1
20240514170210
Scalar Field Perturbation of Hairy Black Holes in EsGB theory
[ "Young-Hwan Hyun", "Boris Latosh", "Miok Park" ]
hep-th
[ "hep-th", "gr-qc" ]
http://arxiv.org/abs/2405.09036v1
20240515020507
Construction of special Lagrangian submanifolds of the Taub-NUT manifold and the Atiyah-Hitchin manifold
[ "Masato Arai", "Kurando Baba" ]
math-ph
[ "math-ph", "hep-th", "math.MP" ]
May, 2024 YGHP-24-04 0.5cm Construction of special Lagrangian submanifolds of 0.8mm the Taub-NUT manifold and the Atiyah-Hitchin manifold 1.0cm Masato Arai^a[arai(at)sci.kj.yamagata-u.ac.jp] and Kurando Baba^b[kurando.baba(at)rs.tus.ac.jp] 0.5cm ^aFaculty of Science, Yamagata University, Kojirakawa-machi 1-4-12, Yamagata, Yamagata 990-8560, Japan 0.5cm ^bDepartment of Mathematics, Faculty of Science and Technology, Tokyo University of Science, Noda, Chiba, 278-8510, Japan 2cm We construct special Lagrangian submanifolds of the Taub-NUT manifold and the Atiyah-Hitchin manifold by combining the generalized Legendre transform approach and the moment map technique. The generalized Legendre transform approach provides a formulation to construct hyperkähler manifolds and can make their Calabi-Yau structures manifest. In this approach, the Kähler 2-forms and the holomorphic volume forms can be written in terms of holomorphic coordinates, which are convenient to employ the moment map technique. This technique derives the condition that a submanifold in the Calabi-Yau manifold is special Lagrangian. For the Taub-NUT manifold and the Atiyah-Hitchin manifold, by the moment map technique, special Lagrangian submanifolds are obtained as a one-parameter family of the orbits corresponding to Hamiltonian action with respect to their Kähler 2-forms. The resultant special Lagrangian submanifolds have cohomogeneity-one symmetry. To demonstrate that our method is useful, we recover the conditions for the special Lagrangian submanifold of the Taub-NUT manifold which is invariant under the tri-holomorphic U(1) symmetry. As new applications of our method, we construct special Lagrangian submanifolds of the Taub-NUT manifold and the Atiyah-Hitchin manifold which are invariant under the action of a Lie subgroup of SO(3). In these constructions, our conditions for being special Lagrangian are expressed by ordinary differential equations (ODEs) with respect to the one-parameters. We numerically give solution curves for the ODEs which specify the special Lagrangian submanifolds for the above cases. § INTRODUCTION In the context of Riemannian geometry, Harvey and Lawson introduced a special class of minimal submanifolds called calibrated submanifolds <cit.> (see also <cit.>). Calibrated submanifolds have attracted attention and consequently their research have been conducted both in fields of mathematics and physics. In physics, they have been studied in the context of intersecting branes (see e.g. <cit.> for a review and references). One interesting calibrated submanifold is a special Lagrangian submanifold. Special Lagrangian submanifolds are defined in a Calabi-Yau manifold, that is, submanifolds calibrated by the real part of the holomorphic volume form on the Calabi-Yau manifold. Special Lagrangian submanifolds also play an important role in physics. For example, in string theory, it is expected that explicit construction of special Lagrangian submanifolds may yield precise understanding of mirror symmetry <cit.>. In field theory, it is shown that it appears in moduli space of a domain wall solution, which is one type of topological solitons, in the nonlinear sigma model on the cotangent bundle T^* CP^n over complex projective space CP^n <cit.>. Since the notion of special Lagrangian submanifolds was introduced, there have been many developments concerning their construction. Special Lagrangian submanifolds are given by the conditions which are constraints for coordinates of the Calabi-Yau manifolds. 
Then, the condition for a submanifold to be special Lagrangian is expressed as a (nonlinear) partial differential equation. Such an equation is difficult to solve in general. To overcome this difficulty, we employ a method with a Hamiltonian group action, which is called a moment map technique introduced by Joyce <cit.>. By this technique, we can construct special Lagrangian submanifolds which are invariant under the Hamiltonian group action on the ambient Calabi-Yau manifold. Each special Lagrangian submanifold is obtained as a one-parameter family of the orbits for this group action. The resultant orbits in the submanifold are of codimension one, so that such special Lagrangian submanifolds are said to be cohomogeneity one. The advantage of the moment map technique is that the condition to be special Lagrangian is reduced to an ordinary differential equation (ODE) with respect to the one-parameter, which can be solved. With the use of the moment map technique, Joyce constructed special Lagrangian submanifolds in 𝐂^n(≃ T^*𝐑^n). In this case, the resultant submanifold is cohomogeneity-one special Lagrangian submanifolds which are invariant under Lie subgroups of U(n). After Joyce's work, the moment map technique was applied to construction of special Lagrangian submanifolds in the cotangent bundles over compact rank one Riemannian symmetric spaces whose Calabi-Yau structure was given by Stenzel <cit.>. As examples of this class, cohomogeneity-one special Lagrangian submanifolds were studied in the cotangent bundle T^*S^n over the sphere S^n <cit.> and in T^* CP^n over the complex projective space CP^n <cit.>. The construction of special Lagrangian submanifolds in the cotangent bundle over a compact Riemannian symmetric space with general rank by using the moment map technique has been studied in <cit.>. The manifolds such as T^* CP^n explained above are hyperkähler manifolds, which are special types of Calabi-Yau manifolds. Especially, n=1 case of T^* CP^n is called Eguchi-Hanson manifold <cit.>, which is a self- dual solution of the four-dimensional Euclidean Einstein equation. This type of solutions falls into the so-called Gibbons-Hawking type <cit.>, whose metric is given as ds^2=V(r⃗) dr⃗· dr⃗+V^-1(dφ+A⃗· dr⃗)^2 , ∇⃗×A⃗=∇⃗V , where φ, r⃗ are coordinates of a four-dimensional hyperkähler manifold, A⃗ is a potential. The Gibbons-Hawking type includes the Taub-NUT manifold <cit.> and the Atiyah-Hitchin manifold <cit.>. Since the manifold of the Gibbons-Hawking type is also Calabi-Yau manifold, it is possible to construct their special Lagrangian submanifolds. Indeed, by using the moment map technique, special Lagrangian submanifolds are constructed in the case when the Calabi-Yau manifold is T^* CP^n with n=1 (as well as n ≠ 1) <cit.> and the Taub-NUT manifold <cit.>. In <cit.>, special Lagrangian submanifolds invariant under tri-holomorphic U(1) isometry, which acts on the manifold Hamiltonianly, are constructed. In construction, non-holomorphic coordinates are used in order to describe the Calabi-Yau structure on the Taub-NUT manifold and the moment map corresponding to the tri-holomorphic U(1)-isometry. By combining this description and the moment map technique, the condition <cit.> for the U(1)-invariant special Lagrangian submanifolds in the Taub-NUT manifold are derived. On the other hand, since the Taub-NUT manifold also has a Lie subgroup of the non-tri-holomorphic SO(3) isometry, SO(2), which acts on the manifold Hamiltonianly. 
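Since div(curl A) = 0, the Gibbons-Hawking equation ∇×A = ∇V can only be solved when V is harmonic away from its singularities. As a quick check (not from the paper), the sympy snippet below verifies this for a Taub-NUT-type potential V = 1/h + 2m/r, the form that appears later in the Taub-NUT subsection.

```python
import sympy as sp

x, y, z, m, h = sp.symbols('x y z m h', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Taub-NUT-type potential of the Gibbons-Hawking ansatz (cf. V = 1/h + 2m/r below).
V = 1/h + 2*m/r

# The integrability condition for curl(A) = grad(V) is that V be harmonic,
# because div(curl(A)) = 0. Check the Laplacian away from the origin:
laplacian_V = sp.diff(V, x, 2) + sp.diff(V, y, 2) + sp.diff(V, z, 2)
print(sp.simplify(laplacian_V))   # expected: 0
```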
Therefore, it is possible to construct special Lagrangian submanifolds invariant under that isometry. The Atiyah-Hitchin manifold also has the same isometry, so that special Lagrangian submanifolds invariant under the SO(2) isometry can be constructed. However, it appears to be difficult to apply the method in <cit.> to construct the SO(2)-isometry invariant special Lagrangian submanifolds in the Taub-NUT manifold as well as in the Atiyah-Hitchin manifold. In order to construct the special Lagrangian submanifolds invariant under the SO(2)-isometry by using the moment map technique, a most useful way is to write the geometrical quantities such as Kähler potential in terms of holomorphic coordinates. For this purpose, one possible approach is to use the generalized Legendre transform approach, which is known as an alternative formulation to construct hyperkähler manifolds <cit.>. An advantage of this approach is that the complex structure is manifest, meaning that geometrical quantities such as the Kähler potential of the hyperkähler manifold are expressed by holomorphic coordinates. In this approach, a Kähler potential is constructed from the contour integration of one function with holomorphic coordinates. This integration is called the F-function (see (<ref>)). The F-functions of the Eguchi-Hanson manifold and the Taub-NUT manifold are given in <cit.>. The first study of the Atiyah-Hitchin manifold by using the generalized Legendre transform approach has been done in <cit.>. Later, by the same approach, the Atiyah-Hitchin manifold has been restudied in <cit.>. However, there is a discrepancy between <cit.> and <cit.>: The contour of the integration in the F-function chosen in <cit.> is different from one of <cit.>. A different contour may give a different form of the metric of the Atiyah-Hitchin manifold. In <cit.>, we have shown that the contour of the integration in the F-function proposed in <cit.> is correct. We have proven that the contour in <cit.> gives a real Kähler potential which is consistent with its definition while the contour in <cit.> gives a complex Kähler potential. In <cit.>, we have calculated the Kähler potential and the metric in terms of holomorphic coordinates with the contour in <cit.>, which are not derived in <cit.>. To utilize the moment map technique, we also need isometric actions of the SO(2) which are Hamiltonian with respect to the Kähler 2-form obtained from the Kähler potential in <cit.>. They can be given based on the result of the Kähler metric which we obtained in <cit.>. In this paper, we construct special Lagrangian submanifolds of the Taub-NUT manifold and the Atiyah-Hitchin manifold by combining the generalized Legendre transform approach and the moment map technique. In order to demonstrate that our method is useful, first we rederive cohomogeneity-one special Lagrangian submanifolds of the Taub-NUT manifold invariant under the tri-holomorphic U(1)-isometry. We also apply our method for construction of cohomogeneity-one special Lagrangian submanifolds of the Taub-NUT manifold invariant under a Lie subgroup of the SO(3)-isometry, SO(2). Finally we construct a cohomogeneity-one special Lagrangian submanifold of the Atiyah-Hitchin manifold which is invariant under the SO(2)-isometry, by using our method. In these constructions, we explicitly derive the conditions which give special Lagrangian submanifolds. 
As mentioned above, the resultant special Lagrangian submanifold is obtained as a one-parameter family of the orbits and the corresponding condition to be special Lagrangian are expressed by a certain ODE for the parameter. We solve the ODE numerically and plot solution curves which specify the special Lagrangian submanifolds for the above cases. The organization of this paper is as follows. In Section <ref>, we review the generalized Legendre transform approach and provide the F-functions of the Taub-NUT manifold and the Atiyah-Hitchin manifold. We derive corresponding Kähler metrics in terms of holomorphic coordinates, which are necessary for construction of special Lagrangian submanifolds. The calculation of the metric of the Atiyah-Hitchin manifold is based on our paper <cit.>. In Section <ref>, we recall the definition of special Lagrangian submanifolds in a Calabi-Yau manifold and explain the relation to the calibrated submanifolds. We also review the basics of the moment map technique to construct special Lagrangian submanifolds in a Calabi-Yau manifold equipped with a Hamiltonian group action. We apply the method explained in Section <ref> for construction of special Lagrangian submanifolds of the Taub-NUT manifold in Section <ref>. We construct them invariant under the tri-holomorphic U(1) symmetry and a Lie subgroup of SO(3)-isometry, SO(2). In Section <ref>, we construct a special Lagrangian submanifold of the Atiyah-Hitchin manifold invariant under the SO(2)-isometry. Section <ref> is devoted to summary and conclusion. § THE GENERALIZED LEGENDRE TRANSFORM §.§ Brief review of the generalized Legendre transform We briefly explain the generalized Legendre transform construction of hyperkähler manifold <cit.>. We start with a polynomial η^(2j)=z̅ζ^j+v̅ζ^j-1+t̅ζ^j-2+⋯ +x +(-)^j(⋯ + t ζ^j-2-v ζ^j-1+zζ^j) , where z, t, ⋯, x are holomorphic coordinates and ζ is the coordinate of the Riemann sphere ℂP^1=S^2. This polynomial is called an O(2j)-multiplet. Eq. (<ref>) should obey the reality condition η^(2j)(-1/ζ̅)=η^(2j)(ζ) . The Kähler potential for a hyperkähler manifold is constructed from a function with η^(2j): F=∮_C dζζG(η^(2j)) , where G is an arbitrary holomorphic (possibly single or multi-valued) function and the contour C is chosen such that the result of the integration is real. We call (<ref>) the F-function. The F-function satisfies the following set of second order differential equations F_zz̅=-F_vv̅=F_tt̅=⋯ =(-)^jF_xx , F_zv̅=-F_vt̅=⋯ , F_zt=F_vv etc , F_zv=F_vz etc , where F_zz̅≡∂ F^2 ∂ z ∂z̅ , etc. The Kähler potential can be constructed from the F-function by performing a two dimensional Legendre transform with respect to v and v̅ K(u,u̅,z,z̅)=F(z,z̅,v,v̅,t,t̅,⋯,x)-uv-u̅v̅ , together with the extremizing conditions ∂ F ∂ v=u , ∂ F ∂ t=⋯=∂ F ∂ x=0 . These equations tell us that v, v̅, t, t̅,⋯,x are implicit functions of z,z̅,u,u̅. Considering that fact, differentiating (<ref>) and (<ref>) with respect to z gives F_zb+∂ a ∂ zF_ab=0 , where a, b run over v,v̅,t,t̅,⋯,x and summation over repeated indices is assumed. Eq. (<ref>) yields ∂ a ∂ z=-F^abF_bz , where we have used F_zb=F_bz and F^ab is the inverse matrix of F_ab. On the other hand, differentiating (<ref>) with respect to u, we have ∂ a ∂ u=F^av . Eqs. (<ref>) and (<ref>) are used to derive the Kähler metric in terms of derivatives of F with respect to the holomorphic coordinates. Taking the derivatives of (<ref>) with respect to z and u, we obtain ∂ K ∂ z=∂ F ∂ z , ∂ K ∂ u=-v . 
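As a small symbolic check of the reality condition for the simplest multiplet (not part of the original text), the snippet below verifies it for the O(2) case, reading the multiplet as η^(2) = z̄ζ + x - z/ζ, i.e. with the fraction bar that is lost in this plain-text rendering restored.

```python
import sympy as sp

# Real coordinate x, holomorphic coordinate z, and the CP^1 coordinate zeta.
x = sp.symbols('x', real=True)
z, zeta = sp.symbols('z zeta')

zb = sp.conjugate(z)
eta2 = zb*zeta + x - z/zeta          # O(2) multiplet (fraction bar restored)

# Reality condition: the conjugate of eta^(2) evaluated at -1/conj(zeta)
# equals eta^(2)(zeta).
lhs = sp.conjugate(eta2.subs(zeta, -1/sp.conjugate(zeta)))
print(sp.simplify(lhs - eta2))       # expected: 0
```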
Further taking the derivatives of (<ref>) and (<ref>) and using (<ref>) and (<ref>), we have the Kähler metric as K_zz̅=F_zz̅-F_zaF^abF_bz̅ , K_zu̅=F_zaF^av̅ , K_uz̅=F^vaF_az̅ , K_uu̅=-F^vv̅ . §.§ Calabi-Yau structure in the generalized Legendre transform We show that the Kähler metric (<ref>)-(<ref>) for the four-dimensional case, which is our main focus in this paper, equips the Calabi-Yau structure. First we recall the definition of an almost Calabi-Yau manifold. It is a quadruple (M, J, ω, Ω) such that (M, J, ω) is a Kähler manifold M of complex dimension n(≥ 2) with a complex structure J and a kähler 2-form ω and a non-vanishing holomorphic (n,0)-form Ω on M. If ω and Ω satisfies the relation ω^n n!=(-1)^n(n-1)/2(i 2)^n Ω∧Ω̅ , the (M, J, ω, Ω) is called a Calabi-Yau manifold. In the following, we consider n=2 case. In this case, ω and Ω are written in terms of holomorphic coordinates (u,z) as ω =i2{ K_uu̅du∧ du̅ +K_uz̅du∧ dz̅ +K_zu̅dz∧ du̅ +K_zz̅dz∧ dz̅} , and Ω=du∧ dz. We then find the relation between ω and Ω as ω^2 = 12([ K_uu̅ K_uz̅; K_zu̅ K_zz̅ ])du∧ dz ∧ du̅∧ dz̅ =12Ω∧Ω , where we have used the Monge-Ampère equation <cit.>, which holds for any 4-dimensional hyperkähler manifold, ([ K_uu̅ K_uz̅; K_zu̅ K_zz̅ ]) =1 . (<ref>) is the condition for the Calabi-Yau manifold for the four-dimensional case and it is seen that the hyperkähler manifold is a special case for the Calabi-Yau manifold. §.§ Taub-NUT manifold In this subsection, we review the construction of the Taub-NUT manifold <cit.> in the generalized Legendre transform. First we introduce 𝒪(2)-multiplet η^(2) =z̅ζ+x-zζ=-zζ(ζ-ζ_+)(ζ-ζ_-) , where ζ_±=x± r2z , r^2= x^2+4|z|^2 . By using this multiplet, the F-function of the Taub-NUT manifold is given by F(x,z,z̅)=-12π i h∮_Γ_0dζζ(η^(2))^2 -2m2π i∮_Γdζζη^(2)lnη^(2) , where h and m are constants, the latter of which is called a NUT charge. The Γ_0 is a contour encircling the origin in the ζ-plane counterclockwise. The contour Γ is taken as in Fig. <ref>. We can find slightly different forms of F-functions for the Taub-NUT manifold in <cit.>. The difference of the F-functions between (<ref>) and them comes from the expression of the 𝒪(2)-multiplet (<ref>). In what follows, we will show that the Kähler metric obtained from our F-function (<ref>) gives a well-known form of the Taub-NUT metric. The Kähler potential is obtained by the Legendre transform (<ref>). Explicitly it is given by K( u,u̅, z,z̅) =F(x, z,z̅)-(u+u̅)x . The extremizing condition (<ref>) yields ∂ F∂ x=u+u̅ . Here, (u,z) gives a holomorphic coordinate on the Taub-NUT manifold. The Kähler metric can be calculated from (<ref>): K_zz̅ = -F_xx-F_xzF^-1_xxF_xz̅ , K_zu̅ = F_xx^-1F_xz , K_uz̅ = F_xx^-1F_xz̅ , K_uu̅ = -F_xx^-1 . Here, we have used F_zz̅+F_xx=0 in (<ref>). We then obtain the Kähler metric of the Taub-NUT manifold as K_zz̅ = 2V+2m^2x^2r^2|z|^2V^-1 , K_zu̅ = -mxrzV^-1 , K_uz̅ = -mxrz̅V^-1 , K_uu̅ = 12V^-1 , where V=-1 2F_xx=1h+2mr . In the following, we rewrite that the Kähler metric (<ref>)-(<ref>) in terms of the spherical coordinate. First we can show that the metric is written by the Gibbons-Hawking ansatz: ds^2 = K_uu̅dudu̅+K_uz̅dudz̅+K_zu̅dzdu̅+K_zz̅dzdz̅ = -F_xxdzdz̅-(du-F_xzdz)F_xx^-1(du̅-F_xz̅dz̅) ∼ Vdr⃗· dr⃗+V^-1(dp+A⃗· dr⃗)^2 , where we have used r⃗ = (x̃, ỹ, z̃)=(z+z̅,-i(z-z̅),x) , |r⃗|=r , p = Im(u) , A⃗· dr⃗ = i 2(F_xzdz-F_xz̅dz̅) . Here the tilde symbolizes “equal, up to an overall factor 1/2". 
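Reading the displayed Taub-NUT components with the fraction bars (lost in this plain-text rendering) restored, that is K_zz̄ = 2V + (2m²x²/(r²|z|²))V⁻¹, K_zū = -(mx/(rz))V⁻¹, K_uz̄ = -(mx/(rz̄))V⁻¹, K_uū = (1/2)V⁻¹ with V = 1/h + 2m/r and r² = x² + 4|z|², the hyperkähler Monge-Ampère condition det = 1 can be verified symbolically. The sketch below is a consistency check of that reading, not part of the original derivation.

```python
import sympy as sp

# Treat z and its conjugate zb as independent symbols (a standard trick for
# checking such identities); x, m, h taken real and positive for simplicity.
x, m, h, z, zb = sp.symbols('x m h z zb', positive=True)
r = sp.sqrt(x**2 + 4*z*zb)
V = 1/h + 2*m/r

# Kähler metric components of the Taub-NUT manifold as read off from the text,
# with the fraction bars restored:
K_zzb = 2*V + (2*m**2*x**2/(r**2*z*zb)) / V
K_zub = -(m*x/(r*z)) / V
K_uzb = -(m*x/(r*zb)) / V
K_uub = sp.Rational(1, 2) / V

# Hyperkähler Monge-Ampère condition: the determinant of the Kähler metric is 1.
det = sp.simplify(K_uub*K_zzb - K_uzb*K_zub)
print(det)   # expected output: 1
```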
We write (<ref>) by using the spherical coordinate (r, θ, ϕ, ψ): x̃ = rsinθcosϕ , ỹ = rsinθsinϕ , z̃ = rcosθ=x , ψ = -p 2m . Then, we find ds^2=(1+2m r)(dr^2+r^2dθ^2+r^2sin^2θ dϕ^2)+4m^2(1+2m r)^-1(dψ+cosθ dϕ)^2 . where we have taken h=1. This is a well-known form of the Taub-NUT metric in terms of the spherical coordinate <cit.>. As mentioned in Introduction, the Taub-NUT manifold has the U(1)-isometry (which is tri-holomorphic) and SO(3)-isometry (which is non-tri-holomorphic). It is seen from the metric (<ref>). In Section <ref>, we will construct special Lagrangian submanifolds which are invariant under U(1) and a Lie subgroup of SO(3), SO(2). §.§ Atiyah-Hitchin manifold In this subsection, we review the construction of the Atiyah-Hitchin manifold in the generalized Legendre transform. This part is written based on <cit.>. §.§.§ F-function of the Atiyah-Hitchin manifold The F-function of the Atiyah-Hitchin manifold is written by using 𝒪(4)-multiplet η^(4) =z̅ζ^2+v̅ζ+x-vζ+zζ^2 =ρζ^2(ζ-α)(α̅ζ+1)(1+|α|^2)(ζ-β)(β̅ζ+1)(1+|β|^2) , where α,-1/α̅,β,-1/β̅ are the roots of η^(4)(ζ)=0. From the reality condition (<ref>), it is seen that x is real. The coordinates z,v,x are described by α,β,ρ (ρ>0) as z =ρα̅β̅(1+|α|^2)(1+|β|^2) , v =-ρ(α̅+β̅-|α|^2β̅-α̅|β|^2)(1+|α|^2)(1+|β|^2) , x =ρ(-α̅β-αβ̅+(1-|α|^2)(1-|β|^2))(1+|α|^2)(1+|β|^2) . The F-function of the Atiyah-Hitchin manifold is given by <cit.> F =F_2+F_1 =-12π i h∮_Γ_0dζζη^(4) +∮_Γdζζ√(η^(4)) , where the contour Γ_0 encircles the origin of the ζ-plane counterclockwise and the contour Γ=Γ_m∪Γ_m' is taken as in Fig. <ref>. Now we evaluate F_1 and F_2 in the F-function. F_2 can be easily calculated by using the Cauchy's integral formula as F_2=-xh . On the other hand, a lengthy calculation is necessary for F_1. In the following, we give a brief explanation of the calculation (in more detail, see <cit.>). We can rewrite F_1 by using the integral ℐ_n(Γ)=∮_Γζ^ndζ2ζ√(η^(4)) , n=0, 1, 2 , as F_1=2xℐ_0(Γ)-2(vℐ_1(Γ)+vℐ_1(Γ))+2(zℐ_2(Γ)+zℐ_2(Γ)) . It is possible to deform the contour Γ_m' in Γ=Γ_m∪Γ_m' to Γ_m as in Fig. <ref>. While deforming Γ_m' to Γ_m, we need to pick up the poles of the integral ℐ_n(Γ) and have ℐ_n(Γ) = 2ℐ_0(Γ_m) (n=0) , 2ℐ_1(Γ_m)+2π i(-12√(z)) (n=1) , 2ℐ_2(Γ_m)+2π i(-12√(z)·v2z) (n=2) . Substituting this into (<ref>), F_1 turns out to be F_1 =4{ xℐ_0(Γ_m)-( vℐ_1(Γ_m)-zℐ_2(Γ_m)-π i4·v√(z) +c.c) } . For the calculation of F_1, we need to evaluate ℐ_n(Γ_m) (n=0,1,2). Those can be evaluated by using the elliptic function. To this end, we consider an elliptic curve on the (ζ,η) plane C: η^2=4ζ^2η^(4)(ζ) . This elliptic curve gives a globally defined holomorphic 1-form ϖ as follows: ϖ=dζη=dζ2ζ√(η^(4)) . ϖ is called an abel-form of C and it is uniquely determined up to a constant. Here, ϖ is chosen such that it coincides with a 1-form to determine the integral ℐ_0(Γ_m). The two periods of ϖ are described by its integral over canonical cycles. Those cycles are Γ_m and Γ_l in Fig. <ref>. The two periods 2ω and 2ω' are given by 2ω=∮_Γ_mϖ , 2ω'=∮_Γ_lϖ . The half periods are written by the complete elliptic integral of the first kind K(k) ω=1√(ρ)K(k) , ω'=i√(ρ)K(k') , where k=|1+α̅β|√((1+|α|^2)(1+|β|^2)) , and k'=√(1-k^2). It is seen that ω∈ℝ and ω'∈ iℝ. In the following, we express as ω_1=ω , ω_2=ω+ω' , ω_3=ω' . We rewrite the curve C to the Weierstrass normal form. To do this, we first make use of the following birational transformation: ν=[ζ,-1/α̅,α,β] =(ζ-α)(1+α̅β)(ζ-β)(1+|α|^2) , μ=η∂ν∂ζ . 
Then, the curve C in (<ref>) is expressed as a Riemann normal form: μ^2=4ρν(ν-1)(ν-k^2) , and the four roots α, -1/β̅, -1/α̅, β correspond to 0, k^2, 1, ∞, respectively. Next, we change the variables (ν, μ) to (X,Y) by the following transformations: ν= Xρ+1+k^23 , μ=Yρ . The curve C is then expressed as the Weierstrass normal form Y^2 =4(X-e_1)(X-e_2)(X-e_3)=4X^3-g_2X-g_3 , where e_1=-ρ3(k^2-2) , e_2=ρ3(2k^2-1) , e_3=-ρ3(k^2+1) . It is verified that those roots satisfy the following relations: e_1+e_2+e_3=0 , e_1e_2+e_2e_3+e_3e_1=-g_24 , e_1e_2e_3=g_34 , and e_1-e_3=ρ , e_2-e_3e_1-e_3=k^2 . From (<ref>), we find g_2=43ρ^2(1-k^2+k^4) , g_3=427ρ^3(k^2-2)(2k^2-1)(k^2+1) . Those yield the discriminant of C as Δ=g_2^3-27g_3^2=16ρ^6k^4k'^4≠ 0. Therefore, the curve C is regular. Next we shall rewrite ℐ_n(Γ_m) to the Weierstrass normal form. We denote by X_ζ the image of ζ by the transformation which combines (<ref>) and (<ref>): (ζ-α)(1+α̅β)(ζ-β)(1+|α|^2)=X-e_3e_1-e_3 . Under this convention, we have X_0 = e_3+ρ·αβ1+α̅β1+|α|^2 , X_∞ = e_3+ρ·1+α̅β1+|α|^2 . Then, (<ref>) is rewritten as ζ=βX-X_0X-X_∞ . By using this transformation, the abel form ϖ and the integral I_n(Γ_m) are expressed as ϖ=dXY=dX√(4X^3-g_2X-g_3) , and ℐ_n(Γ_m) = ∮_Γ_m(βX-X_0X-X_∞)^ndXY , respectively. Here, the contour Γ_m on ζ-plane is mapped to one on X-plane via (<ref>), which we write the same symbol, namely, Γ_m. This winds once around the branch-cut between the roots e_3 and e_2 (resp. the roots e_2 and e_1) on X-plane. In the following, we evaluate (<ref>). Clearly, (<ref>) with n=0 case is the period of the torus: ℐ_0(Γ_m) =∮_Γ_mdXY=2ω_1 . ℐ_n(Γ_m) with n=1,2 case can be evaluated by using the Weierstrass elliptic function. For preparation, we briefly review the Weierstrass ℘-function. Let Λ denote the orthogonal lattice in ℂ defined by Λ=ℤ· 2ω⊕ℤ· 2ω^'. The Weierstrass ℘-function ℘(u)=℘(u, Λ) is defined by ℘(u)=1u^2+∑_λ∈Λ-{0}{1(u-λ)^2-1λ^2} , u∈ℂ . This function is even and has the double periodicity, that is, ℘(u+2ω)=℘(u) , ℘(u+2ω')=℘(u) , u∈ℂ . ℘(u) also satisfies the following differential equation: (℘'(u))^2=4℘(u)^3-g_2℘(u)-g_3 . We denote by C^* the projectivization of C, i.e., C^* ={[x_0,x_1,x_2]∈ℂP^2| x_0x_2^2=4x_1^3-g_2x_0^2x_1-g_3x_0^3} . Thanks to (<ref>), the ℘-function induces a function on the torus ℂ/Λ, which we write the same symbol ℘(u). Then, we obtain a map ψ:ℂ/Λ→ℂP^2 defined by ψ(u)=[1, ℘(u),℘'(u)]=[1, X,Y] , u∈ℂ/Λ . ψ gives an isomorphism between ℂ/Λ and C^*. We call this map the abel map of C^*. Through this map, we express u_ζ∈ℂ/Λ as the element corresponding to X_ζ (ζ∈ℂ∪{∞}): X_ζ=℘(u_ζ) . Furthermore, we set Y_ζ=℘'(u_ζ). It is convenient to divide u_ζ into the real part and the imaginary part with respect to the antiholomorphic involution ζ↦ -1/ζ̅ on ℂ∪{∞}, that is, u_ζ^± =u_ζ± u_-1/ζ̅ , so that we have u_∞^±=u_∞± u_0 . We write (x_±,y_±) as the (X,Y)-coordinates of the point corresponding to u_∞^± via the abel map ψ, (<ref>). Then, we can prove that the following relations x_±= x± 6|z|3 , and y_+=iv_+(x_+-x_-) , y_+=v_-(x_--x_+) , hold, where v_+=Imv√(z) , v_-=Rev√(z) . Now we are ready to evaluate ℐ_1(Γ_m). By using (<ref>), this is calculated as ℐ_1(Γ_m) =β{∮_Γ_mdXY+X_∞-X_0Y_∞∮_Γ_mY_∞X-X_∞dXY} =2βω+βX_∞-X_0Y_∞∮_Γ_mY_∞X-X_∞dXY . To evaluate the second term in (<ref>), we consider the following integral π(X_ζ) =-∮_Γ_mY_ζX-X_ζdXY=-2∫_e_3^e_2Y_ζX-X_ζdXY . 
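A numerical companion to the Weierstrass data above, with illustrative values of k and ρ: it checks e_1+e_2+e_3 = 0, e_1-e_3 = ρ, (e_2-e_3)/(e_1-e_3) = k², the discriminant Δ = 16ρ⁶k⁴k'⁴, and the half-period identity ω = K(k)/√ρ. Note that scipy's ellipk takes the parameter m = k² rather than the modulus k.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

# Illustrative numerical values for the modulus k and the scale rho.
k, rho = 0.6, 2.0

# Branch points of the Weierstrass normal form, as given in the text.
e1 = -rho/3 * (k**2 - 2)
e2 =  rho/3 * (2*k**2 - 1)
e3 = -rho/3 * (k**2 + 1)

# Algebraic checks: e1+e2+e3 = 0, e1-e3 = rho, (e2-e3)/(e1-e3) = k^2,
# and the discriminant g2^3 - 27 g3^2 = 16 rho^6 k^4 (1-k^2)^2.
g2 = 4/3 * rho**2 * (1 - k**2 + k**4)
g3 = 4/27 * rho**3 * (k**2 - 2) * (2*k**2 - 1) * (k**2 + 1)
print(np.isclose(e1 + e2 + e3, 0.0), np.isclose(e1 - e3, rho),
      np.isclose((e2 - e3) / (e1 - e3), k**2),
      np.isclose(g2**3 - 27*g3**2, 16 * rho**6 * k**4 * (1 - k**2)**2))

# Half period: 2*omega is the contour integral around the cut [e3, e2], so
# omega = int_{e3}^{e2} dX / sqrt(4(X-e1)(X-e2)(X-e3)). After the substitution
# X = e3 + (e2-e3)*sin(theta)^2 the integrand is smooth:
omega_quad, _ = quad(lambda th: 1.0 / np.sqrt((e1 - e3) * (1 - k**2 * np.sin(th)**2)),
                     0.0, np.pi/2)
print(np.isclose(omega_quad, ellipk(k**2) / np.sqrt(rho)))   # omega = K(k)/sqrt(rho)
```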
By using the abel map (<ref>), this integral can be expressed by ℘-function as π(X_ζ) =-2∫_ω_3^ω_2℘'(u_ζ)℘(u)-℘(u_ζ)du , where we have used e_j=℘(ω_j). With the use of the ζ-function, ζ(u)=ζ(u;Λ) and σ-function, σ(u)=σ(u;Λ), the integrand in (<ref>) can be written by ℘'(u_ζ)℘(u)-℘(u_ζ) =-ζ(u+u_ζ)+ζ(u-u_ζ)+2ζ(u_ζ) =-ddulogσ(u+u_ζ)+dduσ(u-u_ζ)+2ζ(u_ζ) , so, we find π(X_ζ) =2logσ(ω_2+u_ζ)σ(ω_2-u_ζ) -2logσ(ω_3+u_ζ)σ(ω_3-u_ζ)-4ω_1ζ(u_ζ) . Here, we use the monodromy property of σ-function for j=2,3 σ(ω_j+u_ζ) =e^2η_ju_ζσ(ω_j-u_ζ) , so that the following relation holds: 2logσ(ω_j+u_ζ)σ(ω_j-u_ζ) = 4η_ju_ζ (mod 2π iℤ) , where η_j is the quasi-half period of the curve C. For instance, η_1 is defined by (in more detail on η_j, see Appendix A in <cit.>) η_1=-∫_e_3^e_2XdXY =-12∮_Γ_mXdXY . Substituting (<ref>) into (<ref>), we have π(X_ζ)=4( [ u_ζ ω_1; ζ(u_ζ) ζ(ω_1) ]) (mod 2π iℤ) . We need to evaluate this with ζ→∞, π(X_∞). From (<ref>), we have u_∞=12(u_∞^++u_∞^-) , and therefore, we find ( [ u_∞ ω_1; ζ(u_∞) ζ(ω_1) ]) =12{( [ u_∞^+ ω_1; ζ(u_∞^+) ζ(ω_1) ])+( [ u_∞^- ω_1; ζ(u_∞^-1) ζ(ω_1) ])+ω_1Y_∞X_∞-X_0} . Using this equation, π(X_∞) can be calculated as π(X_∞) =12{π(x_+)+π(x_-) }+2ω_1Y_∞X_∞-X_0 (mod π iℤ) . There exists a∈ℤ such that π(X_∞) =12{π(x_+)+π(x_-) }+2ω_1Y_∞X_∞-X_0+aπ i . We substitute (<ref>) into (<ref>) and finally find ℐ_1(Γ_m) =2βω_1-βX_∞-X_0Y_∞[ 12{π(x_+)+π(x_-) }+2ω_1Y_∞X_∞-X_0+aπ i ] =14√(z){π(x_+)+π(x_-) +2aπ i} , where we have used the following relation in the second equality: Y_∞X_0-X_∞=2β√(z) . Next we evaluate (<ref>) for n=2 case. The integrand of ℐ_2(Γ_m) can be transformed as (βX-X_0X-X_∞)^2 =β^2 + 2β^2X_∞-X_0Y_∞·Y_∞X-X_∞ +β^2(X_∞-X_0Y_∞)^2( Y_∞X-X_∞)^2 . The integral of the first term can be easily calculated from (<ref>). The integral of the second term is evaluated from (<ref>) and (<ref>) as ∮_Γ_m 2β^2X_∞-X_0Y_∞·Y_∞X-X_∞dXY =β2√(z){π(x_+)+π(x_-)+2aπ i }-4β^2ω_1 . To calculated the integral of the third term, we observe ( Y_∞X-X_∞)^2 =2(X-X_∞)-12X_∞^2-g_22Y_∞·Y_∞X-X_∞ -YdX(YX-X_∞) . The integral of the first term in (<ref>) is found to be ∮_Γ_m2(X-X_∞)dXY=-4η_1-4ω_1(x3-β v+2β^2z) , where we have used (<ref>) and X_∞= x3-β v+2β^2z . Using the following relation 12X_∞^2-g_24(X_0-X_∞) =β v-4β^2z , we can calculate the integral of the second term in (<ref>): ∮_Γ_m12X_∞^2-g_22Y_∞·Y_∞X-X_∞dXY =-12(v√(z)-4β√(z)) {π(x_+)+π(x_-)+2aπ i }+4ω_1(β v-4β^2z) . The integral of the third term in (<ref>) can be evaluated as ∮_Γ_mYddx(YX-X_∞)dXY =∮_Γ_md(YX-X_∞)=0 . From (<ref>), (<ref>) and (<ref>), we obtain the integral of the third term in (<ref>) ∮_Γ_mβ^2(X_∞-X_0Y_∞)^2( Y_∞X-X_∞)^2dXY =12z{ -4η_1-4ω_1( x3-2β^2z )+12{π(x_+)+π(x_-)} +2aπ i } . Thus, by using (<ref>) and (<ref>), we conclude ℐ_2(Γ_m) =-1z[ η_1+ω_1·x3-18v√(z){π(x_+)+π(x_-)+2aπ i }] . We return to the calculation of F_1. Using (<ref>) and (<ref>), we have vℐ_1(Γ_m)-zℐ_2(Γ_m)-π i4v√(z) =η_1+ω_1·x 3+v8√(z){π(x_+)+π(x_-)}+π i4(a-1)·v√(z) . Thanks to (<ref>), we find that the integrands of π(x_+) =-2∫_e_3^e_2y_+X-x_+dXY, π(x_-) =-2∫_e_3^e_2y_-X-x_-dXY , are pure imaginary and real, respectively, so that we have π(x_+)∈ iℝ, π(x_-)∈ℝ . Using this fact, we find vℐ_1(Γ_m)-zℐ_2(Γ_m)-π i4v√(z) +c.c =2η_1+ω_1·2x3+14{iv_+π(x_+)+v_-π(x_-)}-π i2(a-1)v_+ . Therefore, substituting (<ref>) and (<ref>) into (<ref>), we obtain F_1=-8η_1+8(x_++x_-)ω_1 -{ iv_+π(x_+)+v_-π(x_-) }+2π(a-1)v_+ . Here, we have used x=6(x_++x_-). Thus, the final result for F=F_2+F_1 is F=-8η_1+(8ω_1-32h) (x_++x_-) -{ iv_+π(x_+)+v_-π(x_-) }+2π(a-1)v_+ . 
§.§.§ Derivation of the Atiyah-Hitchin metric The Kähler potential of the Atiyah-Hitchin manifold is obtained by the following generalized Legendre transformation: K(u,u̅,z,z̅) =F(x,z,z̅,v,v̅)-(uv+u̅v̅) , with the conditions (<ref>) and (<ref>). For the Atiyah-Hitchin manifold, they are reduced to ∂ F∂ v=u , ∂ F∂ x=0 . Here the Kähler metric satisfies the hyperkähler Monge-Ampère equation (<ref>). Let us consider the first condition in (<ref>). This yields u=-12·1√(z){π(x_+)+π(x_-)}-π i√(z)(a-1) . From this, we find uv+u̅v̅ =-{iv_+π(x_+)+v_-π(x_-)}+2(a-1)π v_+ . On the other hand, since we find ∂ F∂ x =-1h+4ω_1 , the second condition in (<ref>) gives 1h=4ω_1 . Substituting (<ref>), (<ref>), and (<ref>) into (<ref>), we find K=-8η_1+2(x_++x_-)ω_1 . From this Kähler potential, we shall derive the Kähler metric. We introduce the holomorphic coordinate U, Z: U=u√(z) , Z=2√(z) . The holomorphic coordinate change (u, z)↦(U, Z) preserves hyperkähler Monge-Ampère equation (<ref>), namely, ( [ K_UU̅ K_UZ̅; K_ZU̅ K_ZZ̅ ])=1 . In the following, we calculate K_ZZ̅, K_ZU̅, K_UZ̅, K_UU̅. From (<ref>), we have U = -12{π(x_+)+π(x_-)}-π i(a-1) , U̅ = -12{-π(x_+)+π(x_-)}+π i(a-1) . Their derivatives are dU = -12{dπ(x_+)+dπ(x_-)} , dU̅ = -12{-dπ(x_+)+dπ(x_-)} . Here, it is shown that dπ(x_±) have the following form dπ(x_±) =4A_± dx_± +8(x_±^2-Vη_1)y_±dω_1-8B_± dη_1 , where A_± = x_±ω_1+η_1y_± , B_± = x_±+Vω_1y_± , V=-3g_3ω_1+2g_2η_112η_1^2-g_2ω_1^2 . Its detailed derivation is given in Appendix C.2 in <cit.>. From (<ref>), we find dω_1=0. Therefore, dπ(x_±) reduces to dπ(x_±)=4A_±dx_±-8B_±dη_1 . Therefore, we obtain dU∓ dU̅=-4A_±dx_±+8B_±dη_1 . From |Z|^2=x_+-x_-, we have Z̅dZ+ZdZ̅=dx_+-dx_- . Utilizing (<ref>) and (<ref>), we have the following relation: A_-(dU-dU̅)-A_+(dU+dU̅) =-4A_+A_-(Z̅dZ+ZdZ̅)+8(A_-B_+-A_+B_-)dη_1 . We obtain dη_1 by solving this as dη_1=A_-(dU-dU̅)-A_+(dU+dU̅)+4A_+A_-(Z̅dZ+ZdZ̅)8(A_-B_+-A_+B_-) . Similarly, we have (-B_++B_-)dU-(B_++B_-)dU̅ =4(A_-B_+-A_+B_-)dx_±-4A_∓B_±(Z̅dZ+ZdZ̅) , from which, we find dx_± as dx_± =(-B_++B_-)dU-(B_++B_-)dU̅+4A_∓B_±(Z̅dZ+ZdZ̅)4(A_-B_+-A_+B_-) . We are ready to calculate the components of the Kähler metric. Differentiating (<ref>) with respect to Z, we find K_Z = 2Z̅-2A_+A_-+(A_-B_++A_+B_-)ω_1A_-B_+-A_+B_- =-2Z{2η_1+(x_++x_-)ω_1} . Further differentiating this with respect to Z̅, we obtain K_ZZ̅. Similarly, we can calculate K_UZ̅ and K_ZU̅. The component K_UU̅ is obtained by solving (<ref>) with respect to K_UU̅. The components are explicitly given by K_ZZ̅ = -2(A_+A_-)+2(A_-B_++A_+B_-)ω_1A_-B_+-A_+B_- , K_UZ̅ =-12Z̅A_--A_++2(-B_++B_-)ω_1A_-B_+-A_+B_- , K_ZU̅ =12ZA_-+A_++2(B_++B_-)ω_1A_-B_+-A_+B_- , K_UU̅ =1K_ZZ̅(1+K_ZU̅K_UZ̅) . As explained in Introduction, the Atiyah-Hitchin manifold has SO(3)-isometry. In Section <ref>, we consider its Lie subgroup SO(2) and construct a special Lagrangian submanifold invariant under that symmetry. § SPECIAL LAGRANGIAN SUBMANIFOLD IN CALABI-YAU MANIFOLD In this section, we review on special Lagrangian submanifolds in a Calabi-Yau manifold and explain our method to construct special Lagrangian submanifolds with a large symmetry, which was originally given by Joyce <cit.>. We first recall the definition of special Lagrangian submanifolds in a general setting. Next, we explain that any special Lagrangian submanifold is a calibrated submanifold <cit.>. From this, we obtain an equivalence condition that any Lagrangian submanifold becomes a special Lagrangian submanifold. 
Finally, we review the moment map approach to construct special Lagrangian submanifolds in a Calabi-Yau manifold in the case when the Calabi-Yau manifold admits a Lie group which acts on it Hamiltonianly. §.§ Definition of special Lagrangian submanifold In this subsection, we recall the definition of special Lagrangian submanifolds in the context of Calabi-Yau geometry. Let (M,J, ω,Ω) be a complex n-dimensional Calabi-Yau manifold and g be the Riemannian metric on M defined by g(X,Y)=ω(X,JY) for any vector fields X,Y. Here the Kähler form ω is a closed non-degenerate 2-form on M, so that (M,ω) becomes a symplectic manifold. Let L be a Lagrangian submanifold of M with respect to ω; namely, L is a real n-dimensional submanifold in M satisfying ω|_L=0 . The submanifold L carries the Riemannian metric induced from the metric g on M. We denote by dvol_L the volume form on L, which is an n-form on L. The restriction of the complex volume form Ω to L, which we write Ω|_L, is also an n-form on L. If L is oriented, then the function θ: L → S^1 = ℝ/2πℤ on L, called the phase function, is defined by: Ω|_L = e^-√(-1)θdvol_L . The de-Rham cohomology class of the closed 1-form dθ is known as the Maslov class of L. In this context, L is called a special Lagrangian submanifold if the phase function θ is constant; in particular, the de-Rham cohomology class of dθ then vanishes. §.§ Special Lagrangian submanifold as calibrated submanifold In the context of Riemannian geometry, Harvey and Lawson <cit.> introduced the notion of calibrated submanifolds. They also proved that any special Lagrangian submanifold is a calibrated submanifold when the ambient Riemannian manifold has a Calabi-Yau structure. §.§.§ Calibrated submanifold In this subsection, we first recall the definition of calibrated submanifolds and then give, from the viewpoint of calibrated submanifolds, a condition equivalent to a Lagrangian submanifold being special Lagrangian. Let (M,g) be a Riemannian manifold. A closed differential n-form η on M is called a calibration if it satisfies the following inequality: η(e_1,…,e_n)≤ 1 , where e_1,…,e_n are (oriented) orthonormal vectors on M. Then, for any n-dimensional submanifold L in M, the following relation holds: η|_L≤ dvol_L , where η|_L is the restriction of η to L. A calibrated submanifold L in M is defined as a submanifold which attains the equality in (<ref>) for all orthonormal bases (e_1,…,e_n) of T_pL, p∈ L, equivalently, η|_L=dvol_L . Let (M,J,g,Ω) be a Calabi-Yau manifold with complex dimension n. By using a constant θ∈ℝ, we define a differential n-form on the Calabi-Yau manifold M as η≡Re(e^√(-1)θΩ). It is known that this η gives a calibration on the Riemannian manifold (M,g) and that the corresponding calibrated submanifolds are special Lagrangian submanifolds with phase θ <cit.>. Furthermore, for a Lagrangian submanifold L, it can be shown that L is a special Lagrangian submanifold with phase θ if and only if L satisfies the following equation <cit.> Im(e^√(-1)θΩ)|_L=0 . In Section <ref>, we make use of this condition to construct special Lagrangian submanifolds in the Taub-NUT manifold and in the Atiyah-Hitchin manifold. §.§.§ Homological volume-minimality of calibrated submanifolds The study of special Lagrangian submanifolds is attractive not only in Calabi-Yau geometry but also in Riemannian geometry. In fact, it is known that any calibrated submanifold has a remarkable property called homological volume-minimality.
Let (M,g) be a Riemannian manifold and η∈Ω^n(M) be a calibration on M. We observe that any calibrated submanifold with respect to η is volume-minimizing in its homology class. Namely, if L is a calibrated submanifold in M with respect to η, then Vol(L)≤Vol(L') holds for any submanifold L' which is homologous to L. Here, Vol(L) and Vol(L') denote the volumes of L and L', respectively. To verify this inequality, let P be an (n+1)-dimensional submanifold with boundary ∂ P≡ L∪ (-L'). According to Stokes' theorem, we find 0=∫_Pdη =∫_∂ Pη =∫_Lη-∫_L'η . Thus, we have ∫_Lη=∫_L'η , from which the following relation holds: Vol(L) =∫_Ldvol_L =∫_Lη = ∫_L'η≤∫_L'dvol_L' =Vol(L') . Here, the second equality follows from the fact that L is a calibrated submanifold, and the inequality from the fact that η is a calibration. This shows that L has minimal volume in its homology class. In particular, a special Lagrangian submanifold also has this property when M is a Calabi-Yau manifold. §.§ Review on construction of Lagrangian submanifolds by moment map approach Let us consider the case when a Calabi-Yau manifold admits a Lie group which acts on it Hamiltonianly. In such a case, Joyce <cit.> proposed a method to construct special Lagrangian submanifolds in the Calabi-Yau manifold, which is called the moment map approach: the group action defines a moment map on the Calabi-Yau manifold. By this method, we get special Lagrangian submanifolds with a large symmetry. More precisely, in order to construct special Lagrangian submanifolds in the complex n-dimensional Calabi-Yau manifold M, it is sufficient to give real n-dimensional submanifolds satisfying (<ref>) and (<ref>). The condition (<ref>) gives Lagrangian submanifolds while the condition (<ref>) constrains them to be special Lagrangian submanifolds. By using the moment map, we rewrite the condition (<ref>) as shown in Subsection <ref>. The obtained Lagrangian submanifolds have symmetry of cohomogeneity one. Furthermore, we can solve (<ref>) with the use of the cohomogeneity-one symmetry, which simplifies (<ref>): this condition is in general a nonlinear partial differential equation, but it reduces to an ordinary differential equation thanks to the cohomogeneity-one symmetry. Thus, by solving the ordinary differential equation, we obtain special Lagrangian submanifolds in the Calabi-Yau manifold. In this subsection, we start by reviewing some basic notions related to the moment map approach to construct Lagrangian submanifolds. §.§.§ Cohomogeneity-one action Let M be a smooth manifold. Let H be a compact connected Lie group. We denote by e the identity element in H. A group action of H on M is defined as a smooth map φ: H × M → M such that φ(g, φ(h,p)) = φ(gh, p), φ(e,p) = p, p∈ M , g,h∈ H . In this case, for each h ∈ H, the map φ_h: M → M; p ↦φ(h, p) is a diffeomorphism on M. We denote φ(h,p) simply as h · p. For any element p ∈ M, the set H· p:={h· p| h∈ H} is called the H-orbit through p. The subgroup H_p:={h ∈ H | h · p = p} of H is called the isotropy subgroup at p. We also write the conjugacy class of H_p as [H_p]:={h H_p h^-1| h ∈ H}. Here, h H_p h^-1 coincides with the isotropy subgroup H_h· p at the point h· p. For two points p, q ∈ M, their H-orbits satisfy either H · p = H · q or (H · p) ∩ (H · q) = ∅. Therefore, we can define an equivalence relation based on whether two points p, q ∈ M belong to the same H-orbit.
The quotient space of M under this equivalence relation is called the orbit space, which is denoted as H\ M={H· p | p∈ M}. Among the H-orbits, the one with the maximum dimension is called the regular orbit, and those with dimensions lower than the regular orbit are called singular orbits. The H-action on M is said to have cohomogeneity one if the codimension of the regular orbit in M is equal to one. The orbit space of a cohomogeneity-one action becomes a 1-dimensional manifold (with boundaries). Clearly, in the case when M is m-dimensional, the dimensions of the regular orbits of any cohomogeneity-one action on M are equal to m-1. The rotation of the sphere S^2={x∈ℝ^3|x=1} around the z-axis defines an SO(2)-action on S^2. Each SO(2)-orbit is determined by the intersection of S^2 and the plane {z=k} parallel to the xy-plane. The orbits corresponding to -1<k<1 are the regular orbits, while the orbits corresponding to k=± 1 are the singular orbits. In particular, since the regular orbits are isomorphic to S^1, we find that this action is of cohomogeneity one. The orbit space for the SO(2)-action on S^2 becomes SO(2)\ S^2 ≅ [-1,1]. §.§.§ Symplectic action and Hamiltonian action We first recall the definition of symplectic action. Let (M, ω) be a symplectic manifold and H be a compact connected Lie group. An action φ:H× M→ M of H on (M,ω) is said to be symplectic, if the following equation holds: φ_h^*ω = ω , h∈ H , that is, for any p ∈ M and X,Y∈ T_pM, ω_φ_h(p)((φ_h)_*X, (φ_h)_*Y) = ω_p(X, Y), h∈ H . Let 𝔥 denote the Lie algebra of H, and exp: 𝔥→ H denote the exponential map of H. For each X ∈𝔥, the tangent vector field X^* on M is defined by: X^*_p = d/dt|_t=0φ(exp(tX), p), p ∈ M , which is called the fundamental vector field associated with X. Then, (<ref>) yields ℒ_X^*ω = 0 , for all X∈𝔥, where ℒ_X^* denotes the Lie derivative with respect to X^*. Furthermore, (<ref>) is equivalent that the 1-form ι_X^*ω is closed. Here, ι_X^*:Ω^k(M)→Ω^k-1(M) denotes the interior product of X^*: ι_X^*(α)(X_1, …, X_k-1) = α(X^*, X_1, …, X_k-1) , α∈Ω^k(M) , X_1,…,X_k-1∈𝔛(M) . Let Ad:H→GL(𝔥) denote the adjoint representation of H. Let 𝔥^* denote the dual vector space of 𝔥, that is, the space of linear functionals on 𝔥. We write ··:𝔥^*×𝔥→ℝ as the pairing between 𝔥^* and 𝔥. The coadjoint representation Ad^*: H →GL(𝔥^*) is defined by: ⟨Ad^*(h)X, Y ⟩ = ⟨ X, Ad(h^-1)Y ⟩, h ∈ H, X ∈𝔥^*, Y ∈𝔥. For a symplectic action φ: H × M → M on (M, ω), an H-equivariant map μ:M→𝔥^* satisfying the following relation is called a moment map: ι_X^*ω = dμ_X , X∈𝔥 , where μ_X is a function on M defined by μ(p)(X) = μ_X(p) , p∈ M , X∈𝔥 . Here, the H-equivariance of the moment map μ means μ(h · p) = Ad^*(h)(μ(p)), p ∈ M, h ∈ H. A symplectic action φ: H × M → M on (M, ω) is called a Hamiltonian action if there exists a moment map μ:M→𝔥^*. Translation and rotation within ℝ^3 are examples of Hamiltonian actions. Through this example, we can understand that the moment map is related to momentum and angular momentum, respectively. We identify the cotangent bundle T^*ℝ^3 of ℝ^3 with ℝ^6={(q,p)| q,p∈ℝ^3}. On T^*ℝ^3=ℝ^6, the standard symplectic form is defined as follows: ω=dq^1∧ dp^1+dq^2∧ dp^2+dq^3∧ dp^3. The group H=SO(3)={A∈ℝ^3× 3|^tAA=E_3} represents rotations around the origin in ℝ^3. We extend its action on ℝ^3 to that on T^*ℝ^3=ℝ^6, namely, φ: SO(3)×ℝ^6→ℝ^6;  (A,(q,p)) ↦ (Aq, Ap) . In what follows, we confirm that the action φ is a Hamiltonian action. 
Let J_i (i=1,2,3) denote the element in the Lie algebra 𝔥=𝔰𝔬(3) defined by J_1 =( [ 0 0 0; 0 0 1; 0 -1 0 ]) , J_2 =( [ 0 0 1; 0 0 0; -1 0 0 ]) , J_3 =( [ 0 -1 0; 1 0 0; 0 0 0 ]) , which consist of a basis of 𝔥. Then we identify 𝔰𝔬(3) with ℝ^3 as vector spaces via J_i↦ e_i, where {e_i}_i=1,2,3 denotes the standard basis of ℝ^3. For an element a=a^1J_1+a^2J_2+a^3J_3=(a^1,a^2,a^3) in the Lie algebra 𝔥, the basic vector field a^* can be expressed using the cross product × on ℝ^3 as follows: a^*_(q,p) = (a× q, a× p) =(a^2q^3-a^3q^2,-a^1q^3+a^3q^1,a^1q^2-a^2q^1, = a^2p^3-a^3p^2,-a^1p^3+a^3p^1,a^1p^2-a^2p^1) . Hence the interior product ι_a^*ω can be written as: ι_a^*ω =ι_(a× q,a× p)(dq^1∧ dp^1+dq^2∧ dp^2+dq^3∧ dp^3) =a^1(q^2dp^3+p^3dq^2-q^3dp^2-p^2dq^3) = +a^2(q^3dp^1+p^1dq^3-q^1dp^3-p^3dq^1) = +a^3(q^1dp^2+p^2dq^1-q^2dp^1-p^1dq^2) . On the other hand, we define the mapping μ: ℝ^6→𝔥^*=𝔰𝔬(3)^* as follows: μ(q,p)(a) =q× pa =a^1(q^2p^3-q^3p^2)+a^2(-q^1p^3+q^3p^1)+a^3(q^1p^2-q^2p^1) , for (q,p)∈ℝ^6, a=(a^1,a^2,a^3)∈ℝ^3≅𝔥, where ·· denotes the standard inner product on ℝ^3. Then it follows from (<ref>) and (<ref>) that dμ_a=ι_a^*ω holds. We denote by {J^i}_i=1,2,3 the dual basis of 𝔥^*=𝔰𝔬(3)^* for the above basis {J_i}_i=1,2,3. The correspondence J^i↦ e_i yields an identification between 𝔥^*=𝔰𝔬(3)^* and ℝ^3. Hence we have μ(A· (q,p))(a) = μ(Aq,Ap)(a) =A(q× p)a =(Ad^*(A)μ(q,p))(a) , for all A∈ SO(3). This shows the SO(3)-equivariance of μ. Consequently, we have confirmed that the action φ is Hamiltonian. §.§.§ Moment map approach to construct Lagrangian submanifolds The condition (<ref>) is nothing but the requirement for a submanifold to be isotropic. We explain the isotropicity in terms of the moment map. Our argument is based on <cit.>. Let (M, ω) be a real 2n-dimensional symplectic manifold and H be a compact connected Lie group which acts Hamiltonianly on (M,ω). We denote by μ: M →𝔥^* the moment map associated with this Hamiltonian action. Let L be a connected, H-invariant submanifold in M. The H-invariance of L implies that the fundamental vector fields X^* (X∈𝔥) are tangent to the submanifold L. In the case when L is isotropic, for any X∈𝔥, the smooth function μ_X∈ C^∞(M) defined in (<ref>) is constant on L because dμ_X(Y)=ι_X^*ω(Y)=ω(X^*,Y)=0, Y∈𝔛(L). Here, we have used the isotropicity of L in the last equality. Hence, by (<ref>), the moment map μ: M →𝔥^* is also constant on L, that is, there exists c∈𝔥^* such that L is contained in the level set μ^-1(c):={p ∈ M |μ(p)=c}, L⊂μ^-1(c) . Then, for an element p in L, we write c=μ(p). For any h∈ H, by the H-equivariance of μ, we have Ad^*(h)c =Ad^*(h)μ(p) =μ(h· p). Since L is H-invariant, we have μ(h· p)=c, so that we obtain Ad^*(h)c=c. This means that the element c is contained in the subset Z(𝔥^*) := {ξ∈𝔥^* |Ad^*(h) ξ = ξ, h ∈ H} of 𝔥^*. Conversely, for any connected, H-invariant submanifold L in M satisfying (<ref>) for some c∈ Z(𝔥^*), it is shown that, if the H-action on L induced from that on M is cohomogeneity one, then L becomes an isotropic submanifold in (M,ω). Indeed, (<ref>) yields ω(X^*,Y)=dμ_X(Y)=0 , X∈𝔥,Y∈𝔛(L). Let p∈ L be an element in L such that the H-orbit through this point is regular. The space span_ℝ{X^*_p| X∈𝔥} gives a codimension one subspace of T_pL. Let v∈ T_pL be a non-zero vector which is orthogonal to span_ℝ{X^*_p| X∈𝔥}. Since any tangent vector Y_p∈ T_pL is decomposed into Y_p=X^*+a v for some X∈𝔥 and a∈ℝ, we have ω_p(v,Y_p)=ω_p(v,X^*)+a ω_p(v,v)=0. This implies that ω_p is isotropic at p∈ L. 
By the arbitrariness of p, ω vanishes on a open subset of L. By the connectedness of L, ω vanishes on the whole L, and therefore L is isotropic. From the above argument, in order to obtain Lagrangian submanifolds, it is sufficient to construct connected, H-invariant submanifolds L in M satisfying (<ref>) and 2(L)=(M). §.§ Construction of special Lagrangian submanifolds by using moment map Let (M, J, ω, Ω) be a Calabi-Yau manifold. Let H be a compact connected Lie group which acts Hamiltonianly on M. Then, we write μ:M→𝔥^* as the moment map associated with this Hamiltonian action. As discussed in Sections <ref> and <ref>, we may assume that M has dimension 4, and that H is a 1-dimensional Lie subgroup of the isometry group of M. First, based on the argument in Subsection <ref>, we explain the method for constructing Lagrangian submanifolds in (M,ω). Let c ∈ Z(𝔥^*). We consider a curve ℓ: I → M; t ↦ℓ(t), in M such that its image is contained in μ^-1(c), and at each point, the curve intersects the H-orbit passing through that point transversally. The later condition guarantees that L = H ·ℓ := {h ·ℓ(t) | h ∈ H, t ∈ I}, gives a 2-dimensional (i.e., half dimensional) submanifold in M. By the H-equivariance of the moment map μ, the former condition implies that L is contained in μ^-1(c). Since the action of H on L has cohomogeneity one, L becomes a Lagrangian submanifold as shown in Subsection <ref>. Next, we find special Lagrangian submanifolds among such Lagrangian submanifolds L, i.e., those satisfying the condition (<ref>). If the condition (<ref>) is satisfied at a point p of L, then it is satisfied at every point of the orbit H · p. Therefore, it is sufficient to demonstrate that the condition is satisfied at the point ℓ(t) on the curve ℓ for each t ∈ I. Since H is 1-dimensional, it can be expressed as H={exp(tX) | t ∈ℝ} for some non-zero X ∈𝔥. In this case, the tangent space T_ℓ(t)L of L at the point ℓ(t) is spanned by X^*_ℓ(t) and the velocity vector ℓ̇(t) of the curve ℓ since L has cohomogeneity one. Therefore, the condition (<ref>) can be expressed as: Im(e^√(-1)θΩ(ℓ̇(t),X^*_ℓ(t)))=0 . From the above argument, we have constructed a special Lagrangian submanifold L=H ·ℓ for the phase θ by giving a solution curve ℓ for the ordinary differential equation (<ref>). § CONSTRUCTION OF SPECIAL LAGRANGIAN MANIFOLDS IN THE TAUB-NUT MANIFOLD In this section, we construct special Lagrangian submanifolds in the Taub-NUT manifold according to Subsections <ref> and <ref>. §.§ The U(1) tri-holomorphic isometry §.§.§ Deriving the conditions for being special Lagrangian The following vector field X generates a 1-parameter transformation group H≅ U(1) that acts Hamiltonianly on the Taub-NUT manifold M: X=i( ∂∂ u-∂∂u̅) . In fact, the following function μ gives the corresponding moment map (up to constant): μ=12x . In order to verify this, since U(1) is abelian, it is sufficient to show that ι_Xω=dμ. To do this, we express ι_Xω and dμ as linear combinations of du,du̅,dz and dz̅ and verify that each component is the same. Direct calculation yields ι_X(du∧ du̅)=i(du+du̅) , ι_X(du ∧ dz̅) = idz̅ , ι_X(dz ∧ du̅) = idz , ι_X(dz ∧ dz̅) = 0 . Hence, by (<ref>), we have ι_Xω =i2{ K_uu̅ι_X(du∧ du̅) +K_uz̅ι_X(du∧ dz̅) +K_zu̅ι_X(dz∧ du̅) +K_zz̅ι_X(dz∧ dz̅) } =12F_xx^-1((du+du̅)-F_xzdz-F_xz̅dz̅) . Here, we have used (<ref>)–(<ref>) in the last equality. On the other hand, (<ref>) yields dx = ∂ x∂ udu + ∂ x∂u̅du̅ + ∂ x∂ zdz + ∂ x∂z̅dz̅ = F_xx^-1((du + du̅) - F_xzdz - F_xz̅dz̅) . 
Thus, we obtain the moment map as (<ref>). Following Subsection <ref>, we first consider a curve in the Taub-NUT manifold M, which we write ℓ:I → M; t↦ℓ(t) =(u(t),u̅(t),z(t),z̅(t)) , where I is an open interval in ℝ. We set L=H·ℓ and define tangent vectors v_1,v_2∈ T_ℓ(t)L as follows: v_1 = X_ℓ(t)=i(∂∂ u-∂∂u̅) , v_2 = ℓ̇(t) =u̇∂∂ u+u̇̅̇∂∂u̅+ż∂∂ z+ż̅̇∂∂z̅ . In order to guarantee that ℓ intersects transversally the H-orbit H·ℓ(t) at ℓ(t) for each t∈ I, {v_1,v_2} must be linearly independent, equivalently, Re(u̇(t))≠ 0 . Then {v_1,v_2} provides a basis for T_ℓ(t)L. From (<ref>), if x takes a constant value c∈ℝ≅ Z(𝔲(1)^*) on ℓ, then L=H ·ℓ is contained in μ^-1(c). This means that L becomes a Lagrangian submanifold in M. Next, we rewrite the condition (<ref>) for L. For simplicity, we may assume that the phase θ=0. Then, we have: Ω(v_1,v_2) =du∧ dz(v_1,v_2) =iż(t) , so that Im(Ω(v_1,v_2))=0 becomes: Re(ż(t))=0 , equivalently, Re(z(t))=constant . From the above argument we have obtained the conditions for ℓ such that L=H·ℓ is a special Lagrangian submanifold in M. The conditions we obtained are the same ones derived in <cit.> [Our definition of Ω is different from one in <cit.> by √(-1). Considering this, our conditions and the conditions in <cit.> are the same.]. Note that we have derived them based on the generalized Legendre transform approach together with the moment map technique. §.§.§ Analysis of the condition for being special Lagrangian In order for L=H ·ℓ to be a special Lagrange submanifold in the Taub-NUT manifold M, the curve ℓ=(u, u̅, z, z̅) must satisfy the following conditions: Re(u) ≠constant , and x = constant, Re(z) = constant. We would like to give curves ℓ satisfying (<ref>) and (<ref>). In what follows, we rewrite (<ref>) in terms of special coordinates (r,θ,ϕ,ψ), and then obtain some solution curves of (<ref>) in the (ϕ, r)-plane and the (ϕ, θ)-plane, respectively. We define: r sinθcosϕ = z + z̅, r sinθsinϕ = -i(z - z̅), r cosθ = x, -2mψ = Im(u) . (0 ≤θ≤π,  0≤ϕ≤ 2π,  0≤ψ≤ 4π) Combined with (<ref>), the holomorphic coordinates z and u are expressed as follows: z = 12rsinθ e^iϕ, u = -2mi ψ -2rcosθh - 2m log1+cosθsinθ . Hence (<ref>) and (<ref>) are rewritten as: rcosθh + m log1+cosθsinθ≠constant , and r cosθ = c_1 , 12r sinθcosϕ = c_2 , where c_1 and c_2 are arbitrary constants. Note that no conditions are imposed on ψ for L=H ·ℓ to be a special Lagrangian submanifold in the Taub-NUT manifold. Case 1: The solution curves of (<ref>) in the (ϕ,r)-plane In order to eliminate θ from (<ref>), we rewrite the first equation in (<ref>) as follows: r^2 sin^2 θ = r^2 - c_1^2. Then, from the second equation in (<ref>) we get 14(r^2-c_1^2)cos^2ϕ=c_2^2 , so that we have cosϕ = 2c_2/√(r^2 - c_1^2). By giving constant values c_1 and c_2, we obtain the solution curves of (<ref>) in the (ϕ, r)-plane. In particular, such solution curves satisfy ϕ→π2 , 32π (as r→∞) . In Figure <ref>, we show the solution curves obtained from the specific values of c_1 and c_2. The left part of this figure shows the solution curves for c_1 = 1, …, 5 and c_2 = 1/2, and the right part shows the solution curves for c_1 = 1 and c_2 = 1/5, 2/5, …, 1. Then, for each solution curve ℓ, the Lagrangian submanifold L = H ·ℓ is a special Lagrangian submanifold in the Taub-NUT manifold M. Case 2: The solution curves in the (ϕ,θ)-plane Similar calculations as in Case 1 allow us to eliminate r from (<ref>). Namely, we obtain cosϕ =2c_2c_1·1tanθ=ctanθ , where c=2c_2/c_1. 
Thus, for each c, we have the solution curves of (<ref>) in the (ϕ, θ)-plane. In Figure <ref>, each curve corresponds to the solution curves of (<ref>) for c=1,2,…,5, which represent special Lagrangian submanifolds in the Taub-NUT manifold M. §.§ The non-tri-holomorphic SO(3) isometry §.§.§ Deriving the conditions for being special Lagrangian We consider the vector field X=-2i( z∂∂ z-z̅∂∂z̅) , which generates a one-parameter transformation group H≅ SO(2) that acts Hamiltonianly on the Taub-NUT manifold. In fact, the following function μ gives the corresponding moment map (up to constant): μ =2mr+12h· 4|z|^2 . A similar argument in (<ref>) yields (<ref>), that is, ι_Xω=dμ holds. Indeed, by direct calculations, we have ι_Xω =K_uz̅z̅du+K_zu̅zdu̅+K_zz̅(z̅dz+zdz̅) . On the other hand, by differentiating the both side of (<ref>), we have dμ =2m dr+12h(4z̅dz+4zdz̅) . By differentiating the both sides of the second equation in (<ref>), we get dr=xrdx +2z̅rdz +2zrdz̅ . From (<ref>)–(<ref>), we have dx = -12V^-1du -12V^-1du̅+mxr|z|^2V^-1z̅dz +mxr|z|^2V^-1zdz̅ . Substituting this into (<ref>), we have dr =-12xrV^-1du-12xrV^-1du̅ +(mx^2r^2|z|^2V^-1+2r)z̅dz +(mx^2r^2|z|^2V^-1+2r)zdz̅ . Thus, we obtain dμ = -mxrV^-1du-mxrV^-1du̅ +(2m^2x^2r^2|z|^2V^-1+4mr+2h)(z̅dz+zdz̅) =K_uz̅z̅du+K_zu̅zdu̅+K_zz̅(z̅dz+zdz̅) , which is equal to ι_Xω as shown in (<ref>). Following Subsection <ref>, we give special Lagrangian submanifolds with cohomogeneity-one symmetry H. We first consider a curve ℓ in the Taub-NUT manifold M, ℓ:I → M; t↦ℓ(t)=(u(t),u̅(t),z(t),z̅(t)) . We set L=H·ℓ and define v_1,v_2∈ T_ℓ(t)L as follows: v_1 = X_ℓ(t)=-2i(z∂∂ z-z̅∂∂z̅) , v_2 = ℓ̇(t) =u̇∂∂ u+u̇̅̇∂∂u̅+ż∂∂ z+ż̅̇∂∂z̅ . In order to guarantee that ℓ intersects transversally the H-orbit H·ℓ(t) at ℓ(t) for each t, {v_1,v_2} must satisfy |z(t)| ≠constant . From (<ref>), for any constant value c∈ℝ≅ Z(𝔰𝔬(2)^*), if 2mr+12h· 4|z|^2 = c , then L becomes a Lagrangian submanifold in M. Next, we rewrite the condition (<ref>) for L to be a special Lagrangian submanifold in M. We calculate Ω(v_1, v_2): Ω(v_1,v_2) =du∧ dz(v_1,v_2) =2iz(t)u̇(t) . Hence (<ref>) with θ=0 is equivalent to Re(z(t)u̇(t))=0 . From the above argument, we have obtained the conditions for ℓ such that L=H·ℓ is a special Lagrangian submanifold in the Taub-NUT manifold M. §.§.§ Analysis of the condition for being special Lagrangian In a similar manner in Subsection <ref>, we give some solution curves ℓ for 2mr+12h· 4|z|^2=constant , Re(zu̇)=0 . The first equation in (<ref>) can be expressed by the spherical coordinates (<ref>) as follows: 2mr+12hr^2sin^2θ=c_1 , where c_1 is a constant. On the other hand, it is difficult to solve the second equation in (<ref>) in general. In what follows, we restrict our considerations to the case where u̇(t) is a constant: u̇(t)=c_2. (c_2: constant) Case 1: c_2 is a real number In this case, u takes real valued. Hence, we have ψ=0 , by the fourth equation in (<ref>). Also, the second equation in (<ref>) yields Re(z)=0 . By using (<ref>) this equation is rewritten as follows: rsinθcosϕ=0 . Therefore, we get θ=0,π or ϕ=π2,32π . Substituting this into (<ref>) we have 2mr=c_1 (θ=0 , π) , 2mr+12hr^2sin^2θ=c_1 (ϕ=π2 , 32π) . From the above arguments, we obtain special Lagrangian submanifolds L=H·ℓ for the solution curves ℓ(t)=(r(t),θ(t),ϕ(t),ψ(t)), ℓ(t)=(c_1/2m,0,ϕ(t),0) , (c_1/2m,π,ϕ(t),0) , or ℓ(t)=(r(t),θ(t),π2,0) , (r(t),θ(t),32π,0) , where r(t) and θ(t) satisfy the equation 2mr+12hr^2sin^2θ=c_1 . 
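Such level sets are easy to trace numerically. A minimal sketch in Python with NumPy and Matplotlib, assuming that for m=h=1 the constraint above reads 2r + r^2 sin^2θ/2 = c_1 (the grid and plotting choices are ours), solves this quadratic relation for r as a function of θ and draws the curves for c_1=1,…,10:

import numpy as np
import matplotlib.pyplot as plt

m = 1.0                                   # m = h = 1 as in the figure below
theta = np.linspace(1e-3, np.pi - 1e-3, 400)

for c1 in range(1, 11):
    a = 0.5*np.sin(theta)**2              # coefficient of r^2
    b = 2.0*m                             # coefficient of r
    r = (-b + np.sqrt(b**2 + 4.0*a*c1))/(2.0*a)   # positive root of a r^2 + b r = c_1
    plt.plot(r, theta)

plt.xlabel("r")
plt.ylabel("theta")
plt.show()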
Here, in the case when m=h=1, Figure <ref> shows the solution curves of (<ref>) in the (r,θ)-plane for c_1=1,…,10. Case 2: c_2 is a pure imaginary number We set c_2=ic for some real constant c. Then, we get u(t)=ict by (<ref>), so that the fourth equation of (<ref>) yields ψ=-(c/2m)t. The second equation in (<ref>) also becomes Im z=0 , by (<ref>), we get rsinθsinϕ=0 . A similar argument in Case 1 shows that L=H·ℓ gives special Lagrangian submanifolds for the solution curves, ℓ(t)=(c_1/2m,0,ϕ(t),-(c/2m)t) , (c_1/2m,π,ϕ(t),-(c/2m)t) , or ℓ(t)=(r(t),θ(t),0,-(c/2m)t) , (r(t),θ(t),π,-(c/2m)t) , where r(t) and θ(t) satisfy (<ref>). We can find the solution curves of (<ref>) in the (r,θ)-plane for each c_1. § CONSTRUCTION OF SPECIAL LAGRANGIAN SUBMANIFOLDS IN THE ATIYAH-HITCHIN MANIFOLD In this section, we construct special Lagrangian submanifolds in Atiyah-Hitchin manifold according to Subsections <ref> and <ref>. §.§ Deriving the conditions for being special Lagrangian Let M be the Atiyah-Hitchin manifold. We denote by ω its Kähler form on M. In terms of the holomorphic coordinates (U, Z) defined in (<ref>), the holomorphic volume form Ω is expressed as Ω=du∧ dz=dU∧ dZ. Then, we have verified that ω and Ω satisfy the Calabi-Yau condition (<ref>) in Subsection <ref>. We consider the vector field X=-2i( Z∂∂ Z-Z̅∂∂Z̅) , which generates one parameter transformation group H≅ SO(2) and acts Hamiltonianly on the Atiyah-Hitchin manifold. As shown in the case of the Taub-NUT manifold, the corresponding moment map μ is given (up to constant) as follows: μ=-4η_1-2(x_++x_-)ω_1 . Indeed, it is sufficient to show ι_Xω_AH=dμ by verifying that the components of the both sides are the same. A direct calculation shows ι_Xω = K_UZ̅Z̅dU +K_ZU̅ZdU̅ +K_ZZ̅Z̅dZ +K_ZZ̅ZdZ̅ . On the other hand, since (<ref>) yields dω_1=0 , we get dμ=-4dη_1-2(dx_++dx_-)ω_1 . We can read off the components dη_1 and dx_± by using (<ref>) and (<ref>), respectively. (The dU-component of dμ) =-A_--A_+2(A_-B_+-A_+B_-)--B_++B_-A_-B_+-A_+B_-ω_1 =K_UZ̅Z̅ Here, we have used (<ref>) in the second equality. By using (<ref>) and (<ref>), it can be verified that (The dU̅-component of dμ) = K_ZU̅Z , (The dZ-component of dμ) = K_ZZ̅Z̅ , (The dZ̅-component of dμ) = K_ZZ̅Z , so that ι_Xω_AH=dμ holds. Following Subsection <ref>, we give special Lagrangian submanifolds with cohomogeneity-one symmetry H in the Atiyah-Hitchin manifold M. We first consider a curve ℓ in the Atiyah-Hitchin manifold M, ℓ:I → M; t↦ℓ(t)=(U(t),U̅(t),Z(t),Z̅(t)) . We set L=H·ℓ and define v_1,v_2∈ T_ℓ(t)L as follows: v_1 = X_ℓ(t)=-2i(Z∂∂ Z-Z̅∂∂Z̅) , v_2 = ℓ̇(t)= U̇∂∂ U +U̇̅̇∂∂U̅ +Ż∂∂ Z +Ż̇∂∂Ż̅̇ . In order to guarantee that ℓ intersects transversally the H-orbit H·ℓ(t) at ℓ(t) for each t, {v_1,v_2} must satisfy |Z(t)| ≠constant . From (<ref>), for any constant value c∈ℝ≅ Z(𝔰𝔬(2)^*), if -4η_1-2(x_++x_-)ω_1 = c , then L becomes a Lagrangian submanifold in M. Next, we rewrite the condition (<ref>) for L to be a special Lagrangian submanifold in M. We have Ω(v_1, v_2)=2iZ(t)U̇(t) . Hence (<ref>) with θ=0 is equivalent to Re(Z(t)U̇(t))=0 . From the above argument we have obtained the conditions for ℓ such that L=H·ℓ is a special Lagrangian submanifold in the Atiyah-Hitchin manifold M. Here, under the variable transformation (<ref>), (<ref>) is rewritten as: Re(2u̇(t)z(t)+u(t)ż(t))=0 . 
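Since U=u√(z) and Z=2√(z), the equivalence between Re(ZU̇)=0 and Re(2u̇z+uż)=0, as well as the identity dU∧ dZ=du∧ dz used above, can be checked symbolically. A minimal sketch in Python with SymPy (variable names are ours):

import sympy as sp

t = sp.symbols('t', real=True)
u, z = sp.Function('u')(t), sp.Function('z')(t)
U, Z = u*sp.sqrt(z), 2*sp.sqrt(z)

# Z * dU/dt equals 2 u' z + u z', so Re(Z U') = 0 iff Re(2 u' z + u z') = 0
assert sp.simplify(Z*sp.diff(U, t) - (2*sp.diff(u, t)*z + u*sp.diff(z, t))) == 0

# the Jacobian of (u, z) -> (U, Z) has determinant 1, hence dU ^ dZ = du ^ dz
uu, zz = sp.symbols('u z', positive=True)
UU, ZZ = uu*sp.sqrt(zz), 2*sp.sqrt(zz)
J = sp.Matrix([[sp.diff(UU, uu), sp.diff(UU, zz)],
               [sp.diff(ZZ, uu), sp.diff(ZZ, zz)]])
assert sp.simplify(J.det() - 1) == 0
print("coordinate change consistent")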
§.§ Analysis of the conditions for being special Lagrangian In order for the Lagrangian submanifold L=H·ℓ to be a special Lagrange submanifold in the Atiyah-Hitchin manifold M, the curve ℓ=ℓ(t) must satisfy the following conditions: |Z|≠constant and -4η_1-2(x_++x_-)ω_1=constant , Re(ZU̇)=0 . It is difficult to solve this equation for the curve ℓ in general. For simplicity, we give some special solutions to (<ref>). We first express (<ref>) in terms of the spherical coordinates. Here, the representations of z,v and x by the spherical coordinates (k,θ,ϕ,ψ) are given by <cit.> z =2e^2iϕ(cos2ψ(1+cos^2θ)+2isin2ψcosθ +(2k^2-1)sin^2θ)K^2(k) , v =8e^iϕsinθ(sin2ψ -icos2ψcosθ+i(2k^2-1)cosθ)K^2(k) , x =4(-3cos2ψsin^2θ+(2k^2-1)(1-3cos^2θ)) K^2(k) , where the ranges for θ, ϕ, ψ are the same ones in (<ref>). By (<ref>) and the first equation of (<ref>), we get ∂ F∂ x =-1h+4√(ρ)K(k) , from which the second equation of (<ref>) yields ρ=16h^2K^2(k) . Then, η_1 is expressed as -4η_1 =4√(ρ){ e_1K(k)-ρ E(k) } =1hK(k){ -16h^23(k^2-2)K^3(k)-16h^2K^2(k)E(k) } =-16hK(k){13(k^2-2)K(k)+E(k) } . Here, we have used <cit.> in the first equality. From (<ref>), we have x_± = K^2(k)3[-12cos2ψsin^2θ+4(2k^2-1)(1-3cos^2θ) ±12{(cos2ψ(1+cos^2θ)+(2k^2-1)sin^2θ)^2+4sin^22ψcos^2θ}^1/2] , so that we find -2(x_++x_-)ω_1 =43hK^2(k){ 3cos2ψsin^2θ-(2k^2-1)(1-3cos^2θ) } . Therefore, the first equation of (<ref>) is expressed as -16hK(k){13(k^2-2)K(k)+E(k) } + 43hK^2(k){ 3cos2ψsin^2θ-(2k^2-1)(1-3cos^2θ) }=c_1 , where c_1 is a constant. In what follows, we restrict our considerations to the case where U̇(t) is a real constant: U̇(t)=c_2 (c_2: constant) . This can be expressed by using the holomorphic coordinate (u,z) as u√(z)=c_2t+c_3 . Then, the second equation of (<ref>) is rewritten as: Re√(z)=0 . By using the spherical coordinates as in (<ref>), (<ref>) is also rewritten as e^iϕK(k){cos2ψ(1+cos^2θ)+(2k^2-1)sin^2θ+2isin2ψcosθ}^1/2 +e^-iϕK(k){cos2ψ(1+cos^2θ)+(2k^2-1)sin^2θ-2isin2ψcosθ}^1/2=0 . We are now ready to solve the conditions (<ref>) written by the spherical coordinates, namely, (<ref>) and (<ref>). We derive curve solutions satisfying these conditions. We consider two cases. In the first case, we eliminate ψ by substituting (<ref>) into (<ref>) and so two conditions reduce to one condition. We show the solution curves of that condition in (θ,ϕ)-plane with fixing k. In the second case, we plot the solution curves for the same condition in (θ, k)-plane with fixing ϕ. ψ can be eliminated by solving (<ref>) with respect to cos2ψ: cos2ψ =13sin^2θ{ (2k^2-1)(1-3cos^2θ) +3h4K^2(k)( c_1+16hK(k)( 13(k^2-2)K(k)+E(k) ) ) } . Case 1: Solution curves in (θ,ϕ)-plane We can eliminate ψ from (<ref>) by using (<ref>). Figure <ref> shows solution curves for the obtained equation with respect to (θ,ϕ) with h=1, k=0.3, 0.5, 0.7 and c_1=0, ±1, …, ±10. Case 2: Solution curves in (θ,k) We make use the same equation in Case 1. Then, Figure <ref> shows solution curves for the equation with respect to (θ,k) with ϕ=π/6, π/4, π/3, c_1=0, ±1, …, ±10 and h=1. § SUMMARY AND CONCLUSION Calibrated submanifolds have been actively studied in both mathematics and physics since their introduction by Harvey and Lawson <cit.>. As one class of calibrated submanifolds, this paper have discussed special Lagrangian submanifolds. Especially, we have focused on the construction of special Lagrangian submanifolds in the Taub-NUT manifold and the Atiyah-Hitchin manifold. 
In order to construct them, we have combined the generalized Legendre transform approach and the moment map technique. We have seen that the generalized Legendre transform approach naturally gives the description of their Calabi-Yau structures in terms of holomorphic coordinates, which are useful to employ the moment map technique and to describe the conditions for submanifolds to be special Lagrangian submanifolds. We have reviewed the necessary tools for the moment map technique such as the Kähler metrics of the Taub-NUT manifold and the Atiyah-Hitchin manifold which are obtained in the generalized Legendre transform approach. While they are derived from the F-functions given in <cit.> and <cit.>, respectively, their detailed derivation was especially necessary for the Atiyah-Hitchin manifold. That has been worked out in our previous paper <cit.> and we have reviewed this in Subsection <ref>. In this paper, we have explained the moment map technique including the underlying concepts like Hamiltonian actions and cohomogeneity-one actions. By the moment map technique, we have derived the condition for a submanifold L which is obtained as a one-parameter family of the orbits for an Hamiltonian action to be special Lagrangian. In the derivation, we have evaluated the holomorphic volume form of such submanifolds which are readily obtained in the generalized Legendre transform approach. By applying the moment map technique to the Taub-NUT manifold and the Atiyah-Hitchin manifold, we have derived conditions for L to be special Lagrangian which are described by ODEs with respect to the one parameter. These ODEs are difficult to solve by quadrature methods in general. We have solved the ODEs numerically and have plotted solutions curves, which correspond to the cohomogeneity-one special Lagrangian submanifolds in the above manifolds. §.§.§ Acknowledgements This work is supported in part by JSPS Grant-in-Aid for Scientific Research KAKENHI Grant No. JP21K03565 (M. A.). 99 HL F. R. Harvey and H. B. Lawson, “Calibrated geometries," Acta Math. 148 (1982), 47-157. HL2 F. R. Harvey, Spinors and calibrations, Academic Press, 1990. HL3 F. R. Harvey and M. L. Michelsohn, Spin geometry, Princeton University Press, 1989. Figueroa-OFarrill:1998kci J. M. Figueroa-O'Farrill, “Intersecting brane geometries,” J. Geom. Phys. 35 (2000), 99-125 [arXiv:hep-th/9806040 [hep-th]]. SYZ A. Strominger, S. T. Yau and E. Zaslow, “Mirror symmetry is T-duality," Nucl. Phys. B 479 (1996) 243-259. mirror2 D. R. Morrison, “The Geometry underlying mirror symmetry,” [arXiv:alg-geom/9608006 [math.AG]]. ENOOST M. Eto, Y. Isozumi, M. Nitta, K. Ohashi, K. Ohta, N. Sakai and Y. Tachikawa, “Global structure of moduli space for BPS walls,” Phys. Rev. D 71 (2005), 105009 [arXiv:hep-th/0503033 [hep-th]]. joyce D. Joyce, “Special Lagrangian m-folds in ℂ^m with symmetries," Duke Math. J. 115 (2002) 1-51. stenzel1 M. B. Stenzel, “Kähler Structures on the cotangent bundles of real analytic riemannian manifolds," Ph.D. Thesis, Massachusetts Institute of Technology, 1990. stenzel2 M. B. Stenzel, “Ricci-flat metrics on the complexification of a compact rank one symmetric space," Manuscripta Math. 80 (1993), no. 2, 151-163. anciaux H. Anciaux, “Special Lagrangian submanifolds in the complex sphere," Ann. Fac. Sci. Toulouse Math. (6), 16, (2007), no. 2, 215-227. IM M. Ionel and M. Min-Oo, “Cohomogeneity one special Lagrangian 3-folds in the deformed and the resolved conifolds," Illinois J. Math. 52 (2008), no. 3, 839-865. HS K. Hashimoto and T. 
Sakai, “Cohomogeneity one special Lagrangian submanifolds in the cotangent bundle of the sphere," Tohoku Math. J. (2) 64 (2012), no. 1, 141-169. HM K. Hashimoto and K. Mashimo, “Special Lagrangian submanifolds invariant under the isotropy action of symmetric spaces of rank two," J. Math Soc. Japan 68 (2016), 839-862. AB1 M. Arai and K. Baba, “Special Lagrangian Submanifolds and Cohomogeneity One Actions on the Complex Projective Space," Tokyo J. Math. 42 (2019), 255-284. Koike N. Koike, “Calabi-Yau structures and special Lagrangian submanifolds of complexified symmetric spaces," Illinois Journal of Mathematics 63 (2019), 575-600. Eguchi:1978gw T. Eguchi and A. J. Hanson, “Selfdual Solutions to Euclidean Gravity,” Annals Phys. 120 (1979), 82. Gibbons:1979xm G. W. Gibbons and S. W. Hawking, “Classification of Gravitational Instanton Symmetries,” Commun. Math. Phys. 66 (1979), 291-310. Taub:1950ez A. H. Taub, “Empty space-times admitting a three parameter group of motions,” Annals Math. 53 (1951), 472-490. Newman:1963yy E. Newman, L. Tamburino and T. Unti, “Empty space generalization of the Schwarzschild metric,” J. Math. Phys. 4 (1963), 915. Atiyah:1988jp M. F. Atiyah and N. J. Hitchin, “The geometry and dynamics of magnetic monopoles,” Princeton University Press, Princeton, NJ, 1988. Noda T. Noda, “A special Lagrangian fibration in the Taub-NUT space," J. Math. Soc. Japan 60(3), (2008) 653-663. Lindstrom:1983rt U. Lindstrom and M. Rocek, “Scalar Tensor Duality and N=1, N=2 Nonlinear Sigma Models,” Nucl. Phys. B 222 (1983), 285-308. Hitchin:1986ea N. J. Hitchin, A. Karlhede, U. Lindstrom and M. Rocek, “Hyperkahler Metrics and Supersymmetry,” Commun. Math. Phys. 108 (1987), 535. Karlhede:1986mg A. Karlhede, U. Lindstrom and M. Rocek, “Hyperkahler Manifolds and Nonlinear Supermultiplets,” Commun. Math. Phys. 108 (1987), 529. Lindstrom:1987ks U. Lindstrom and M. Rocek, “New Hyperkahler Metrics and New Supermultiplets,” Commun. Math. Phys. 115 (1988), 21. IR I. T. Ivanov and M. Rocek, “Supersymmetric sigma models, twistors, and the Atiyah-Hitchin metric,” Commun. Math. Phys. 182 (1996) 291 [hep-th/9512075]. Ionas1 R. A. Ionas, “Elliptic constructions of hyperkaehler metrics. I. The Atiyah-Hitchin manifold,” [arXiv:0712.3598 [math.DG]]. Ionas2 R. A. Ionas, “Elliptic constructions of hyperkahler metrics. III. Gravitons and Poncelet polygons,” arXiv:0712.3601 [math.DG]. Arai:2022xyc M. Arai, K. Baba and R. A. Ionas, “Revisiting Atiyah–Hitchin manifold in the generalized Legendre transform,” PTEP 2023 (2023) no.6, 063A03 [arXiv:2206.02420 [hep-th]]. Bielawski R. Bielawski, “Line bundles on spectral curves and the generalised Legendre transform construction of hyperkähler metrics,” J. Geom. Phys. 59 (2009), 374–390. Eguchi:1980jx T. Eguchi, P. B. Gilkey and A. J. Hanson, “Gravitation, Gauge Theories and Differential Geometry,” Phys. Rept. 66 (1980), 213.
http://arxiv.org/abs/2405.09677v1
20240515193847
Homogenization of non-local energies on disconnected sets
[ "Andrea Braides", "Sergio Scalabrino", "Chiara Trifone" ]
math.AP
[ "math.AP", "math.OC", "35B27, 74Q05, 49J45, 26A33, 74A70" ]
[ * May 20, 2024 ================ Preprint SISSA 09/2024/MATE ε We consider the problem of the homogenization of non-local quadratic energies defined on δ-periodic disconnected sets defined by a double integral, depending on a kernel concentrated at scale . For kernels with unbounded support we show that we may have three regimes: (i) <<δ, for which the Γ-limit even in the strong topology of L^2 is 0; (ii) /δ→κ, in which the energies are coercive with respect to a convergence of interpolated functions, and the limit is governed by a non-local homogenization formula parameterized by κ; (iii) δ<<, for which the Γ-limit is computed with respect to a coarse-grained convergence and exhibits a separation-of-scales effect; namely, it is the same as the one obtained by formally first letting δ→ 0 (which turns out to be a pointwise weak limit, thanks to an iterated use of Jensen's inequality), and then, noting that the outcome is a nonlocal energy studied by Bourgain, Brezis and Mironescu, letting →0. A slightly more complex description is necessary for case (ii) if the kernel is compactly supported. Keywords: homogenization, non-local functionals, perforated domains, separation of scales, Gamma-convergence. MSC Class (2020): 35B27, 74Q05, 49J45, 26A33, 74A70 § INTRODUCTION In this paper we study the asymptotic behaviour of a periodic system of disconnected regions that interact through long-range potentials and can therefore be considered `energetically' connected in the sense of V.V. Zhikov's p-connectedness <cit.>. The study of such a type of geometrical objects falls within the analysis of those commonly referred to as `perforated domains', where the domain of integration is a portion of a scaled periodic set E contained in a given region; that is, a set of the form Ω∩δ E, with δ>0 a small parameter. The `classical' case is obtained by choosing as energy e.g. the Dirichlet integral, for which the functionals take the form F_δ(u)=∫_Ω∩δ E|∇ u|^2 dx. If E is a connected open set, then the energies possess a limit, e.g. in the sense of Γ-convergence <cit.>, as δ→ 0 <cit.> and it is a coercive homogeneous quadratic form on H^1(Ω). This result can be obtained by using Extension Lemmas, which allow to construct uniformly continuous operators from H^1(Ω∩δ E) to H^1_ loc(Ω) <cit.>, and then regard all the functionals as defined in that common space. For less regular sets it is convenient to substitute the topological notion of connectedness with a more analytic one: loosely speaking, if p>1, a set A is p-connected if every function u such that ∫_A|∇ u|^pdx=0 is a constant <cit.>. More in general this notion can be given for integrals with respect to a measure, of which the restriction of the Lebesgue measure to A is a particular case. Following a seminal result of Bourgain et al. <cit.> (see also <cit.>, and <cit.> for applications) the Dirichlet integral on an open set U⊂ℝ^d can itself be approximated by introducing another parameter >0 and considering energies F^(u)=1/^d∫_U× Uφ(x-y)|u(x)-u(y)|^2/|x-y|^2 dx dy, with φ a non-negative integrable kernel. Such energies have also been systematically studied within a variational theory for convolution-type functionals <cit.> in the equivalent form F^(u)=1/^d+2∫_U× Uφ(x-y)|u(x)-u(y)|^2 dx dy, up to using the kernel φ(ξ)|ξ|^2 in the place of φ(ξ). We will use this latter form. 
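For smooth u these energies can also be evaluated numerically and compared with the Dirichlet integral they approximate. A minimal sketch in Python with NumPy for d=1, assuming the rescaled kernel φ((x-y)/ε) with the prefactor ε^{-(d+2)} as above; the Gaussian kernel, the domain (0,1) and the discretization are our illustrative choices, and the kernel is normalized so that ∫φ(ξ)ξ^2 dξ=1. The printed values approach ∫_0^1|u'|^2 dx as ε decreases, up to boundary effects of order ε:

import numpy as np

def F_eps(u_vals, x, eps, phi):
    # eps^{-(d+2)} * double Riemann sum of phi((x-y)/eps) |u(x)-u(y)|^2, with d = 1
    dx = x[1] - x[0]
    X, Y = np.meshgrid(x, x, indexing="ij")
    K = phi((X - Y)/eps)
    D2 = (u_vals[:, None] - u_vals[None, :])**2
    return (K*D2).sum()*dx*dx/eps**3

phi = lambda s: np.exp(-s**2/2)/np.sqrt(2*np.pi)   # normalized: int phi(s) s^2 ds = 1
x = np.linspace(0.0, 1.0, 1500)
u = np.sin(2*np.pi*x)
target = 2*np.pi**2                                # int_0^1 |u'|^2 dx

for eps in [0.2, 0.1, 0.05, 0.02]:
    print(eps, F_eps(u, x, eps, phi), target)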
Note that if the support of φ is unbounded then any set U is in a sense `2-connected' for such functionals, in the sense that if F^(u)=0 then u(x)-u(y)=0 for each x,y∈ U so that u is constant on U. This observation suggests that, contrary to the local case, for non-local energies we might study the behaviour of perforated domains as above even when E is not topologically connected. In this paper, we consider non-local functionals of the form F_,δ(u)=1/^d+2∫_Ω∩δ E∫_Ω∩δ Eφ(x-y)|u(x)-u(y)|^2 dx dy, under the prototypical assumption that E= K+ℤ^d is composed of a periodic array of disconnected sets; that is, K is a (topologically) connected compact set and (K+k)∩ (K+k')=∅ if k,k'∈ℤ^d and k≠ k'. The behaviour of F_,δ will be driven by the mutual behaviour of the two parameters. In the notation, we will tacitly suppose that δ=δ_ is an infinitesimal family as → 0. Functionals of the form (<ref>) have been dealt with in <cit.> (see also <cit.>) when E is a topologically connected periodic Lipschitz set, using an extension theorem which cannot be applied in our cases. The first issue that we examine for such energies is their coerciveness. Since the domain of F_,δ is composed of functions defined on a disconnected set, we have to specify the type of convergence with respect to which they are studied. In the case of φ with support the whole ℝ^d we obtain the three cases: (i) <<δ. In this case we have a loss of coerciveness, and in particular any function in L^2(Ω) is a limit in L^2(Ω) of a sequence of functions with vanishing energy; (ii) ∼δ. In this case we consider the convergence u_→ u defined as the L^2_ loc (Ω) strong convergence of the piecewise-constant functions u^ defined by u^(x)=1/|K|δ^d∫_δ K+δ ku_(y) dy for x∈δ( k+ [0,1)^d). The functionals are equicoercive with respect to this convergence, and the limit u belongs to H^1(Ω); (iii) δ<<. In this case the functions u_ must be `coarse grained', considering the strong L^2_ loc (Ω)-convergence of the piecewise-constant interpolations u^ defined by u^(x)=1/|[0,]^d∩δ E |∫_( k+ [0,1)^d)∩δ Eu_(x) for x∈( k+ [0,1)^d). Also in this case, the limit u is in H^1(Ω). If the support of φ is bounded then we may have loss of coerciveness also if ∼δ. More precisely, if the support of φ is the closed ball of radius s, then the functionals are equicoercive with respect to the convergence above if and only if s>δ D, where D=inf{r: E+B_r/2 is topologically connected}. In this case again the limit u belongs to H^1(Ω). We also compute the Γ-limit with respect to the convergences above. If the support of φ is the whole ℝ^d we have: (i) (degenerate limit) if <<δ then the Γ-limit with respect to the strong L^2(Ω)-convergence is identically 0; (ii) (homogenization) if /δ→κ, then the Γ-limit is the quadratic form F^κ(u)=∫_Ω⟨ A^κ_ hom∇ u, ∇ u⟩ dx, where the symmetric matrix A^κ_ hom satisfies ⟨ A^κ_ homξ,ξ⟩= min{∫_[0,1]^d∩ E∫_Eφ(x-yκ)(⟨ξ, x-y⟩+ u(x)-u(y))^2 dx dy: u 1-periodic}. This result can be seen as following from the results in <cit.> (see also <cit.>) using the techniques therein combined with the coerciveness result above; (iii) (separation of scales) if δ<< and φ is radially symmetric, then the Γ-limit is given by F^∞(u)=C_∞∫_Ω |∇ u|^2dx, where C_∞= |K|^21/d∫_ℝ^dφ(ξ)|ξ|^2dξ. Note that 1/d∫_ℝ^dφ(ξ)|ξ|^2dξ is the constant appearing in the Γ-limit by Bourgain et al. 
<cit.>, so that this limit can be obtained by first letting δ→ 0, noting that, upon writing F_,δ(u)=1/^d+2∫_Ω×Ωχ_E× E(xδ,yδ)φ(x-y)|u(x)-u(y)|^2 dx dy, the corresponding Γ-limit is simply F^∞_(u)=1/^d+2|K|^2∫_Ω×Ωφ(x-y)|u(x)-u(y)|^2 dx dy, whose successive Γ-limit as → 0 is F^∞. This result is obtained using the Fonseca Müller blow-up method combined by a convexity argument for the lower bound. These arguments allow first to reduce to test functions which are δ-periodic perturbations of affine functions, and then to obtain the desired inequality by a double Jensen's inequality. The upper bound is obtained by a direct computation when the target function u is C^2. In this case we can take u_=u, in which case a discretization argument allows to write F_,δ(u) as an approximation of a Riemann integral. As a technical remark, we note that it is sufficient to treat the case φ=χ_B_r since a general φ can be approximated by linear combinations of this type of energies. The lower bound then follows by the superadditivity of the lower limit, while the upper bound is proved by the pointwise convergence on C^2-functions. Finally, for φ with support the closed ball of radius s the computation in (ii) also holds provided sκ>D, so that the functionals are equi-coercive. § NOTATION AND STATEMENT OF THE RESULTS We consider a radial convolution kernel in ℝ^d; that is, a function φ:ℝ^d→ [0,+∞) such that a decreasing function ϕ_0:[0,+∞)→ [0,+∞) exists satisfying φ(ξ)=ϕ_0(|ξ|). We further assume that ∫_ℝ^dφ(ξ)(1+|ξ|^2)dx<+∞, and for each >0 we define the scaled kernel φ_ by φ_(ξ)= 1^dφ(ξ). A simple kernel, which will be used as a comparison for general kernels, is ϕ_(ξ)= ^-d if |ξ| < , 0 elsewhere, obtained with φ_0=χ_[0,1]. We consider a periodic set E⊂ℝ^d, and we fix a bounded domain Ω⊂^d with Lipschitz boundary. For all >0 and δ>0 we define the functional F_δ,:L^2(Ω) → [0,+∞) by F_δ,(u) = 1/^2∫_(Ω∩δ E) × (Ω∩δ E)ϕ_(x-y)|u(x)-u(y)|^2dxdy. Note that if E=ℝ^d then F_δ, is independent of δ and the Γ-limit with respect to the weak and strong convergence in L^2(Ω) of F_(u)= F_δ,(u) has been shown to be equal to C_φ∫_Ω|∇ u|^2dx, where C_φ:= 1/d∫_ℝ^dφ(ξ)|ξ|^2dx. We will consider a set E composed of disconnected components; more precisely, a 1-periodic set E in ℝ^d of the form E=∑_k∈ℤ^d (k+K), where K is the closure of a connected open set with boundary of zero measure, and is such that (k+K)∩ K=∅ if k∈ℤ^d and k≠ 0. We also define D=inf{r: E+B_r/2 is (topologically) connected}. The simplest example of such a geometry is by taking K=B_c the closed ball of center 0 and radius c<1/2 (see Fig <ref>), for which D=1-2c. Note that for this choice of E dist(K, K+k)≥ D for all k∈ℝ^d, which is not the case in general. For example, if we take as K the closed ellipse given by the relation x_1^2/c^2_1+x_2^2/c^2_2≤ 1 with c_2<c_1 in ℝ^2, we have dist(K, K+e_1)= 1-2c_1 and dist(K, K+e_2)= 1-2c_2=D. §.§ Definition of convergence and coerciveness We first consider the cases in which we do not have coerciveness. The first one is in the regime <<δ, for which the Γ-limit, even if computed with respect to the strong L^2(Ω) convergence is 0. This is a consequence of the following result. Let δ=δ_ be such that δ→ 0 and lim_→ 0/δ_=0. then for all u∈ L^2(Ω) there exists a sequence u_ converging strongly to u in L^2(Ω) and such that lim_→ 0 F_δ,(u_)= 0. It is sufficient to prove that the claim holds for a strongly dense subclass in L^2(Ω). 
We then take a Lipschitz continuous function u, and consider the function u_ equal to the constant u_(x)=u^δ_k=1/δ^d|K|∫_δ(k+K) u(y)dy in δ(k+K), and to u on Ω∖δ E. We have u_→ u in L^2(Ω), and we can estimate F_δ,(u_)≤ C 1/^d+2∫_(Ω×Ω)∩{|x-y|>D_0δ}φ(|x-y|/)|x-y|^2 dx dy, where D_0=min{ dist(K, k+K): k≠ 0}. The change of variables y=x+ξ gives F_δ,(u_)≤ C' ∫_{|ξ|>D_0δ/}φ(ξ)|ξ|^2dξ, with C' depending only on Ω and the Lipschitz constant of u. This latter integral tends to 0 by (<ref>), since δ/→ +∞. In the case δ∼ we havea lack of coerciveness in the case of kernels with compact support. Up to scaling, we can suppose that the support of φ_0 is [0,1]. We then have the following result. Let the support of φ_0 be [0,1], and let D be defined by (<ref>). (a) If δ =δ_ is such that <δ D_0 then for all u∈ L^2(Ω) there exists a sequence with F_δ,(u_)=0 and converging to u strongly in L^2(Ω); (b) If δ =δ_ is such that <δ D then there exists a sequence u_ with F_δ,(u_)=0 and such that u_χ_δ E does not converge weakly in L^2(Ω). Case (a) is dealt with exactly as in the proof of Theorem <ref>. In case (b) we have that the set δ E+B_/2 is composed of infinitely many disconnected components. We may suppose for simplicity that each connected component is not bounded since otherwise we are in case (a), and let k∈ℤ^d be such that δ (k+K) does not belong to the same connected component as δ K. We may set u_(x)= m if x belongs to the same connected component as δ (2m k+K), and for example u_(x)=0 elsewhere. Since <δ D, if φ(|x-y|/)>0 and x,y∈δ E then they belong to the same connected component. This shows that F_δ,(u_)=0. Claim (a) in Proposition <ref> shows that the Γ-limit in the strong L^2 topology is 0. In the second case, if δ D_0<<δ D then the sequence F_δ, retains some form of coerciveness, which gives a non-trivial Γ-limit in the weak L^2 topology. For example, in the case of ellipsoidal sets as in Example <ref>, the domain of the Γ-limit will be functions in L^2(Ω) whose distributional derivative in the x_1-direction is an L^2 function. Since this issue is not central in our discussion we omit the details of this case. The following theorems will be proved in Section <ref>. They involve piecewise-affine interpolations obtained using Kuhn's decomposition <cit.> of cubes of ^d into d! simplexes which are uniquely determined by a permutation of the indices {1,...,d}. Since the actual form of the piecewise-affine interpolations is not relevant in our context we refer e.g. to <cit.> for their use in nonlocal interaction problems. Let Ω be a connected set, let lim_→0/δ= κ, and suppose either that φ_0 has support [0,1] and κ>D, or that the support of φ be the whole ℝ^d. Let u_ be such that sup_ F_δ,(u_)<+∞. Let the functions u_ be defined as the piecewise-affine interpolations of the functions δ k↦ u^δ_k in (<ref>) if k∈ℤ^d and δ (k+K)⊂Ω. Then, up to subsequences and addition of constants u_→ u in L^2_ loc(Ω) for some u∈ H^1(Ω). The second result combines the interpolation argument with `coarse graining'; that is, we consider averages of functions not on the characteristic period δ of the geometry, but on the larger `mesoscopic' scale . Let Ω be a connected set, let lim_→0/δ= +∞. Let u_ be such that sup_ F_δ,(u_)<+∞. Let the functions u_ be defined as the piecewise-affine interpolations of the functions k↦ u^_k defined on ℤ^d by u^_k=1/|δ E∩(k+[0,1]^d)|∫_δ E∩(k+[0,1]^d) u_ dx. Then, up to subsequences and addition of constants, u_→ u in L^2_ loc(Ω) for some u∈ H^1(Ω). 
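The coarse-grained averages entering the last theorem are straightforward to compute in practice. A minimal sketch in Python with NumPy for d=2, taking K=B_c (so that δ E is a periodic array of balls) and purely illustrative values of c, δ, ε and of the sample function, builds the average of u over δ E within each ε-cell:

import numpy as np

c, delta, eps, N = 0.3, 0.02, 0.1, 1000        # K = B_c, delta << eps, pixel grid
x = np.linspace(0.0, 1.0, N, endpoint=False)   # Omega = (0,1)^2
X, Y = np.meshgrid(x, x, indexing="ij")

# mask of delta*E: distance of x/delta to the nearest point of Z^2 is less than c
fx, fy = (X/delta) % 1.0, (Y/delta) % 1.0
in_E = np.minimum(fx, 1.0 - fx)**2 + np.minimum(fy, 1.0 - fy)**2 < c**2

u = np.sin(2*np.pi*X)*np.cos(2*np.pi*Y)        # sample function in place of u_eps

n_cells = int(round(1.0/eps))                  # eps-cells per side
p = N // n_cells                               # pixels per cell side
u_bar = np.zeros((n_cells, n_cells))
for i in range(n_cells):
    for j in range(n_cells):
        cell = np.s_[i*p:(i + 1)*p, j*p:(j + 1)*p]
        mask = in_E[cell]
        u_bar[i, j] = u[cell][mask].mean()     # average over delta*E inside the cell
print(u_bar[:3, :3])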
§.§ Γ-convergence The compactness results in the previous section allow to consider Γ-convergence with respect to the convergence of the interpolations as defined therein. We can then compute the Γ-limits in the hypotheses of Theorem <ref> and Theorem <ref>, respectively, the degenerate cases being dealt with in Theorem <ref> when <<δ, in which case the Γ-limit is 0 with respect to any topology weaker than the strong L^2(Ω) one, and in Proposition <ref> for the case < δ D, when φ_0 has support [0,1]. Let δ=δ_ satisfy lim_→0/δ= κ, and suppose either that φ_0 has support [0,1] and κ>D, or that the support of φ be the whole ℝ^d. Then the Γ-limit of F_δ, as → 0 with respect to the convergence of the piecewise-affine interpolations as in Theorem <ref> is the quadratic form F^κ(u)=∫_Ω⟨ A^κ_ hom∇ u, ∇ u⟩ dx, where the symmetric matrix A^κ_ hom satisfies ⟨ A^κ_ homξ,ξ⟩= min{∫_[0,1]^d∩ E∫_Eφ(x-yκ)(⟨ξ, x-y⟩+ u(x)-u(y))^2 dx dy: u 1-periodic}. Once the compactness in Theorem <ref> is proved, the proof of the claim follows very closely that of the homogenization theorem for perforated domains in <cit.>, where functionals of the same form as F_δ, are dealt with, with =κδ and E a periodic (topologically) connected Lipschitz domain. We refer to that paper for details. The second convergence result highlights a separation of scales, in which we may formally first let δ tend to 0 and note that the characteristic functions χ_δ E×δ E weakly^* converge to the constant |K|^2 in L^∞(Ω×Ω). Let δ=δ_ satisfy lim_→0/δ= +∞. Then the Γ-limit of F_δ, as → 0 with respect to the convergence of the piecewise-affine interpolations as in Theorem <ref> is F^∞(u)=|K|^2C_φ∫_Ω|∇ u|^2 dx, where C_φ is given by (<ref>) We note that both the convergences described in the theorems above can be restated as a local strong L^2-convergence; namely, that we have lim_→ 0∫_Ω'∩δ E |u_-u|^2dx=0. This convergence has been extensively used by Zhikov <cit.>. To check this claim, we note that by the Poincaré inequality for double integrals, for which we do not need the connectedness of the domain (see e.g. the proof of <cit.>), taking into account the definition in Theorem <ref>. we have ∫_δ E∩(k+[0,1]^d)|u_-u^_k|^2dx≤1/^d∫_(δ E∩(k+[0,1]^d))^2|u_(x)-u_(y)|^2dx dy, so that, summing on k ∫_Ω'∩δ E|u_-u_|^2dx≤^2 C F_δ,(u_), where u_ is the piecewise-constant interpolation of {u^_k}. The claim then follows noting that u_- u_ tends to 0 locally in L^2(Ω). For the definition in Theorem <ref> the argument is the same. As a consequence of this remark, we can suppose that the sequence u_ tends to u locally in L^2(Ω), upon substituting u_ with the function defined by u_ on Ω∩δ E, and u otherwise. § COMPACTNESS In this section we prove Theorems <ref> and <ref>. We make the choice of the kernel φ=χ_B_r, and we may suppose that r=1 up to a scaling argument. With this choice functional (<ref>) becomes F_δ,(u) = 1/^d+2∫_ (Ω∩δ E)^2 {|x-y|}< |u(x)-u(y)|^2dxdy. Note that this ϕ is a lower bound for any other positive kernel satisfying the condition ϕ(ξ) ≥ c>0 for ξ∈ B_r_0, so that it is sufficient to state the compactness result for families of functions u_ such that (<ref>) is bounded. Case 1: δ∼ (Theorem <ref>). With fixed D<κ ' < κ, we can assume /δ>κ'. By definition of D, there exist r>0 and a finite number of vectors k_1,...,k_N ∈^d generating ^d on such that dist(K,K+k_i)<r<κ' for all i∈{1,...,N}. For each i∈{1,...,N}, let z_i ∈∂ K and w_i ∈∂ K + k_i be such that |z_i-w_i|= dist (K,K+k_i). 
Moreover, set A_i :={x ∈ K : |x-z_i|< 12(κ'-r)} and B_i :={x ∈ K+k_i : |x-w_i|< 12(κ'-r)}. By the assumptions on K, we have |A_i|, |B_i|>0. Moreover, if x ∈δ A_i and y ∈δ B_i 1/δ|x-y|≤ |1δx-z_i| + |z_i - w_i|+|w_i - 1δy | < κ' - r +dist(K,K+k_i) < /δ; that is, δ A_i ×δ B_i ⊆ (δ K ×δ(K+k_i)) ∩{|x-y|<}. With this observation in mind, let u ∈ L^2(Ω) and recall the notation u_k=1/δ^d |K|∫_δ K+δ k u(x)dx. We compute |u_0-u_k_i|^2= |1/δ^d |K|∫_δ K (u(x)-u(x+δ k_i))dx |^2 = |1/δ^d |K| |δ A_i| |δ B_i|∫_δ K×δ A_i×δ B_i (u(x)-u(x+δ k_i)+u(y)-u(y)+u(z)-u(z))dxdydz |^2 ≤ 3 ( |1/δ ^ 2d |K| |A_i|∫_δ K ×δ A_i (u(x)-u(y))dxdy |^2 + |1/δ^ 2d|A_i| |B_i|∫_δ A_i ×δ B_i (u(y)-u(z))dydz |^2 + |1/δ ^ 2d |K| |B_i|∫_δ (K+k_i)×δ B_i (u(z)-u(w))dwdz |^2 ) ≤ C/δ^2d(∫_δ K ×δ A_i |u(x)-u(y)|^2dxdy +∫_δ A_i ×δ B_i |u(y)-u(z)|^2dydz + ∫_δ (K+k_i)×δ B_i |u(z)-u(w)|^2dwdz ) ≤ C/δ^2d(∫_δ K ×δ K |u(x)-u(y)|^2dxdy +∫_δ K ×δ (K+k_i) {|x-y|<} |u(y)-u(z)|^2dydz + ∫_δ (K+k_i)×δ (K+k_i) |u(z)-u(w)|^2dwdz ), where in the first inequality we used the relation (a+b+c)^2 ≤ 3(a^2+b^2+c^2), and in the second one Jensen's inequality and the fact that |δ A_i|=δ^d |A_i| and |δ B_i|=δ^d |B_i|. In the prototypical case K=B_c, we can consider the largest balls fully contained in the region of interaction, as depicted in Fig. <ref> In order to take into account the first and the third integral of the sum above, we need to control long-range interactions with short-range interactions as specified in the following lemma. There exists a constant C depending on the ratio δ/ such that ∫_δ K ×δ K |u(x)-u(y)|^2dxdy ≤ C ∫_δ K ×δ K {|x-y|<} |u(x)-u(y)|^2dxdy The proof follows the one of Lemma 6.1 in <cit.>. Since K is a connected Lipschitz bounded set, there exists _1 ∈(0,/3) such that for every two points η', η”∈δ K there exists a discrete path from η' to η”; that is, a set of points η'=η_0, η_1,…,η_N,η_N+1=η” for which: * |η_j+1-η_j| ≤_1 for j∈{0,1,…,N}; * for all j=1,...,N, the ball B__1(η_j) is contained in δ K (note that the two extremes are excluded); * N ≤ C ⌊δ/⌋ for all η',η”∈δ K. The constant C depends only on diam(K) and its Lipschitz constant; Writing u(x_0)-u(x_N+1)=u(x_0)-u(x_1)+u(x_1)-⋯-u(x_N)+u(x_N)-u(x_N+1), where x_0 ∈δ K ∩ B__1(η'), x_N+1∈δ K ∩ B__1(η”), and x_j is a point in B__1(η_j) for j=1,..,N. By integrating in all variables we get: ∫_(δ K ∩ B__1(η_0) ) × (δ K ∩ B__1(η_N+1)) (u(x_0)-u(x_N+1))^2dx_0 dx_N+1 =(_1)^-Nd∫_B__1(η_1)⋯∫_B__1(η_N)∫_(δ K ∩ B__1(η_0) ) × (δ K ∩ B__1(η_N+1))(u(x_0)-u(x_1)+u(x_1)-⋯ 6cm ⋯ -u(x_N)+u(x_N)-u(x_N+1))^2 dx_0 dx_N+1 dx_1...dx_N ≤ (N+1)(_1)^-Nd∫_δ K ∩ B__1(η_0)⋯∫_δ K ∩ B__1(η_N+1) ∑_j=1^N+1 ( u(x_j)-u(x_j-1) )^2 dx_0⋯ dx_N+1 ≤ (N+1) ∑_j=1^N+1∫_(δ K ∩ B__1(η_j) ) × (δ K ∩ B__1(η_j-1)) (u(x_j)-u(x_j-1))^2dx_j dx_j-1 ≤(C ⌊δ/⌋+1)^2 ∫_δ K ×δ K {|x-y|<} |u(x)-u(y)|^2dxdy, where in the last line by set inclusion we used that for any (x_j, x_j-1) ∈ B__1(η_j) × B__1(η_j-1) holds |x_j-x_j-1|≤ |x_j-η_j|+|η_j-η_j-1|+|η_j-1-x_j-1|≤ 3_1< and that N is at most C ⌊δ/⌋. We can obtain the final estimate by covering δ K with a finite number of balls of radius _1 (this number depends on the ratio δ/ only) and then summing up the previous inequality applied to all possible pairs of these balls. By applying Lemma <ref>, we deduce that there exists a constant C such that, for i=1,…,d, |u_0-u_k_i|^2≤C/δ^2d(∫_δ K ×δ K {|x-y|< } |u(x)-u(y)|^2dxdy +∫_δ K ×δ (K + k_i) {|x-y|}< |u(y)-u(z)|^2dydz +∫_δ (K + k_i) ×δ (K + k_i) {|x-y|<} |u(z)-u(w)|^2dwdz ). 
Since we need an estimate of δ^d (u_0-u_k_i/δ)^2, we multiply by δ^d-2 and use that C/δ^2dδ^d-2= C/^d+2 (by δ∼) to find δ^d (u_0-u_k_i/δ)^2 ≤C/^d+2∫_(δ K ∪ δ(K+k_i))^2 {|x-y|}< |u(x)-u(y)|^2 dxdy By periodicity it also holds δ^d (u_k-u_k+k_i/δ)^2 ≤C/^d+2∫_(δ (K+k) ∪ δ(K+(k+k_i)))^2 {|x-y|}< |u(x)-u(y)|^2 dxdy. Finally, since each vector of the canonical basis can be expressed as a linear combination of the k_i's with integer coefficients, via triangle inequality we bound δ^d (u_k-u_k+e_i/δ)^2 ≤C/^d+2∫_(δ (K+k) ∪ δ(K+(k+k_i)))^2 {|x-y|}< |u(x)-u(y)|^2 dxdy, and thus, summing over nearest neighbours k,k' ∈^d such that kδ + δ K ⊆Ω, and the same for k', we get ∑_ k,k' |k-k'|=1δ^d(u_k-u_k'/δ)^2 ≤C/^d+2∫_ (Ω∩δ E)^2 {|x-y|}< |u(x)-u(y)|^2dxdy. Therefore, from the equiboundedness of the functional along the sequence u_ we get a uniform bound on the piecewise-affine interpolation u_, as defined in Theorem <ref>, and thus the existence of a converging subsequence in the sense of (<ref>). Case 2: ≫δ (Theorem <ref>). Set C=C(d) := 1/1+√(d), Q=[0,1]^d and 𝒵_ = { k ∈^d: Q_ ^k := C (k + Q) ⊆Ω}. Note that if (x,y) ∈ Q_^k × Q_^k' with |k-k'| ≤ 1; that is x=C (k + w) and y=C(k'+ z) with w,z ∈ Q, it automatically holds |x-y| ≤ C |k-k'| + C |w-z| ≤ C(1+diam(Q)) = . Moreover, for each k ∈ Z_, consider Z_k := {j ∈ℤ^d : δ(j+K) ⊆ Q_^k}= {j ∈ℤ^d : j+K ⊆/δ (k+ Q)}. Note that δ E ∩ Q_^k ⊇⋃_j ∈ Z_kδ(j+K) ∩ Q_^k= ⋃_j ∈ Z_kδ(j+K). Moreover, assume K ⊆ B_L and fix j ∈/δ k + B(0, /2δ-L) and x = j + w ∈ j + K; then for all i∈{1,…,d} we have |x_i - δ k_i | ≤ |x- δ k| ≤ |j - δ k| + L ≤/2δ; that is, j+K ⊆/δ (k+ Q). We deduce that /δ k+ {- ⌊/2δ-L⌋,...,0,..., ⌊/2δ-L⌋}^d ⊆ Z_k and consequently #Z_k ≥ (2(/δ-L-1) + 1) ^d ≥(/δ)^d, provided /2δ≥ L+1. In particular |δ E ∩ Q_^k| ≥∑_j ∈ Z_k |δ (j+K)| =#Z_k |K| δ^d ≥ |K| ^d. We now write F_(u_) =1/^d+2∫_ (Ω∩δ E)^2 {|x-y|}< (u_(x)-u_(y))^2dxdy ≥1/^d+2∑_|k-k'|=1 k,k' ∈𝒵_∫_(Q^k_× Q^k'_) ∩ (δ E)^2 (u_(x) - u_(y))^2 dxdy = 1/^d+2∑_|k-k'|=1 k,k' ∈𝒵_ |δ E ∩ Q_^k||δ E ∩ Q_^k'| _(Q^k_× Q^k'_) ∩ (δ E)^2 (u_(x) - u_(y))^2 dxdy ≥|K|/^d+2∑_|k-k'|=1 k,k' ∈𝒵_^2d| _(Q_^k ∩δ E)× (Q_^k'∩δ E)(u_(x) - u_(y)) dx dy |^2 = |K|/^d+2∑_|k-k'|=1 k,k' ∈𝒵_^2d| u_^k - u_^k'|^2 =|K| ∑_|k-k'|=1 k,k' ∈𝒵_^d |u^k_-u^k'_|^2/^2, with the averages u^k_ defined by u^k_= 1/|Q_^k ∩δ E | ∫_Q_^k ∩δ E u_(x)dx. As in the previous case, we get a uniform bound on the piecewise-affine interpolation u_, as defined in Theorem <ref>, so we can conclude that there exists a subsequence converging in the sense of (<ref>). § ASYMPTOTIC ANALYSIS This section is devoted to the proof of Theorem <ref>, subdividing it in a lower and an upper bound. §.§ Liminf inequality We first prove a lower bound. To that end, let u_→ u in the sense of (<ref>) with F_(u_) ≤ M < +∞. Using the conclusion in Remark <ref> we can suppose that the sequence u_ tends to u locally in L^2(Ω), and actually that u_=u outside δ E. In order to estimate the energy we use Fonseca-Müller's blow-up technique <cit.>, as adapted to homogenization problems in <cit.>. To that end, we restrict x ∈ A ⊂Ω, where A is an open subset, defining F_(u_,A)= 1/^d+2∫_(A×Ω) ∩ (δ E)^2 {|x-y|<}|u_(x)-u_(y)|^2dxdy=μ_(A) Since the measures μ_ are equibounded in mass, we can take a converging subsequence μ_∗⇀μ, so that by the lower semicontinuity of weak^∗ convergence on open sets it holds lim inf_→ 0 F_(u_) ≥μ(Ω) ≥∫_Ωd μ/dx dx. From now on, we denote Q := (-1/2, 1/2)^d and, for all r>0 and x_0 ∈^d, Q_ρ(x_0) := x_0 + ρ Q. 
We fix a Lebesgue point x_0 ∈Ω for u and ∇ u, such that there exists dμ/dx(x_0)= lim_ρ→ 0μ(Q_ρ(x_0))/ρ^d. Since μ (∂ Q_ρ(x_0) )=0 for almost all ρ >0, we can construct ρ_→ 0, with ρ_≫, such that dμ/dx(x_0)= lim_→ 0μ_(Q_ρ_(x_0)) /ρ_^d. From now on we will tacitly suppose that ρ=ρ_. Now we prove a classical lemma that allows to match boundary data. Let {v_}⊆ L^2(Q) such that v_→ v for some v∈ H^1(Q). For all η >0, there exists another sequence {v^η_}⊆ L^2(Q), such that * v^η_→ v in L^2(Q) * v^η_ =v in the set { z ∈ Q: dist(z,∂ Q) < η} * v^η_ =v_ in the set { z ∈ Q: dist(z,∂ Q) > 2η} * if we define F'_(v_)=1/^d+2∫_(Q× Q) ∩ (δ E)^2 {|z-w|<}(v_(z)-v_(w))^2dz dw, then it holds lim sup_→ 0 (F'_(v^η_)-F'_(v_ ) )≤ o(1), as η→ 0. Fix N ∈ℕ and for k ∈{1,...,N} define Q_k^N := { z ∈ Q dist(z, ∂ Q) > η/N(N+k)} and S_k := Q_k-1^N ∖Q_k^N. Let _k be a cut-off function such that _k=1 in Q_k^N, _k=0 in ^d ∖Q_k-1^N and |∇_k| ≤N/η. Define v_ ^k := _k v_ + (1-_k) v. Note that v_^k (z) - v_^k (w)= _k(z)(v_ (z) - v_ (w)) + (1-_k(z)) (v(z)-v(w) )(_k(z) - _k(w)) (v_(w)-v(w)) for all z, w ∈ Q. We decompose Q × Q as follows: A_1 := Q_k^N × Q_k^N, _k(z)=1, _k(w)=1 ∀ (z,w) ∈ A_1; A_2 := (Q ∖Q_k-1^N)× (Q ∖Q_k-1^N), _k(z)=0, _k(w)=0 ∀ (z,w) ∈ A_2 A_3 := S_k × Q, |∇_k(z)|≤ 1 A_3' := (Q × S_k ) ∖ A_3 A_4 := Q_k^N × (Q ∖Q_k-1^N), |z-w| ≥η/N, ∀ (z,w) ∈ A_2 A_4' := (Q ∖Q_k-1^N)× Q_k^N. The integral in A_1 is estimated by: 1/^d+2∫_A_1 ∩ (δ E)^2 {|z-w|<}(v_^k(z)-v_^k(w))^2dz dw =1/^d+2∫_A_1 ∩ (δ E)^2 {|z-w|<}(v_(z)-v_(w))^2dz dw ≤1/^d+2∫_(Q × Q) ∩ (δ E)^2 {|z-w|<}(v_(z)-v_(w))^2dz dw= F'_(v_) Since Q has a Lipschitz boundary and v ∈ H^1(Q), we can extend it to the whole ^d. Denote the set S_η={ z ∈ Q: dist(z,∂ Q) < 2 η}. We have 1/^d+2∫_A_2 ∩ (δ E)^2 {|z-w|<}(v_^k(z)-v_^k(w))^2dz dw =1/^d+2∫_A_2 ∩ (δ E)^2 {|z-w|<}(v(z)-v(w))^2dz dw ≤1/^d+2∫_ (S_η× Sη) ∩ (δ E)^2 {|z-w|<}(v(z)-v(w))^2dz dw ≤1/^d+2∫_ (S_η× S_η) {|z-w|<}(v(z)-v(w))^2dz dw = ∫_|ξ|<1 |ξ|^2 ∫_S_η,ξ( v(z+ξ)-v(z)/ |ξ|)^2dz dξ , where S_η,ξ= {z ∈ S_η: z+ξ∈ S_η}. Note that, for all |ξ| <1 and small enough, the set S_η,ξ is well included in S_η+B_η(0). Thus we can use a well-known characterization of Sobolev space H^1 to estimate ∫_|ξ|<1 |ξ|^2 ∫_S_η,ξ( v(z+ξ)-v(z)/ |ξ|)^2dz dξ≤∫_|ξ|<1 |ξ|^2 ∇ v _L^2(S_η+B_η(0)) dξ = ∇ v _L^2(S_η+B_η(0)), which tends to 0 as η→ 0 independently of . As for A_3, we have 1/^d+2∫_A_3 ∩ (δ E)^2 {|z-w|<} (v_^k(z)-v_^k(w))^2dz dw =1/^d+2∫_A_3 ∩ (δ E)^2 {|z-w|<}(_k(z)(v_ (z) - v_ (w)) + (1-_k(z)) (v(z)-v(w) ) +(_k(z) - _k(w)) (v_(w)-v(w)))^2dz dw ≤3/^d+2∫_A_3 ∩ (δ E)^2 {|z-w|<}(v_ (z) - v_ (w))^2 dz dw +3/^d+2∫_A_3 ∩ (δ E)^2 {|z-w|<}(v (z) - v(w))^2 dz dw +3/^d+2∫_A_3 ∩ (δ E)^2 {|z-w|<}(_k (z) - _k(w))^2(v_(w)-v(w))^2 dz dw Recalling that A_3=S_k × Q, we estimate (<ref>) with -2cm∫_(S_k × Q) ∩ (δ E)^2 {|z-w|<}(_k (z) - _k(w))^2(v_(w)-v(w))^2 dz dw ≤ N^2/η^2∫_ S_k × Q {|z-w|<}|z - w|^2(v_(w)-v(w))^2 dz dw = N^2/η^2∫_S_k(∫_ Q ∩ B(w, ) |z - w|^2 dz ) (v_(w)-v(w))^2 dw ≤ N^2/η^2∫_S_k^2 |B_| (v_(w)-v(w))^2 dw = ^d+2N^2/η^2∫_S_k (v_(w)-v(w))^2 dw . Note that v_→ v in L^2(Q) by assumption, so the last term goes to 0 as → 0. Moreover, when we sum over k=1,...,N the terms (<ref>) and (<ref>), we find ∑_k=1^N ( 1/^d+2∫_A_3 ∩ (δ E)^2 {|z-w|<}(v_ (z) - v_ (w))^2 dz dw + 1/^d+2∫_A_3 ∩ (δ E)^2 {|z-w|<}(v (z) - v(w))^2 dz dw ) ≤1/^d+2∫_Q^2 ∩ (δ E)^2 {|z-w|<}(v_(z)-v_(w))^2dz dw + 1/^d+2∫_Q^2 ∩ (δ E)^2 {|z-w|<}(v(z)-v(w))^2dz dw. 
Thus we may find k ∈{1,…,N} such that 1/^d+2∫_A_3 ∩ (δ E)^2 {|z-w|<}(v_ (z) - v_ (w))^2 dz dw + 1/^d+2∫_A_3 ∩ (δ E)^2 {|z-w|<}(v (z) - v(w))^2 dz dw ≤ 1/N(1/^d+2∫_Q^2 ∩ (δ E)^2 {|z-w|<}(v_(z)-v_(w))^2dz dw + 1/^d+2∫_Q^2 ∩ (δ E)^2 {|z-w|<}(v(z)-v(w))^2dz dw). With such a choice of k, and setting v^η_= v^k_, (<ref>) is estimated by 1/^d+2∫_A_3 ∩ (δ E)^2 {|z-w|<}(v_^η(z)-v_^η(w))^2dz dw ≤ 3/N(1/^d+2∫_Q^2 ∩ (δ E)^2 {|z-w|<}(v_(z)-v_(w))^2dz dw + 1/^d+2∫_Q^2 ∩ (δ E)^2 {|z-w|<}(v(z)-v(w))^2dz dw) +3N^2/η^2∫_S_k (v_(w)-v(w))^2 dw= 3/N( F'_(v_) + F'_(v) ) +3N^2/η^2∫_S_k (v_(w)-v(w))^2 dw ≤ C/N + 3N^2/η^2∫_S_k (v_(w)-v(w))^2 dw. The estimate on A_3' is the same with the roles of z,w reversed. As for A_4, as already noticed for (z,w) ∈ A_4 we have |z-w| ≥η/N, so that, for η,N fixed and small enough (indeed we let it go to 0 before arguing on N), η/N<|z-w|< cannot happen simultaneously and thus we simply have 1/^d+2∫_A_4 ∩ (δ E)^2 {|z-w|<}(v_^k(z)-v_^k(w))^2dz dw =0 for small enough. This works the same for A'_4 as well. Finally, gathering all the estimates together, we find lim sup_→ 0 (F'_(v^η_)-F'_(v_ )) ≤C/N +∇ v _L^2(S_η+B_η(0)). Sending N →∞, we prove the claim since ∇ v _L^2(S_η+B_η(0))=ω(η)=o(1) as η→ 0. Recalling (<ref>), we consider a Lebesgue point x_0 and a sequence ρ=ρ_ for which the limit dμ/dx(x_0)= lim_→ 0μ_(Q_ρ(x_0)) /ρ^d. exists, with μ_ is defined as in (<ref>). Let {u_}⊆ L^2(Ω), u_→ u ∈ H^1(Ω), and let η >0 be fixed. There exists ũ_→ u such that * ũ_(x)=u_ (x_0) + ∇ u (x_0)· (x-x_0), for all x ∈ Q_ρ(x_0) ∖ Q_ρ(1-2η)(x_0); * ũ_(x)=u_ (x) for all x ∈ Q_ρ(1-4η)(x_0); * denoting for simplicity F'_(u)= 1/ρ^d^d+2∫_(Q_ρ(x_0) ∩δ E)^2 {|x-y|<}(u(x)-u(y))^2dxdy, it holds lim sup_→ 0 (F'_(u_)-F'_(ũ_)) ≤ o(1) as η→ 0. The proof of this proposition is obtained by applying Lemma <ref> to the difference quotient v_ defined by v_(z) :=u_ (x_0 + ρ z)-u_(x_0)/ρ, z ∈ Q. We may choose ρ=ρ_ such that also v_→∇ u(x_0)· z in L^2(Q). We rewrite the quotient μ_(Q_ρ(x_0)) /ρ^d in terms of the functions v_: μ_(Q_ρ(x_0)) /ρ^d = 1/ρ^d^d+2∫_(Q_ρ(x_0)×Ω) ∩ (δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy ≥ 1/ρ^d^d+2∫_(Q_ρ(x_0)× Q_ρ(x_0)) ∩ (δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy = ρ^d/^d+2∫_(Q× Q) ∩ (δ/ρ E)^2 {|z-w|</ρ}(u_(x_0 + ρ z)-u_(x_0 + ρ w ))^2dz dw = ρ^d+2/^d+2∫_(Q× Q) ∩ (δ/ρ E)^2 {|z-w|</ρ}(u_(x_0 + ρ z)-u_(x_0 + ρ w )/ρ)^2dz dw = ρ^d+2/^d+2∫_(Q× Q) ∩ (δ/ρ E)^2 {|z-w|</ρ}(v_(z)-v_(w))^2dz dw. By Lemma <ref>, there exists v_^η→ v in L^2(Q) which satisfies the properties stated therein, in particular v_^η(z)= ∇ u(x_0)· z in the set { z ∈ Q: dist(z,∂ Q) < η}. By setting ũ_(x_0 + ρ z) := ρ v_^η (z) + u_(x_0) for all z ∈ Q, the function ũ_ satisfies the properties in the statement; in particular, ũ_(x)=u_ (x_0) + ∇ u (x_0)· (x-x_0) in the set {x ∈ Q_ρ(x_0): dist(z,∂ Q_ρ(x_0))< ρη} = Q_ρ(x_0) ∖ Q_ρ(1-2η)(x_0), and by the same change of variable of above to write the functional back in terms of ũ_ it holds lim sup_→ 0( 1/ρ^d^d+2∫_(Q_ρ(x_0) ∩δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy - 1/ρ^d^d+2∫_(Q_ρ(x_0) ∩δ E)^2 {|x-y|<}(ũ_(x)-ũ_(y))^2dxdy ) ≤ o(1), as η→ 0. To ease the notation we still call u_ the sequence which satisfies u_(x)=u_(x_0)+∇ u (x_0) · (x-x_0) for x ∈ Q_ρ(x_0) ∖ Q_ρ(1 - 2η)(x_0) of Proposition <ref>. Now we would like to have ρ/δ∈ℕ, so we take a slightly smaller cube than Q_ρ(x_0) that gives a negligible error at the limit. Set N_ := ⌊1/2(ρ/δ-1) ⌋, l_ := (1+2N_)δ and Q_ := Q_l_ (x_0)=⋃ _k ∈{-N_, ..., 0,...,N_}^d Q_δ (x_0 + δ k), |Q_|=l_^d, In the hypothesis ≫δ and η∼/ρ, we have Q_ρ(1- η) (x_0) ⊆ Q_⊆ Q_ρ(x_0). 
In particular, we can extend u_ - u_(x_0) -∇ u(x_0)· (· - x_0) periodically of period l_ to the whole ^d. Note that for all x ∈ Q_⊆ Q_ρ(x_0) and y ∈ B(x;), choosing η= 2 /ρ, |y_i - (x_0)_i| ≤ |x_i - (x_0)_i| +|x-y| ≤ρ/2+ = ρ(1 +η)/2 for all i ∈{1,…,d}; that is, y ∈ Q_ρ(1+ η)(x_0). Thus we have (Q_×^d) ∩{|x-y|<}⊂ (Q_× Q_ρ(1+ η)(x_0)) ∩{|x-y|<}. We further split the right-hand side in ((Q_× Q_) ∩{|x-y|<}) ∪ ( (Q_× (Q_ρ(1+ η)(x_0) ∖ Q_)) ∩{|x-y|<} ). For y ∈ Q_ρ(1+ η)(x_0) ∖ Q_⊂ Q_ρ(1+ η)(x_0) ∖ Q_ρ(1- η)(x_0), and x ∈ Q_∩ B(y,) we have ρ(1-η)/2 < |y_i - (x_0)_i| ≤ |x_i - (x_0)_i| + |x-y| < |x_i - (x_0)_i| + = |x_i - (x_0)_i| + ηρ/2 for some index i∈{1,…,d}; that is, x ∈ Q_∖ Q_ρ(1-2η)(x_0) ⊂ Q_ρ(x_0) ∖ Q_ρ(1-2η)(x_0). By Proposition <ref>, it follows that for (x,y) ∈ (Q_× (Q_ρ(1+ η)(x_0) ∖ Q_)) ∩{|x-y|<}, we have simultaneously u_(x)=u(x_0) + ∇ u(x_0)· (x-x_0), u_(y)=u(x_0) + ∇ u(x_0)· (y-x_0), (the latter by periodicity in y). Therefore, by (<ref>), we can estimate 1/ρ^d^d+2 ∫_(Q_×^d) ∩ (δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy ≤1/ρ^d^d+2∫_(Q_× Q_) ∩ (δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy + 1/ρ^d^d+2∫_(Q_× (Q_ρ(1+ η)(x_0) ∖ Q_)) ∩ (δ E)^2 {|x-y|<}(∇ u (x_0) · (x-y))^2dxdy ≤1/ρ^d^d+2∫_(Q_∩δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy +|∇ u(x_0)|^2 ^2/ρ^d ^d+2∫_Q_ρ(1+ η)(x_0) ∖ Q_(∫_Q_∩ B(y,) dx ) dy ≤1/ρ^d^d+2∫_(Q_∩δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy + |∇ u(x_0)|^2 ^2/ρ^d ^d+2 |Q_ρ(1+η)∖ Q_ρ(1-η) | ^d ≤1/ρ^d^d+2∫_(Q_∩δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy + |∇ u(x_0)|^2/ρ^d 4d ρ^d-1 = 1/ρ^d^d+2∫_(Q_∩δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy + C /ρ since |Q_ρ(1+η)∖ Q_ | ≤ |Q_ρ(1+η)∖ Q_ρ(1-η)|= [ρ(1+η)]^d - [ρ(1-η)]^d ≤ 2 (ηρ) d ρ^d-1=4dρ^d-1. We can read the estimate above as 1/ρ^d^d+2∫_(Q_∩δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy ≥1/ρ^d^d+2∫_(Q_×^d) ∩ (δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy - C /ρ and C/ρ=o(1) as → 0. Set w_(z) := u_(x_0 + l_ z)-∇ u (x_0) · l_ z for all z ∈ Q, which is ℤ^d-periodic by our assumption. For simplicity denote M_=1+2N_, so that l_=(1+2N_)δ=M_δ and |Q_|^d=l_^d=M_^dδ^d. We have μ_(Q_ρ(x_0)) /ρ^d= 1/ρ^d^d+2∫_(Q_ρ(x_0)×Ω) ∩ (δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy ≥ 1/ρ^d^d+2∫_(Q_∩δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy ≥ 1/ρ^d^d+2∫_(Q_×^d)∩ (δ E)^2 {|x-y|<}(u_(x)-u_(y))^2dxdy +o(1) = 1/ρ^d^d+2∫_(Q_×^d)∩ (δ E)^2 {|x-y|<}(w_ (x-x_0/l_)-w_ (y-x_0/l_) + ∇ u(x_0)·(x- y ))^2dxdy +o(1) ≥ 1/ρ^d^d+2inf_w ^d-per.{∫_(Q_×^d)∩ (δ E)^2 {|x-y|<}(w (x-x_0/l_)-w (y-x_0/l_) + ∇ u(x_0)·(x- y ))^2dxdy }. We work on the infimum by change of variables, writing inf_w ^d-per.∫_(Q_×^d)∩ (δ E)^2 {|x-y|<}(w (x-x_0/l_)-w (y-x_0/l_) + ∇ u(x_0)·(x- y ))^2dxdy = l_^2dinf_w ^d-per.∫_(Q_1(x_0/l_) ×^d )∩( E/M_)^2 {|x-y|</l_}(w (x-x_0/l_)-w (y-x_0/l_) + ∇ u(x_0)· l_(x- y ))^2dxdy = l_^2d+2inf_w ^d-per.∫_(Q_1(x_0/l_) ×^d )∩( E/M_)^2 {|x-y|</l_}(w(x)-w (y) + ∇ u(x_0)· (x- y ))^2dxdy. The last equality comes from the fact that, for each w 1-periodic, the transformation w̃(x)= l_ w(x+x_0/l_) (which is also 1-periodic) gives the same integral, and the infimum is the same by the one-to-one correspondence. 
Now we use the periodicity of E to get rid of the center x_0/l_, indeed it holds ∫_(Q_1(x_0/l_) ×^d )∩( E/M_)^2 {|x-y|</l_}(w(x)-w (y) + ∇ u(x_0)· (x- y ))^2dxdy = ∫_(Q_1(x_0/l_) ×^d ) {|x-y|</l_}χ_(E/M_×E/M_) (x,y)(w(x)-w (y) + ∇ u(x_0)· (x- y ))^2dxdy = ∫_(Q_1(0 ) ×^d ) {|x-y|</l_}χ_(E/M_×E/M_) (x,y)(w(x)-w (y) + ∇ u(x_0)· (x- y ))^2dxdy, given that the integrand is jointly 1-periodic in both x and y, thanks to the 1-periodicity of E (and in particular of E/M_), of w and that y runs in ^d (so that, when x and y is changed with x+k and y+k', the change of variable y'=y+k'-k works). Let w be ^d-periodic and define a 1/M_^d-periodic function w̃(x) := 1/M_^d∑_k ∈{ -N_,...,0,...,N_}^d w(x+ k/M_) Via a standard convexity argument, calling ℐ_={ -N_,...,0,...,N_}^d, ∫_(Q_1/M_×^d ) ∩(E/M_)^2 {|x-y|</l_}(w̃(x)-w̃(y) + ∇ u(x_0)· (x- y ))^2dxdy = 1/M_^d∫_(Q_1 ×^d ) ∩(E/M_)^2 {|x-y|</l_}(w̃(x)-w̃(y) + ∇ u(x_0)· (x- y ))^2dxdy = 1/M_^d∫_(Q_1 ×^d ) ∩(E/M_)^2 {|x-y|</l_}( 1/M_^d∑_k ∈ℐ_( w(x+ k/M_) - w(y+ k/M_) + ∇ u(x_0)·(x-y) ) )^2dxdy ≤ 1/M_^2d∑_k ∈ℐ_∫_(Q_1 ×^d ) ∩(E/M_)^2 {|x-y|</l_}( w(x+ k/M_) - w(y+ k/M_) + ∇ u(x_0)·(x-y) )^2dxdy = 1/M_^d∫_(Q_1 ×^d ) ∩(E/M_)^2 {|x-y|</l_}( w(x) - w(y) + ∇ u(x_0)·(x-y) )^2dxdy. Thus by taking the infimum for every w̃ 1/M_^d-periodic on the left-hand side of the last inequality, by the arbitrariness of w 1-periodic, in (<ref>) we get μ_(Q_ρ(x_0))/ρ^d ≥ l_^2d+2/ρ^d ^d+2M_^dinf_w 1/M_^d-per.∫_(Q_1/M_×^d ) ∩(E/M_)^2 {|x-y|</l_}( w(x)-w(y) + ∇ u(x_0)· (x- y ))^2dxdy = l_^d/ρ^d_∼ 1 as → 0δ^d+2/^d+2inf_w 1/M_^d-per.∫_(Q_1/M_×^d ) ∩(E/M_)^2 {|x-y|</l_}M_^2d+2( w(x)-w(y) + ∇ u(x_0)· (x- y ))^2dxdy = δ^d+2/^d+2inf_w 1/M_^d-per.∫_(Q_1 ×^d ) ∩ (E)^2 {|x-y|</δ}( w(x/M_)-w(y/M_) + ∇ u(x_0)· (x- y ))^2dxdy ≥ δ^d+2/^d+2inf_w ^d-per.∫_(Q_1 ×^d ) ∩ (E)^2 {|x-y|</δ}( w(x)-w(y) + ∇ u(x_0)· (x- y ))^2dxdy, where the second equality holds from a change of variable and the same value of the inf with M_ w, and the last inequality comes from the fact that x ↦ w( x/M_) is 1 periodic if w is 1/M_ periodic, and the infimum decreases. Finally, with fixed η >0, for small enough we have η >δ/diam(K), and we set Z_={ k ∈^d: δ/|k| < 1-η}. Note that, when k ∈ Z_ we have |x-y-k| ≤diam(K)+|k| < /δ - /δη +diam(K) < /δ so that (x,y) ∈ (K × E) ∩{|x-y|< /δ}=(K × (K+k)) ∩{|x-y|< /δ}. By a double Jensen's inequality we have inf_w ^d-per.∫_(Q_1 ×^d)∩ (E)^2 {|x-y|< /δ}(w(x)-w(y) . + ∇ u(x_0)·(x- y ))^2dxdy = inf_w ^d-per.∫_ (K × E) ∩{ |x-y|</δ}(w (x)-w (y) + ∇ u(x_0)·(x- y ))^2dxdy ≥inf_w ^d-per.∑_k ∈ Z_∫_ K × (K+k) (w (x)-w (y) + ∇ u(x_0)·(x- y ))^2dxdy ≥inf_w ^d-per.∑_k ∈ Z_|K|^2(1/|K|^2∫_ K × (K+k) w (x)-w (y) + ∇ u(x_0)·(x- y )dxdy)^2 = ∑_k ∈ Z_|K|^2(1/|K|^2∫_ K × K ∇ u(x_0)· k dxdy)^2 = |K^2| ∑_k ∈ Z_( ∇ u(x_0)· k)^2, which is independent of w 1-periodic. Thus μ_(Q_ρ(x_0))/ρ^d≥δ^d+2/^d+2|K^2| ∑_k ∈ Z_( ∇ u(x_0)· k)^2 and lim inf_→ 0μ_(Q_ρ(x_0)) /ρ^d≥ |K^2| lim inf_→ 0∑_k ∈ Z_(δ/)^d ( ∇ u(x_0)·δ/k)^2= |K|^2∫_B_1-η |∇ u(x_0)·ξ|^2 dξ. Using the lower semicontinuity of the mass, lim inf_→ 0 F_(u_) ≥μ(Ω) ≥∫_Ωd μ/dx dx, we get lim inf_→ 0 F_(u_) ≥ |K|^2 ∫_Ω∫_B_1 |∇ u(x) ·ξ|^2 dξ dx= |K|^2 C(d)∫_Ω |∇ u (x)|^2 dx, with C_d := 1/d∫_B_1|ξ|^2 dξ as in <cit.> (for us ϕ(z) was the characteristic function of the unit ball). §.§ Limsup inequality Let u ∈ C^2(Ω) and let η > 0 be fixed. Let Ω⋐Ω' and set Z_δ := {k ∈ℤ^d : Q_δ^k∩Ω≠∅}. For δ≪ 1 we have ⋃_k ∈ Z_δQ_δ^k ⊆{x ∈^d: dist(x, Ω) ≤ 2δ}⊆Ω'. Moreover, without loss of generality we can assume u ∈ C^2(Ω'). 
By the regularity assumption on u and a weighted Cauchy-Schwarz inequality: |u(x)-u(y)|^2 =|∇ u (x) · (x-y) + R(x,y)|^2 ≤ (1+η) |∇ u (x)· (x-y)|^2 + (1+1/η)|R(x,y)|^2, with |R(x,y)|≤ C|x-y|^2. Moreover, in the assumption ≫δ, it also holds |kδ -k' δ| ≤ |kδ -x| + |k'δ -y| + ≤ + 2δ≤(1+η) for all k, k' ∈ Z_δ and x, y ∈ Q_δ^k × Q_δ^k' such that |x-y|<, which implies F_(u) ≤ (1+η) 1/^d+2∫_(Ω∩δ E)^2 {|x-y|<}|∇ u (x) · (x-y)|^2 dx dy + C_η1/^d+2∫_(Ω∩δ E)^2 {|x-y|<} |x-y|^4 dxdy ≤ (1+η)1/^d+2∑_k,k' ∈ Z_δ∫_(Q_δ^k × Q_δ^k') ∩ (δ E)^2 {|x-y|<} |∇ u (x) · (x-y)|^2 dx dy + C_η^2 ≤ (1+η)1/^d+2∑_k,k' ∈ Z_δ δ|k-k'| < (1+η) ∫_(Q_δ^k × Q_δ^k') ∩ (δ E)^2 |∇ u (x) · (x-y)|^2 dx dy + C_η^2. Note that for all k,k' ∈ Z_δ and for all x= δ k + z and y=δ k' + w, with z,w ∈δ Q, |∇ u(x) · (x-y)| = | ∇ u(x)· (δ k -δ k') + ∇ u(x) · (z-w)| ≤ | ∇ u(x)· (δ k -δ k')| + ∇ u _∞δ ≤ | ∇ u(kδ )· (δ k -δ k')| +D^2 u _∞δ +∇ u _∞δ ≤ | ∇ u(kδ )· (δ k -δ k')| +C δ, so that |∇ u(x) · (x-y)|^2 ≤ (1+η)| ∇ u(kδ )· (δ k -δ k')|^2 + C_ηδ^2. Therefore we have 1/^d+2∑_k,k' ∈ Z_δ δ|k-k'| < (1+η) ∫_(Q_δ^k × Q_δ^k') ∩ (δ E)^2 |∇ u (x) · (x-y)|^2 dx dy ≤ (1+η) 1/^d+2∑_k,k' ∈ Z_δ δ|k-k'| < (1+η) ∫_(Q_δ^k × Q_δ^k') ∩ (δ E)^2 |∇ u (kδ) · (kδ-k'δ)|^2 dx dy + C_η∑_k,k' ∈ Z_δ δ|k-k'| < (1+η) 1/^d+2δ^2dδ^2 = (1+η) 1/^d+2∑_k,k' ∈ Z_δ δ|k-k'| < (1+η) ∫_(Q_δ^k × Q_δ^k') ∩ (δ E)^2 |∇ u (kδ) · (kδ-k'δ)|^2 dx dy + C_η∑_k ∈ Z_δ∑_|j|< (1+η)/δ1/^d+2δ^2dδ^2 ≤ (1+η)1/^d+2∑_k,k' ∈ Z_δ δ|k-k'| < (1+η) ∫_(Q_δ^k × Q_δ^k') ∩ (δ E)^2 |∇ u (kδ) · (kδ-k'δ)|^2 dx dy +C_η1/δ^d(/δ)^d 1/^d+2δ^2dδ^2 ≤ (1+η) 1/^d+2∑_k,k' ∈ Z_δ δ|k-k'| < (1+η) |(Q_δ^k × Q_δ^k') ∩ (δ E)^2| |∇ u (kδ) · (kδ-k'δ)|^2 +C_η(δ/)^2. Finally, by the periodicity of the set E |(Q_δ^k × Q_δ^k') ∩ (δ E)^2| = |Q_δ^k ∩δ E||Q_δ^k'∩δ E| = δ^2d |(k+Q) ∩ E| |(k'+Q) ∩ E| = δ^2d |Q∩ E|^2= δ^2d |K|^2 , which implies 1/^d+2∑_k,k' ∈ Z_δ δ|k-k'| < (1+η) |(Q_δ^k × Q_δ^k') ∩ (δ E)^2| |∇ u (kδ) · (kδ-k'δ)|^2 = 1/^d+2∑_k,k' ∈ Z_δ δ|k-k'| < (1+η) |K|^2 δ^2d |∇ u (kδ) · (kδ-k'δ)|^2 ≤ |K|^2 ∑_k ∈ Z_δδ ^ d ∑_|j|< (1+η)/δ(δ/)^d|∇ u (kδ)·δ/j|^ 2. It follows F_(u) ≤ (1+η)^2|K|^2 ∑_k ∈ Z_δδ ^ d ∑_δ/|j|< 1+η(δ/)^d|∇ u (kδ)·δ/j|^ 2 + C_η((δ/)^2+ ^2), and, thus, lim sup_→ 0 F_ (u) ≤ (1+η)^2|K|^2∫_Ω' × B_1+2η |∇ u (x) ·ξ|^2 dx dξ. Sending η→ 0 and Ω ' ↓Ω we conclude lim sup_→ 0 F_ (u) ≤ |K|^2∫_Ω× B_1|∇ u (x) ·ξ|^2 dx dξ = |K|^2C(d)∫_Ω |∇ u (x)|^2 dx, with C_d = 1/d∫_B_1|ξ|^2 dξ. §.§ Γ-convergence with general convolution kernel ϕ_ We recall that the general form of the functional we consider is F_(u)=1/^d+2∫_(Ω∩δ E) × (Ω∩δ E)ϕ(x-y/)(u(x)-u(y))^2dxdy and our preliminary case considered is ϕ(ξ)=χ_(0,1)(|ξ|). In the case of a general ϕ we can proceed by approximation. As stated at the beginning of Section <ref>, we consider a radial convolution kernel in ℝ^d; that is, a function φ:ℝ^d→ [0,+∞) such that a decreasing function ϕ_0:[0,+∞)→ [0,+∞) exists satisfying φ(ξ)=ϕ_0(|ξ|). We further assume that ∫_ℝ^dφ(ξ)(1+|ξ|^2)dx<+∞, and for each >0 we define the scaled kernel φ_ by φ_(ξ)= 1^dφ(ξ). Since ϕ_0 is decreasing, we can write a `staircase' approximation made of characteristic functions of nested intervals (0,r_k) (where r_k are defined below), that is ϕ_0,n(t)=∑_k=0^n2^nχ_(0,r_k)(t) where for each k=0,…, n2^n r_k= sup{ t : ϕ_0(t) ≥ (k+1)2^-n} and r_k=0 if the set of such t is empty. With this choice, r_j is decreasing by the monotonicity of ϕ_0. Accordingly, for each n ∈ℕ define the simple function ϕ_n(ξ)= ∑_k=0^n2^nχ_(0,r_k)(|ξ|). Note that ϕ_n(ξ) ≤ϕ(ξ) for every ξ∈^d. 
By standard approximation through simple functions, ϕ_n(ξ) →ϕ(ξ) pointwise almost everywhere, in L^1(^d) by dominated convergence and, since ϕ is in particular bounded, it converges also uniformly. We define as F^n_ the same functional (<ref>) with ϕ_n in place of ϕ. Since ϕ is an upper bound for any ϕ_n, we have for any u_⇀ u and n ∈ℕ fixed lim inf_→ 0 F_(u_) ≥lim inf_→ 0 F^n_(u_) = lim inf_→ 01/^d+2∫_(Ω∩δ E)^2 ϕ_n(x-y/)(u_(x)-u_(y))^2dxdy =lim inf_→ 0( ∑_k=0^n2^n1/^d+2∫_ (Ω∩δ E) ^2 {|x-y|<r_k }(u_(x)-u_(y))^2dxdy ) ≥∑_k=0^n2^nlim inf_→ 01/^d+2∫_ (Ω∩δ E)^2 {|x-y|<r_k }(u_(x)-u_(y))^2dxdy ≥∑_k=0^n2^n |K|^2 ∫_Ω∫_B_r_k |∇ u(x) ·ξ|^2 d ξ dx= =|K|^2 ∫_Ω∫_^dϕ_n(ξ) |∇ u(x) ·ξ|^2d ξ dx, by the result of Section <ref>. We can pass to the limit for n →∞ by monotone convergence and we get lim inf_→ 0 F_(u_) ≥ |K|^2 ∫_Ω∫_^dϕ(ξ) |∇ u(x) ·ξ|^2d ξ dx= C_∞∫_Ω |∇ u(x)|^2dx, with C_∞= |K|^2 1/d∫_^dϕ(ξ) |ξ|^2 d ξ. Hence, the lower bound is satisfied. As for the upper bound, we claim that it is enough to consider the function ϕ with compact support. Indeed, for any R>0 we can split the functional F_(u) = 1/^d+2∫_(Ω∩δ E)^2ϕ(x-y/)(u(x)-u(y))^2dxdy = 1/^d+2∫_(Ω∩δ E)^2ϕ(x-y/) χ_B_R(x-y/)(u(x)-u(y))^2dxdy + 1/^d+2∫_(Ω∩δ E)^2ϕ(x-y/) χ_^d ∖ B_R(x-y/)(u(x)-u(y))^2dxdy. In the second term of the sum we can perform the change of variables y=x+ξ, so that, setting Ω(ξ)= { x ∈Ω∩δ E: x+ξ∈Ω∩δ E }⊂Ω for each ξ∈^d, it holds -2cm1/^d+2∫_(Ω∩δ E)^2ϕ(x-y/) χ_^d ∖ B_R(x-y/)(u(x)-u(y))^2dxdy = ∫_^d ∖ B_Rϕ(ξ) ∫_Ω(ξ)( u(x+ ξ)-u(x)/)^2 dx dξ ≤ ∫_^d ∖ B_Rϕ(ξ) ∫_Ω(∇ u _L^∞(Ω) |ξ| )^2 dx dξ = |Ω| ∇ u _L^∞(Ω)∫_^d ∖ B_Rϕ(ξ)|ξ|^2 dξ=o(1) as R →∞, uniformly in . Thus we are left with the first term where ϕχ_B_R has compact support, so that the claim holds. For a general ϕ_0 bounded, decreasing and with compact support, we can approximate it from above in the same way of (<ref>), since for ϕ_0,n it holds ϕ_0,n≤ϕ_0 ≤ϕ_0,n +2^-n for each t ∈, so it it enough to take ϕ̃_0,n(t)= ϕ_0,n(t)+2^-n for t ∈supp(ϕ_0)=(0,R), and 0 otherwise. Accordingly for each n ∈ℕ define ϕ̃_n (ξ)= ϕ̃_0,n(|ξ|)=∑_k=0^n2^nχ_(0,r_k)(|ξ|)+ 2^-nχ_(0,R)(|ξ|)= ∑_k ∈ℐ_nχ_(0,r_k)(|ξ|) (ℐ_n is just a finite set of indices for each n) for which ϕ̃_n ≥ϕ and it converges to ϕ uniformly and in L^1(^d). Finally, for u ∈ C^2(Ω) and n ∈ℕ fixed, by the result of Section <ref> lim sup_→ 0 F_(u) ≤lim sup_→ 01/^d+2∫_(Ω∩δ E)^2 ϕ̃_n(x-y/)(u(x)-u(y))^2dxdy =lim sup_→ 0( ∑_k ∈ℐ_n1/^d+2∫_ (Ω∩δ E)^2 {|x-y|<r_k }(u(x)-u(y))^2dxdy ) ≤∑_k ∈ℐ_nlim_→ 01/^d+2∫_ (Ω∩δ E)^2 {|x-y|<r_k }(u(x)-u(y))^2dxdy = ∑_k ∈ℐ_n |K|^2 ∫_Ω∫_B_r_k |∇ u(x) ·ξ|^2 dξ dx= =|K|^2 ∫_Ω∫_^dϕ̃_n(ξ) |∇ u(x) ·ξ|^2d ξ dx , and finally pass to the limit as n →∞ to get the upper bound lim sup_→ 0 F_(u) ≤ |K|^2 ∫_Ω∫_^dϕ(ξ) |∇ u(x) ·ξ|^2d ξ dx=C_∞∫_Ω |∇ u(x)|^2 dx. Acknowledgements. The research reported in the present contribution was carried out as part of the project “Variational methods for stationary and evolution problems with singularities and interfaces" PRIN 2022J4FYNJ. The authors are members of GNAMPA of INdAM. abbrv
http://arxiv.org/abs/2405.09768v1
20240516021841
Evaluating Text-to-Speech Synthesis from a Large Discrete Token-based Speech Language Model
[ "Siyang Wang", "Éva Székely" ]
eess.AS
[ "eess.AS", "cs.SD" ]
§ INTRODUCTION Generative speech language modeling through next-token prediction on discrete speech tokens <cit.>, referred to as speech language models (SLM) [Some literature also refers to continuous speech representation models such as WavLM <cit.> speech language models. In this paper, we use the term SLM to refer to discrete token-based generative speech language models exclusively.], was initially proposed as a general speech processing paradigm without the need for text transcription. This was soon adapted to achieve text-to-speech synthesis <cit.>. This paradigm is highly scalable, similar to text-based language models. Several proposed SLMs are trained on speech datasets significantly larger than conventional TTS corpora. It has been shown that SLMs output natural and varied speech samples <cit.>. Recent research suggests that with sufficient training data and model parameter scaling, SLM-based TTS can exhibit what is termed an “emergent ability" to render appropriate prosody for complex text inputs <cit.>. Furthermore, SLMs have showcased in-context learning capability similar to their textual counterparts. In particular, several SLMs boast so-called “zero-shot TTS" ability, whereby only a few seconds of a speech clip is enough to condition the model to mimic the speaker unseen during training with a reported high level of similarity <cit.>. SLMs have also demonstrated strong multi-tasking capabilities to solve synthesis-adjacent tasks, such as speech editing <cit.>, speech-to-speech translation <cit.>, combining synthesis and recognition into a single model <cit.>. However, several important aspects of SLM-based TTS are not thoroughly evaluated in the literature. Questions arise, such as: How meaningful is the variation of the generated output? How do these models handle text inputs with different speaking styles? How do they measure up against traditional TTS systems? Recognizing the growing prominence of this methodology in speech synthesis, we believe that a comprehensive evaluation of a current model is essential, with the objective of shedding light on both its strengths and limitations. This study evaluates five key dimensions: speaking style, intelligibility, speaker consistency, prosodic variation and spontaneous behaviour. We also publicly release evaluated audio samples and evaluation code here[Paper resource page: <https://swatsw.github.io/lrec24_eval_slm/>]. Our findings indicate that the evaluated SLM generates highly diverse and natural output in terms of prosody and spontaneous behavior. It performs well in both read-speech and conversational speaking styles. However, we find that the model's primary limitations lie in its robustness, as evident in its low intelligibility and speaker consistency. § EVALUATED MODEL: BARK Several SLMs capable of TTS have been proposed. Our work evaluates Bark <cit.> [Official repository: https://github.com/suno-ai/bark. Repository commit used in this paper: 599fed0. Model weights retrieved from https://huggingface.co/suno/bark, version number: a3f055a.]. There are three reasons for making this choice. First and foremost, both code and weights of Bark are open-sourced. The alternatives are, to the best of our knowledge, not publicly available through code repository or inference API. Secondly, Bark is representative of the current state-of-the-art discrete-token SLM approach to synthesis in terms of model/data scale, model architecture, and training methodology <cit.>. 
This increases the likelihood that our findings may apply to other models not specifically evaluated in this study. Lastly, we deduce from its output that Bark is trained on a mixed-style dataset, while prior SLMs are trained on either read/audiobook-only corpora <cit.>, or spontaneous/conversational-only corpora <cit.>. This allows us to assess Bark’s performance across different speaking styles and investigate the impact of employing a mixed-style dataset for SLM training. Even though Bark is open-sourced, no official technical document was released. To bridge this gap of knowledge, we derive the details of the model presented in this section directly from the model's official codebase. Should any discrepancies arise between our description and the actual implementation, the latter should be considered authoritative. §.§ Model Architecture Bark consists of 3 levels of discrete-token speech language model as illustrated in Figure <ref>. The three levels operate in succession. The first level, called text-to-semantic, is a decoder-only auto-regressive transformer that takes in text tokens, in this case text language model tokens, additionally encodes a given speaker prompt in the form of semantic tokens as well as current generated sequence of semantic tokens, and outputs next semantic-token distribution. The semantic token vocabulary has a size of 10,000 and is believed to be similar to semantic tokens used in AudioLM [Suno-AI acknowledged AudioLM as reference for building Bark <cit.>.] <cit.> and SPEAR-TTS <cit.>. Semantic tokens serve as a middle representation between text and more granular audio tokens. The second level, semantic-to-coarse, is another decoder-only auto-regressive transformer. It takes generated semantic token sequence from the first level, additionally encodes a speaker prompt in the form of combined semantic and audio tokens along with current generated sequence of audio tokens, and outputs distribution for the next audio token. The audio tokens are from an audio compression model EnCodec <cit.>, which compresses speech audio into 8 separate code books each of size 1024 at 75Hz. Semantic-to-coarse transformer generates tokens for the first 2 code books or the coarse tokens. The third level, coarse-to-fine, takes in generated coarse tokens from the second level, and generates tokens for the other 6 code books of the EnCodec model or the fine tokens. The generation is conditioned on a speaker prompt in the form of fine tokens. This level is different from the first two in that it is an encoder-only transformer, i.e. it generates in parallel and not auto-regressively. After running the third level, the resulting sequence for all 8 code books of the EnCodec model is fed into the corresponding EnCodec decoder to obtain the final waveform. In order to generate speech from a given speaker, Bark is prompted at all three levels with tokens from the given speaker. This is a standard technique in SLMs <cit.>, often referred to as “zero-shot" synthesis since only a short clip from a speaker is needed to condition the synthesis on that speaker. However, it is not clear if the prompt tokens provide additional context besides speaker identity. For example, if the prompt tokens are from the prior utterance, does the model take into account the prosody or semantics in the prompt to generate more meaningful speech? We test this hypothesis by replacing the speaker prompt at semantic-to-coarse level with encoded semantic tokens from prior utterance. 
This model condition is tested in the listening tests in Section <ref>. §.§ Baseline: VITS We use a multi-speaker VITS <cit.> [Implementation: https://github.com/coqui-ai/TTS] trained on VCTK dataset veaux2017cstr as baseline. Our justification for this choice is two-fold. Firstly, this model is representative of current state-of-the-art TTS models in several regards: architecture, input representation (phonemes), and training corpus. The model is designed to be probabilistic and is therefore able to address the one-to-many problem in TTS, which is an important aspect of this evaluation. Secondly, VITS models phoneme duration explicitly, achieving a level of intelligibility that can be seen as a top line for TTS models. Thus, we can effectively assess Bark's intelligibility through comparison with VITS. We note that VITS differs from Bark in several dimensions, including model architecture, data representation, and training data. Nonetheless, we designed extensive evaluations that are fair from the black-box usability point of view. §.§ Parameter Count and Runtime To clearly see the scale comparison between the models, we present model size and synthesis real-time-factor (RTF) in Table <ref>. The Bark models have a much bigger parameter count than VITS. Even the smaller Bark-small has 10 times the parameter count of VITS. This is partially due to the ultra-large vocabulary size used by Bark as it encodes text using a variation of word-piece used in large text language models, while a conventional phoneme-based TTS like VITS only has a small phoneme vocabulary. On the runtime side, Bark models are much slower to run compared to VITS. The main contributing factor is Bark's autoregressive nature. VITS generates in parallel by a fully-convolutional network. Furthermore, the much higher parameter count in Bark could also have contributed to slower synthesis speed. § EVALUATION METHOD In this section, we present our methodology for evaluating Bark on the following dimensions: speaking style, intelligibility, speaker consistency, prosody variation, spontaneous behaviour, subjective listening impression. We elaborate on the rationale and specific methods employed for each of these dimensions. Speaking style is interwoven with all other evaluated dimensions as we utilize two types of text inputs, corresponding to two speaking styles: reading and conversational, in all evaluations. In all evaluations besides the listening tests, we use the set of 10 speaker prompts corresponding to 10 different speakers provided in the Bark repository to guide model synthesis, which we refer to as Bark speaker 0 to 9. To reduce the potential impact of speaker differences on our evaluation, we match each Bark speaker with a similar speaker from the VITS model’s VCTK training dataset. This matching is based on the speaker similarity metric, as detailed in the speaker consistency evaluation in section <ref>. This results in 10 Bark speakers and their corresponding 10 VITS speakers for all evaluations, except for listening tests for which the speaker selection process is described in section <ref>. §.§ Text Input: Two Speaking Styles To investigate model performance across different speaking styles, we utilize two distinct sets of text inputs corresponding to two speaking styles: read-speech text from LibriTTS zen2019libritts and conversational text from DailyDialog li2017dailydialog. 
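For concreteness, the generation of individual evaluation samples from a text prompt and a Bark speaker can be sketched as follows, based on the usage documented in the Bark repository. This is not the evaluation script itself; the speaker-prompt name v2/en_speaker_6, the example sentence and the file names are assumptions for illustration.

from scipy.io.wavfile import write as write_wav
from bark import SAMPLE_RATE, generate_audio, preload_models

preload_models()                                   # load the text-to-semantic, coarse and fine models

text = "Well, I suppose we could give it another try."
for i in range(3):                                 # several stochastic samples per prompt and speaker
    audio = generate_audio(text, history_prompt="v2/en_speaker_6")
    write_wav(f"bark_speaker6_sample{i}.wav", SAMPLE_RATE, audio)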
§.§.§ Read-Speech Text: LibriTTS To evaluate read speech synthesis, we use text from the LibriTTS zen2019libritts corpus as input text prompts to the TTS models. This corpus consists of open-access books read aloud by multiple speakers. Utterance ordering in the original book is carefully recorded in the metadata, thus it is possible to retrieve the preceding utterance. We utilize this aspect of the corpus to evaluate synthesized speech in the context of its preceding utterance. We randomly sampled 2,400 utterance texts from the official test split for all experiments. §.§.§ Conversational Text: DailyDialog To evaluate conversational synthesis, we use the DailyDialog li2017dailydialog corpus as text input. It consists of written single-themed conversations between two interlocutors without clearly defined roles. We use 2,400 randomly chosen utterances from the official validation split of the corpus for all experiments. §.§ Intelligibility We apply Automatic Speech Recognition (ASR) to synthesized samples. The word error rate (WER) between ASR output and TTS input is used as a proxy for intelligibility of synthesized speech as prior study has found that ASR accuracy is correlated with human transcription accuracy <cit.> in evaluating TTS. The ASR used is Whisper <cit.> base model specifically trained for English. §.§ Speaker Consistency We subjectively found that the Bark output has minor speaker drift such that several sampled outputs from the same speaker sound like different speakers. To probe how consistent is Bark in this regard, we use speaker similarity score from a strong speaker identification model ECAPA-TDNN <cit.>[Implementation: https://speechbrain.github.io/], calculated as cosine-distance of its embedding space, as speaker consistency metric for synthesized speech samples. Taking synthesized samples from the same speaker, we calculate speaker similarity scores for all enumerated sample pairs, we then calculate mean and standard deviation of the scores. These two statistics are used as the metrics for speaker consistency within the same speaker. A high mean and low standard deviation indicates high speaker consistency, and vice versa for low speaker consistency. We also calculate inter-speaker similarity as we hypothesize that as the model drifts away from the given speaker it drifts towards another speaker in its distribution. Thus, we expect to see that the model with low within-speaker speaker consistency to have higher inter-speaker similarity. §.§ Prosodic Variation One of the predominant challenges in text-to-speech is known as the 'one-to-many problem'. The same input text often has multiple reasonable realizations in the speech domain. Consequently, a good TTS model should be able generate range of plausible prosodic interpretations given the same input text. We will assess the how plausibly the prosody is in <ref> Listening Test. This part of the evaluation is focused on how varied is the prosody when the system is presented with identical sentence inputs. Here we use a quantitative evaluation method, based on automatically extracted prosodic features. The suitability of the generated prosody will be evaluated in the listening test. The prosody dimensions measured are fundamental frequency f0 and speech rate. f0 is measured using YIN algorithm <cit.> [Implementation: https://github.com/brentspell/torch-yin/tree/main]. Speech rate is measured as syllables per second. 
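A simplified stand-in for these prosody measurements, not the code used in the experiments, is sketched below. It assumes a dictionary samples[utt_id] mapping to the wav paths of all speakers and resyntheses of that text, together with per-sample ASR transcripts; it uses librosa's pYIN implementation in place of the YIN implementation referenced above and counts syllables with a crude vowel-group heuristic.

import numpy as np
import librosa

def mean_f0(path, fmin=60.0, fmax=400.0):
    y, sr = librosa.load(path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)   # f0 is NaN on unvoiced frames
    return float(np.nanmean(f0)) if np.any(voiced) else float("nan")

def syllable_rate(path, transcript):
    y, sr = librosa.load(path, sr=16000)
    duration = len(y) / sr
    vowels = "aeiouy"
    n_syl = 0
    for w in transcript.lower().split():
        w = w.strip(".,!?;:'\"")
        groups = sum(1 for i, c in enumerate(w)
                     if c in vowels and (i == 0 or w[i - 1] not in vowels))
        n_syl += max(1, groups)                                    # every word counts at least one syllable
    return n_syl / duration if duration > 0 else float("nan")

def per_utterance_std(samples, transcripts):
    # samples: utt_id -> wav paths; transcripts: wav path -> ASR transcript of that sample
    stats = {}
    for utt, paths in samples.items():
        f0s = [mean_f0(p) for p in paths]
        rates = [syllable_rate(p, transcripts[p]) for p in paths]
        stats[utt] = {"f0_std": float(np.nanstd(f0s)), "rate_std": float(np.nanstd(rates))}
    return stats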
We found that ASR transcription is more accurate in counting syllables than text input, since there could be significant discrepancy between synthesized speech and input text especially in Bark models. We thus opted to use ASR transcription to count syllables when measuring speech rate. After measuring the two prosody metrics on each speech sample, we calculate their standard deviations by utterance text. Each utterance text is synthesized with different speakers and has several sampled synthesis for the same speaker. We aggregate all speakers and all sampled synthesis per speaker for the same utterance test into a list, and calculated the standard deviation of this list. We would then summarize standard deviations for all utterances by model to understand between-model difference in the amount of prosody variation. Through informal subjective listening tests, we found that Bark samples are more varied for the same text input. We therefore hypothesize that it would have high standard deviation in both prosody measures for most samples. §.§ Spontaneous Behaviour We measure two spontaneous behaviours present in the synthesis output: the insertion of fillers and pauses. Both can be measured automatically. We count the number of fillers through ASR output. Here we define fillers as speech tokens that do not add to the propositional content of the message and use the following set: “um", “uh", “ah", “mm", “hm", “hmm", “huh", “er", “eh", “mhm", “mmh". The first two are most common. We tuned the optional prompt input in Whisper to elicit better filler transcription on a small test set. It still misses some filler occurrences, so this count can be seen as under-measured but the under-measurement should equally affect all evaluated models. The pauses are measured as non-speech segments within the speech utterance. We first run the speech samples through a Voice Activity Detector [https://github.com/snakers4/silero-vad] to get speech segments. The spaces between them are non-speech segments and are treated as pauses, the total length of which is the pause length of the speech segment. §.§ Listening Tests We conduct listening tests to assess two aspects of synthesized speech with human listeners: overall naturalness and contextual suitability. For each test we conduct two separate parts in the two speaking styles. We assess 3 samples of each input sentence from each evaluated model. All tests follow the MOS listening test specification in ITU standard P.800 <cit.>. The main deviation from the standard is the instruction texts given to listeners, which are specific to the two tests. We use only Bark-base in these tests as comparing scales is not the main focus of these tests. §.§.§ Test 1: MOS-N(aturalness) The MOS-N test is similar to how standard speech naturalness assessment is done in TTS model development. We ask the listeners to rate "How natural does the speech sample sound?". It is a single combined measure of different aspects of the speech audio, including signal quality, noise level, and prosody naturalness. We use this test to benchmark Bark's synthesis output in the same way as standard TTS research practice. §.§.§ Test 2: MOS-Contexual-S(uitability) As mentioned in Section <ref>, we also want to assess how Bark's high variation in prosody and spontaneous behavior are perceived by the human listener. We hypothesize that such high variation makes Bark's synthesis output more suitable for a variety of specific contexts. 
This contrasts with conventional TTS systems which may come across as monotonous and less tailored for distinct scenarios due to their limited prosodic variance. Consider a brief dialogue: "A: How are you? B: I'm fine.". The response "I'm fine." can be articulated in myriad ways to imply different sentiments. Each rendition might emphasize unique prosodic elements or spontaneous nuances to relay the intended meaning appropriate to the context. Prior studies have validated that such contextual suitability can be abundantly detected in listening tests similar to ours <cit.>. We hypothesize that Bark can synthesize such variations, which would often align with human expectations in terms of contextual appropriateness or suitability. Conversely, VITS, with its constrained prosodic range, might only match a singular context or occasionally might not align with any. Similarly to the MOS-N test, we conduct separate parts of MOS-Context-S test in the two speaking styles: read-speech and conversational. For the read-speech evaluation, we leverage the LibriTTS corpus zen2019libritts, where all sentences orginate from books. Consequently, the immediate preceding sentence serves as context to a speech sample. In the conversational test, we use the preceding dialogic exchange (i.e. the turn of the interlocutor) in DailyDialog li2017dailydialog as context. Both the context and the input are shown to the listeners. We ask the listeners to rate "How suitable does the speech sample sound in the context?". §.§.§ Speaker Selection for Listening Tests We selected two Bark speakers for listening tests: speaker 6, which exhibits a spontaneous conversational style with a male voice; and speaker 9 which is less spontaneous as shown in Table <ref>, but has varied prosody, and sounds like a female voice. For each of the two speaking styles, we randomly sampled a VITS speaker as baseline comparison. As mentioned at the end of Section <ref>, we test an additional model variation of Bark where the speaker prompt is replaced by a context prompt from the prior utterance. This variation is applied to Bark-9 for LibriTTS and Bark-6 for DailyDialog. Thus, we have the following set of model-speakers for LibriTTS: Bark-6, Bark-9, Bark-context-9 and VITS-p243; and the following set for DailyDialog: Bark-6, Bark-9, Bark-context-6, and VITS-p247. In order to neutralize any perceptual biases stemming from voice timbre variations across the speakers, we apply voice conversion to all test speech samples using a third VITS speaker. This speaker is chosen based on equidistant speaker embedding metric[Calculated in the same way as Section <ref>.] to the three evaluated speakers, ensuring a roughly equivalent challenge in converting each of the three primary speakers to the target VITS speaker. We use a state-of-the-art voice conversion model FreeVC <cit.> [Implementation: https://github.com/coqui-ai/TTS]. § RESULTS §.§ Intelligibility For automatically assessing intelligibility, we employ the ASR word error rate (WER) from Whisper as an indicative measure, as detailed in Section <ref>. Both the ground-truth TTS input and the Whisper transcriptions are normalized by removing punctuations and converting to lowercase. The results are shown in Figure <ref>. Two prominent observations emerge from this analysis: 1) Bark consistently registers a higher WER in comparison to VITS across both corpora. The Bark speaker with the lowest WER still surpasses the highest WER recorded by a VITS speaker. 
2) When comparing two Bark models at different scales the larger model consistently outperforms its smaller counterpart in terms of average intelligibility. This observation holds true for 9 out of 10 speakers in LibriTTS and 7 out of 10 in DailyDialog. §.§ Speaker Consistency Speaker similarity matrices are shown in Figure <ref>. Ideally, the diagonal cells, which denote intra-speaker similarity, should exhibit high values. This would indicate that when models are conditioned on a specific speaker, the resulting samples consistently align with that speaker's profile. While it's evident that diagonal cells generally manifest heightened speaker similarity compared to the off-diagonal cells (representing inter-speaker similarity), this trend is much more pronounced in VITS. For several Bark speakers, there's a concerning observation: the off-diagonal cells showcase intensities nearly matching the diagonal cells, especially in speaker 2 and 7. This implies that the samples synthesized from a specific Bark speaker bear a similarity to samples from other speakers almost as much as they do to their own. Simply put, Bark appears to struggle in maintaining robust speaker conditioning. The synthesized output might not always faithfully replicate the conditioned speaker's attributes. Furthermore, when Bark diverges from the conditioned speaker, it tends to gravitate towards other speakers within its training set. This is evidenced by the heightened similarity values in the off-diagonal cells, which correspond to other modeled speakers. Bark-base has slightly more robust intra-speaker consistency as evident in higher values in diagonal cells in 7 out of 10 speakers, otherwise the two sizes of Bark models bear little difference. §.§ Prosodic Variation As detailed in Section <ref>, we measured two prosodic dimensions, f0 and speech rate. We then calculated standard deviation of the measured values by utterance. The results are visualized in Figure <ref>. Our hypothesis is that Bark generates more varied prosody than VITS. This is indeed the case as evident in Bark's higher standard deviation in both measured prosody metrics compared to VITS. The results also show that the spread of standard deviation values is more expansive for Bark. This indicates not only a generally more varied prosody, but also a more fluctuating degree of prosodic variation across different utterances when compared to VITS. The distributions of small and base Bark models are similar suggesting that scale does not affect level of prosodic variation in SLM. §.§ Spontaneous Behaviour We measured spontaneous behaviours as detailed in Section <ref>, insertion of fillers and pauses. Results in DailyDialog are shown in Table <ref>, results are similar in LibriTTS. For Bark speakers, we see that speaker 6 inserts most fillers per utterance at around 0.55 on average, while other speakers like 0 and 9 do not insert fillers at all. None of the VITS speakers insert fillers as expected, since the model's explicit duration modeling strongly discourages it from deviating from the input text. Additionally, Bark tends to insert longer pauses on average compared to VITS. Both pause and filler insertion among Bark speakers also display considerable variance, with standard deviation as high as 2.3 for filler count and 1.1 for pause duration in speaker 6 and 3, while VITS only reaches 0.1 and 0.5 in corresponding measures. Bark results in Table <ref> are from Bark-base. Bark-small obtains similar results. 
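For reference, the two spontaneous-behaviour measures summarised above can be computed as in the following minimal sketch (ours, not the evaluation code). The filler inventory is the one listed in the evaluation method; the VAD speech segments are assumed to be provided as (start, end) times in seconds by an external detector, and the example values are hypothetical.

FILLERS = {"um", "uh", "ah", "mm", "hm", "hmm", "huh", "er", "eh", "mhm", "mmh"}

def count_fillers(asr_transcript):
    tokens = [t.strip(".,!?;:").lower() for t in asr_transcript.split()]
    return sum(1 for t in tokens if t in FILLERS)

def pause_length(speech_segments):
    # total non-speech time between consecutive VAD speech segments, in seconds
    segs = sorted(speech_segments)
    return sum(max(0.0, start_next - end_prev)
               for (_, end_prev), (start_next, _) in zip(segs, segs[1:]))

print(count_fillers("Um, I think, uh, it went fine."))     # -> 2
print(pause_length([(0.0, 1.2), (1.9, 3.4), (3.5, 5.0)]))  # -> 0.8  (0.7 + 0.1)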
§.§ Listening Tests We conducted listening tests detailed in Section <ref>. Initial observations from a pilot study revealed that Bark output with high WER are easy to detect by listeners and are rated the lowest, mostly because they contain clearly audible synthesis mistakes like non-speech noise or strange voicing. Our intelligibility results in Section <ref> already revealed that Bark is not robust in this regard. Therefore, including samples with compromised intelligibility in the listening test could introduce a confounding variable, potentially obfuscating genuine insights about the TTS. To mitigate this, we excluded samples with a WER exceeding 0.1, applying this criterion uniformly to both Bark and VITS outputs. After this filtering, we retained 33 utterances for LibriTTS and 39 from DailyDialog [Audio samples: <https://swatsw.github.io/lrec24_eval_slm/>]. Each utterance has 3 intelligible samples from each model. We recruited 30 native English speakers for each of the 4 tests (2 parts in both MOS-N and MOS-Context-S) on crowd-sourcing platform Prolific [prolific.com]. Each participant was tasked with rating between 50 to 60 samples, ensuring a balanced representation across utterances and model-speakers. Each sample received 5 ratings on average, the mean of the ratings is used as the MOS for that sample. For each utterance, we calculate two metrics, the mean MOS of the 3 samples, and the range (max - min) of the 3. The results are shown in Table <ref>. In mean MOS, we find Bark speaker 9 to be strong in all test scenarios while VITS speakers are rated the lowest or the second lowest. This suggests that, in both read-speech and conversational styles, Bark synthesis is more natural and more contextually appropriate to VITS. Bark-context, the Bark variation with prior utterance as prompt, has the highest rating in LibriTTS but not in DailyDialog. Therefore it is not clear whether providing prior utterance as context through prompt tokens improves synthesis. This puts into question if speech language model TTS incorporates prosodic or semantics context in the prompt. We apply Tukey HSD test (with FWER=0.05) on all pair-wise model-speaker comparisons in all 4 tests and find that Bark-context-9 is significantly better than VITS-p243 in both MOS-Naturalness and MOS-Context-Suitability in LIibriTTS, however no significance is found in DailyDialog. This suggests that Bark's advantage over VITS is stronger in read-speech synthesis than in conversational synthesis. The MOS range reveals how varied the ratings are for the 3 samples of each utterance. As results in prosodic variation (Section <ref>) have shown, Bark produces more varied samples with the same input text compared to VITS. This is again evident in the MOS range statistics where Bark models have clearly higher MOS range across the test sets. We further validate this by comparing MOS variances with Fligner-Killeen test which shows that the Bark speaker with the highest variance in each test is significantly more varied than VITS. § DISCUSSION Admittedly, our evaluation was centered on a single SLM, primarily due to the limited public access of similar models. This focus may limit the generalizability of our findings across other SLMs. When comparing our results with existing literature, it becomes apparent that there are significant differences in performance between models. For instance, in terms of intelligibility, BASE-TTS <cit.> reported a 6.5 WER, while Bark obtained 19.2 on the same set of text inputs. 
Similarly, VoxtLM <cit.> reported a modest 8.8 Character Error Rate (CER). Thus, both BASE-TTS and VoxtLM are significantly more robust SLMs in terms of TTS intelligibility, highlighting substantial variability between SLM models. Another example of such between-model difference is how model performance changes with increased scale. While we found that larger Bark models are more robust in terms of intelligibility and speaker consistency, VoxtLM reported that increased model size did not result in increased synthesis robustness, as their 1.3B model obtained worse CER than their 350M model. They found that instead the speech token vocabulary size is more important for high TTS intelligibility as increased token vocabulary increased TTS intelligibility. Nonetheless, our findings resonate with patterns observed in the current literature. For instance, BASE-TTS <cit.> demonstrated that listener preferences tend to favor larger models trained on more data. We observed a similar trend in Bark, where both intelligibility and speaker consistency improved as the model scale increased. We hope that future research will leverage our evaluation framework to explore additional emerging SLMs to gain a deeper understanding of the dynamics between model scale, training data volume, and overall TTS performance in SLMs. A distinguishing characteristic of Bark is its capability for multi-lingual synthesis. Besides English, Bark provides a range of other languages. Enabling multi-lingual synthesis in a single model is a powerful feature not typically seen in traditional TTS models. This achievement has also been mirrored by several other SLMs <cit.>. Given the observed speaker inconsistencies within Bark, it is plausible that languages might influence one another within the model, leading to intriguing interactions that warrant investigation in the future. § CONCLUSIONS We evaluated TTS from a large discrete token-based speech language model (SLM), Bark, along the dimensions of speaking style, intelligibility, speaker consistency, prosody variation, spontaneous behaviour. Through a series of carefully designed automatic quantitative measurements and subjective listening tests, we find that Bark generates highly variable and natural prosody as well as spontaneous behaviours. However, it falls short in robustness when compared with a conventional TTS model, particularly in terms of intelligibility and speaker consistency. Interestingly, we observed that augmenting the model's scale marginally enhances its robustness, suggesting that scaling might be a promising avenue to increase the robustness of SLMs. We believe that our findings can serve as a benchmark for future progress in the development of generative SLMs for synthesis. § ACKNOWLEDGEMENTS This work is supported by Digital Futures project Advanced Adaptive Intelligent Systems (AAIS), and the Swedish Research Council project Perception of speaker stance (VR-2020-02396). * § BIBLIOGRAPHICAL REFERENCES lrec-coling2024-natbib § LANGUAGE RESOURCE REFERENCES lrec-coling2024-natbib languageresource
http://arxiv.org/abs/2405.08647v1
20240514142214
Output-decomposed Learning of Mealy Machines
[ "Rick Koenders", "Joshua Moerman" ]
cs.LO
[ "cs.LO", "cs.LG" ]
D-CAST: Distributed Consensus Switch in Wireless Trustworthy Autonomous System Dachao Yu School of Information and Control Engineering, Qingdao University of Technology dachaoyu1994@gmail.com Jiayuan Ma College of Electronic and Information Engineering Tongji University AndyMa5@outlook.com Hao Xu College of Electronic and Information Engineering Tongji University Hxu@tongji.edu.cn Received XXX; accepted YYY ========================================================================================================================================================================================================================================================================================================================================== We present an active automata learning algorithm which learns a decomposition of a finite state machine, based on projecting onto individual outputs. This is dual to a recent compositional learning algorithm by <cit.>. When projecting the outputs to a smaller set, the model itself is reduced in size. By having several such projections, we do not lose any information and the full system can be reconstructed. Depending on the structure of the system this reduces the number of queries drastically, as shown by a preliminary evaluation of the algorithm. Active Automata Learning, Model Learning, Compositionality, Finite State Machines § INTRODUCTION Model learning is an automated technique to construct a finite state machine for a closed-box system. Using only observations based on input and output behaviour, model learning algorithms are successfully applied to systems, e.g., in order to find security flaws in DTLS <cit.>, or to understand legacy systems at ASML <cit.>. Many more applications of model learning are listed by <cit.>. In these applications, systems are assumed to behave like finite state machines, and algorithms such as and are very capable of inferring the state machine from observed inputs and outputs. However, often times, systems are not just a finite state machine; they are engineered in a structured way, by re-using common components or composing separate modules into a bigger system. This additional structure is not used in these learning algorithms. Only recently new learning algorithms are described which incorporate some sort of compositionality into model learning, see for instance the papers by <cit.>; and <cit.>. In this paper we take a dual approach to that of <cit.>. Instead of decomposing the set of inputs into independent smaller sets, we decompose the output. This is similar to the approach of <cit.>, but with the advantage that our new approach does not assume a priori knowledge of the decomposition and is always applicable. The main idea is simple: Take a Mealy machine with inputs from a set X and outputs in a set Y (fig:main-idea-a). Then for each output y ∈ Y, we can imagine a separate output wire which indicates whether the current output equals y or not (fig:main-idea-b). These output wires have the property that exactly one output wire is active. Observing each wire individually, we may learn a model of the state machine associated to that output only. We call such a machine where we focus on only one output a projection. These models may be smaller than the whole system (fig:main-idea-c). An example of decomposing the system this way can be seen in fig:example. The automaton in fig:automaton has a cyclic output for the input a, and the input b reverses the cycle. 
This automaton can be decomposed into three smaller automata, one for each of the three outputs. The three projections all have 3 states, while the original automaton has 6 states. The projection on the output x can be seen in fig:projection. The other two projections (onto y and z) have a similar, but different shape. One might worry that learning three separate automata with 3 states would require more work, compared to learning a single 6-state automaton. Luckily, the three automata are not independent and the learning algorithm can re-use certain observations. Our learning algorithm which learns the three 3-state automata requires fewer queries than learning the 6-state automaton with . [node distance = 2.6 cm, on grid, auto] (q0) [state, initial, initial text = ] q_0; (q1) [state, below of=q0] q_1; (q2) [state, below of=q1] q_2; (q3) [state, right of=q0] q_3; (q4) [state, below of=q3] q_4; (q5) [state, below of=q4] q_5; [tr] (q0) edge node a/y (q1) (q1) edge node a/z (q2) (q2) edge [bend left, left] node a/x (q0) (q5) edge node a/y (q4) (q4) edge node a/x (q3) (q3) edge [bend left, right] node a/z (q5) (q0) edge [bend left] node b/x (q3) (q3) edge [bend left] node b/x (q0) (q1) edge [bend left] node b/y (q4) (q4) edge [bend left] node b/y (q1) (q2) edge [bend left] node b/z (q5) (q5) edge [bend left] node b/z (q2); In practice, it varies how many states the projections have compared to the original automaton. In the worst case, one of the projections requires the same number of states as the original automaton. In this case, the benefits of our approach do not apply, and the algorithm may perform worse than . For automata where the projections are smaller than the full automaton, though, our approach can be much better than . Our algorithm is based on the algorithm by <cit.> and is still work-in-progress. We expect that the improvements we see are transferable to more efficient algorithms such as by <cit.>. § PRELIMINARIES We recall the common definitions and notations on sets and words (or sequences). The set of all words with symbols in A is denoted by A^*, which includes the empty word ∈ A^*. The set of non-empty words is A^+. The last symbol of a word w ∈ A^+ is denoted by (w) ∈ A. Concatenation of two words u, v is written as u · v or as uv. This generalises to sets of words: U · V = {uv | u ∈ U, v ∈ V} for U, V ⊆ A^*. Given a function f A → B, we get an induced function f^* A^* → B^*, where f acts on each symbol. §.§ Machines We focus on deterministic finite state machines. A Mealy machine is a tuple M = (Q, X, Y, q_0, δ, λ) where Q is a finite set of states; X is a finite set of inputs; Y is a finite set of outputs; q_0 ∈ Q is the initial state; δ Q × X → Q is the transition function; and λ Q × X → Y is the output function. In figures we depict transitions as q q' whenever δ(q, a) = q' and λ(q, a) = y. We extend δ and λ to words in I^* in the usual inductive way: δ^* Q × X^* → Q λ^* Q × X^* → Y^* δ^*(q, ) = q λ^*(q, ) = δ^*(q, aw) = δ^*(δ(q, a), w) λ^*(q, aw) = λ(q, a)·λ^*(δ(q, a), w) When convenient, we write δ and λ instead of δ^* and λ^*. We define the semantics of a Mealy machine M = (Q, X, Y, q_0, δ, λ) as the function M X^* → Y^*, assigning to each input word its output: M(w) = λ^*(q_0, w) §.§ Projections In general, we can compose the output function λ of a Mealy machine with any function f Y → Z to obtain a Mealy machine with outputs in Z instead of Y. 
Formally, given a Mealy machine M = (Q, X, Y, q_0, δ, λ) and a function f Y → Z, the composition is M^f = (Q, X, Z, q_0, δ, f ∘λ). For a Mealy machine M with output set Y and function f Y → Z, we have: M^f = f^* ∘M For our purpose of projecting onto a single output y ∈ Y, we use the function y which is defined by y(x) = 1 if x = y and y(x) = 0 otherwise. Given a Mealy machine M and an output y ∈ Y, its projection onto y is defined by M^y and will simply be denoted by M^y. Its set of outputs is {0, 1}. Note that M^f may have pairs of states which are behaviourally equivalent, while not being behaviourally equivalent in M. This fact provides an opportunity to reduce the size of a system we are trying to learn. When composing with a function f, we might lose information, as certain outputs will not be distinguishable anymore. With y we almost definitely lose information because all outputs besides y will be indistinguishable. To cope with this, we consider the set of all projections {y}_y ∈ Y and this ensures we can reconstruct M from its projections. In order to reconstruct the Mealy machine from its projections, we introduce the following type of composition, which takes the product of all the state spaces. Let I be an index set and M_i = (Q_i, X, Y_i, q_0,i, δ_i, λ_i) be Mealy machines for each i ∈ I on a common input X. Let Y be any set and f_i Y → Y_i be functions where Y_i are the output sets of M_i. We define the composition Π_i ∈ I (M_i, f_i) of M_i along ⟨ f_i ⟩_i ∈ I to be a Mealy machine Π_i ∈ I (M_i, f_i) = (Π_i ∈ I Q_i, X, Y, ⟨ q_0,i⟩_i ∈ I, δ', λ'), where δ' and λ' are given by the following rule: at (0, 0) (top) f_i (y) = y_i q_i q_i' ∀ i ∈ I; at (0, -0.8) ⟨ q_j ⟩_j ∈ I⟨ q_j' ⟩_j ∈ I; (top.south west) – (top.south east); Note that the rule for the transition structure is not always well-defined. If the machines M_i are constructed from M^f_i, we can ensure that there is at least one transition for each state and input. Additionally, if the functions f_i are jointly injective, then there is at most one transition for each state and input. So under these restrictions, the composed Mealy machine is completely specified and deterministic. A set of functions {f_i Y → Z_i}_i ∈ I on a common domain is jointly injective if ⟨ f_i ⟩_i ∈ I Y →Π_i ∈ I Z_i is injective. For a Mealy machine M, if the set {f_i}_i ∈ I is jointly injective, then the Mealy machine Π_i ∈ I (M^f_i, f_i) is well-defined and Π_i ∈ I (M^f_i, f_i) = M The set {y Y →{0, 1}}_y ∈ Y is jointly injective. §.§ -Learning We briefly explain the basics of the algorithm by <cit.>. The aim of is to construct a Mealy machine M from a given function L X^* → Y^* such that M = L. The function L is “closed-box” and can only be interacted with as an oracle, meaning that L(w) can be queried for each individual input word w. Additionally, the algorithm uses an equivalence oracle to check whether an hypothesised machine M is correct. If the hypothesis is incorrect, i.e., M≠ L, obtains a counterexample w ∈ X^*. The following definitions are similar to the original paper on but specialised to Mealy machines. An observation table (for L) is a tuple (S, E, T) where S ⊆ X^* is a finite prefix-closed set of prefixes, E ⊆ X^+ is a finite set of suffixes, and T is a function T S ∪ S·X → (Y^*)^E such that T(s)(e) = L(se). An observation table (S, E, T) is * closed if for all s a ∈ S · X there is s' ∈ S such that T(s') = T(s a), * consistent if for all s, s' ∈ S with T(s) = T(s') we have T(s a) = T(s' a) for all a ∈ X. 
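To make the projection and recomposition constructions concrete, the following is a minimal Python sketch, not the authors' implementation (which builds on the LearnLib framework): it defines a small Mealy machine, projects it onto each output via the indicator functions χ_y, and checks that the projections jointly reconstruct the original semantics. The two-state example machine, the class names, and the bound on word length are illustrative assumptions, not taken from the paper.

```python
from itertools import product

class Mealy:
    """Minimal Mealy machine: delta and lam are dicts keyed by (state, input)."""
    def __init__(self, states, inputs, init, delta, lam):
        self.states, self.inputs, self.init = states, inputs, init
        self.delta, self.lam = delta, lam

    def run(self, word):
        """Semantics of M: map an input word to its output word (lambda^*)."""
        q, out = self.init, []
        for a in word:
            out.append(self.lam[(q, a)])
            q = self.delta[(q, a)]
        return out

def project(machine, y):
    """Projection M^y: compose the output function with chi_y (1 iff output == y)."""
    lam_y = {k: int(v == y) for k, v in machine.lam.items()}
    return Mealy(machine.states, machine.inputs, machine.init,
                 dict(machine.delta), lam_y)

# A small illustrative two-state machine over inputs {a, b} and outputs {x, y}
# (not the six-state example of the paper's figure).
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
lam   = {(0, 'a'): 'x', (0, 'b'): 'y', (1, 'a'): 'y', (1, 'b'): 'x'}
M = Mealy({0, 1}, {'a', 'b'}, 0, delta, lam)

# The indicator projections are jointly injective, so reading off, at every
# position, the unique output whose projection emits 1 reconstructs M.
projections = {y: project(M, y) for y in ('x', 'y')}
for n in range(1, 4):
    for w in product('ab', repeat=n):
        outputs = {y: p.run(w) for y, p in projections.items()}
        recomposed = [next(y for y in outputs if outputs[y][i] == 1)
                      for i in range(n)]
        assert recomposed == M.run(w)
print("projections jointly reconstruct the original machine")
```

Note that this sketch keeps the full state set in each projection; the gain described in the paper comes from the fact that a projection M^y may have many behaviourally equivalent states and can therefore be represented, and learned, as a much smaller machine.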
§ OUTPUT DECOMPOSED LEARNING Before describing our algorithm, we adopt the above definitions of the observation table and its properties to our setting of projections. Given an observation table OT = (S, E, T) and an output y ∈ Y, we define the projection of the table onto y by y OT = (S, E, y T) where (y T)(s)(e) = y(T(s)(e)). An observation table OT is * output-closed if y OT is closed for every y ∈ Y, * output-consistent if y OT is consistent for every y ∈ Y. These new notions are related to the regular closedness and consistency properties for observation tables in the following way. Let OT be an observation table, then: * OT is closed OT is output-closed, * OT is consistent OT is output-consistent. An output-closed and output-consistent observation table defines a unique Mealy machine for every output. However, these machines may be in conflict with each other. There may be an input for which all components give the output 0, or there may be an input for which multiple components give the output 1. Therefore, we require that the components are consistent with each other. A family of Mealy machines {M^y}_y ∈ Y with outputs in {0, 1} is component consistent if for every input word w ∈ X^+ there is a exactly one output y ∈ Y such that (M^y(w)) = 1. Given a component-consistent family of Mealy machines {M^y}_y ∈ Y with outputs in {0, 1}, the unique Mealy machine they represent is M = Π_y ∈ Y (M^y, y). §.§ The Algorithm alg:olstar describes the algorithm in detail. It mimics but with closedness and consistency replaced by output-closedness and output-consistency. Additionally it has to check component-consistency. The equivalence queries may only ask whether the (composed) hypothesis H is equivalent to M; the teacher does not check the individual components H^y. [tb] Input alphabet X, MAT providing MQs and EQs Mealy machine equivalent to the target M S ←{}; E ← X; Initialize T with MQs, add observed outputs to Y EQ(H) = YES (S, E, T) is not output-closed or output-consistent (S, E, T) is not output-closed find y ∈ Y, s a ∈ S · X such that there is no s ∈ S with y T(s a) = y T(s) add s a to S fill new rows of T with MQs, add newly observed outputs to Y (S, E, T) is not output-consistent find y ∈ Y, s, s' ∈ S, a ∈ X, e ∈ E such that y T(s) = y T(s') but y T(s a)(e) ≠y T(s' a)(e) add a e to E fill new columns of T with MQs, add newly observed outputs to Y create family of Mealy machines {H^y}_y ∈ Y based on (S, E, T) {H^y}_y ∈ Y is not component-consistent find input w for which zero or multiple components output 1 add suffixes of w to E until the defect is fixed fill new column with MQs, add newly observed outputs to Y create Mealy machine H = Π_y ∈ Y (H^y, y) EQ(H) = w add suffixes of w to E until the defect is fixed fill new column with MQs, add newly observed outputs to Y return H We do not require that the set of outputs Y is known beforehand. We can observe the outputs that are added to the table, and use those as our output alphabet. Since our hypothesis can never return an output that we have not seen previously, the teacher can never return 𝑌𝐸𝑆 for an equivalence query until we have seen all possible outputs of M. There are some optimisations that can be done to reduce the number of queries we need to ask to learn M. We will mention two below. Since there may be multiple output-closedness or output-consistency defects at once, our goal should be to resolve all of them, instead of solving the first defect we find. 
Trying to do this in the fewest number of tries reduces to the hitting set problem, which is NP-complete <cit.>. Therefore, we choose the word that fixes the most defects in one go. This is a known approximation to the hitting set problem which works well in practice <cit.>. It can happen that our table is not only output-closed, but is also regularly closed. In this case, we immediately create the hypothesis and ask an equivalence query. This skips the checks for output-consistency and component-consistency. It should be noted that the table is always regularly consistent, since we never add a row to S that is equivalent to one that is already in S. § EMPIRICAL EVALUATION In this section we present preliminary experiments we carried out to evaluate how well works. The experiments were run using our implementation[The code is available at <https://github.com/SCRK16/OLStar>.] of the algorithm using the LearnLib framework <cit.>. We use two sets of benchmarks. The first are a set of Mealy machines where each machine consists of multiple, randomly generated, smaller machines sharing an input alphabet {a, b}, but all with unique outputs. Special inputs L and R allow for switching between which of the smaller machines is active. We expect to do well on this set of benchmarks, since the machines can be split up into their smaller components. For example, one of the machines has 10,000 states, but the projections only have a combined 95 states. The second set of benchmarks are the ones designed by <cit.>. These benchmarks consist of Mealy machines that are the parallel interleaving of multiple smaller automata. The components have between 2 and 13 states. The interleavings then have between 4 and 4860 states. The interleavings can be decomposed into their components by looking at their inputs. It should be noted that many of the components share outputs. As such, we expect that may not always be able to find an effective decomposition based on outputs. To measure how well the algorithm does, we ran it on these benchmarks and compared it to . For the equivalence queries we used randomised testing based on the -method by <cit.> as implemented in LearnLib. We counted the number of membership queries and testing queries needed to learn the automata, excluding the last equivalence query. We also counted the number of input symbols needed for both the membership queries and testing queries. Both the learning and testing queries had their own cache, so repeated queries are only counted once. We compare the number of symbols required in fig:comparison. More statistics can be found in apd:figures. §.§ Results fig:comparison shows the number of symbols required to learn a model for each instance. The dashed plot shows the line where and pose the same number of queries, for every instance below that line we see that performs better. In the benchmark of <cit.>, does not perform better than , this is to be expected as their components share the same outputs. The reason does worse (as opposed to equal to ) in these cases is because of the number of columns needed to make the table output-consistent. This is especially a problem when the target cannot be split up into smaller components, but the table is initially not closed in the regular sense. In these cases, does a lot of queries that ultimately lead to no benefit. 
When the target can be split up into smaller components, the extra columns needed for output-consistency are often compensated by the smaller number of rows needed to make the table output-closed. § CONCLUSION AND DISCUSSION We have presented the algorithm , which learns a Mealy machine using a decomposition based on outputs in the Minimally Adequate Teacher framework. Although works well in some instances, it is still a work in progress and many improvements are conceivable. The version of we have presented uses {y Y →{0, 1}}_y ∈ Y as the set of jointly injective functions. The generality of def:recomposition and lemma:reconstruction hint at the possibility of using other decompositions in . In fact, itself is an instance of this: it uses a set with a single projection {𝗂𝖽 Y → Y}. Similarly, the work by <cit.> is an instance where a good set of jointly injective functions is already known to the learner. A possible direction is to find the best possible set of jointly injective functions on-the-fly, although it seems that this is a computationally hard problem. Another possibility is to combine decomposition based on outputs with other forms of decomposition to reduce the size of components even further. The approach of <cit.> is dual to our approach, decomposing the machine based on its inputs instead of its outputs. We believe that these types of decomposition are orthogonal and could be used together in a learning algorithm. Finally, we could adjust the Minimally Adequate Teacher framework to allow equivalence queries for individual components. Since the components themselves are smaller, these equivalence queries may be more efficient than equivalence queries for the whole system. § APPENDIX §.§ Extra figures
http://arxiv.org/abs/2405.09497v1
20240515164530
Towards the limits: Sensing Capability Measurement for ISAC Through Channel Encoder
[ "Fei Shang", "Haohua Du", "Panlong Yang", "Xin He", "Wen Ma", "Xiang-Yang Li" ]
cs.IT
[ "cs.IT", "cs.NI", "eess.SP", "math.IT" ]
Interplay between the charge density wave phase and a pseudogap under antiferromagnetic correlations Leonardo Prauchner^1*, Eleonir Calegari^2, Julian Faúndez^1, Sergio Magalhaes^1 May 20, 2024 ==================================================================================================== Integrated Sensing and Communication (ISAC) is gradually becoming a reality due to the significant increase in frequency and bandwidth of next-generation wireless communication technologies. Therefore it becomes crucial to evaluate the communication and sensing performance using appropriate channel models to address resource competition from each other. Existing work only models the sensing capability based on the mutual information between the channel response and the received signal, and its theoretical resolution is difficult to support the high-precision requirements of ISAC for sensing tasks, and may even affect its communication optimal. In this paper, we propose a sensing channel encoder model to measure the sensing capacity with higher resolution by discrete task mutual information. For the first time, derive upper and lower bounds on the sensing accuracy for a given channel. This model not only provides the possibility of optimizing the ISAC systems at a finer granularity and balancing communication and sensing resources, but also provides theoretical explanations for classical intuitive feelings (like more modalities more accuracy) in wireless sensing. Furthermore, we validate the effectiveness of the proposed channel model through real-case studies, including person identification, displacement detection, direction estimation, and device recognition. The evaluation results indicate a Pearson correlation coefficient exceeding 0.9 between our task mutual information and conventional experimental metrics (e.g., accuracy). § INTRODUCTION The recent development of next-generation (5G-Advanced and 6G) communication technology motivates the integrated sensing and communication (ISAC) studies from various perspectives, such as the localization models <cit.>, the power and sub-channel allocation algorithms <cit.>, the dual-functional waveforms design <cit.> and so on. ISAC has been conceptualized, designed, and optimized for making communication and sensing functions complementary to each other. However, the goal of wireless communication is to minimize the impact of channel noise within the Shannon’s limit, while the goal of wireless sensing is to utilize the channel noise and then to identify the entities or the corresponding nature of the channel. The different goals bring the inherent trade-off between communication and sensing performance when integrating them together. To better address such resource competition from each other, it is essential to quantify the ability to communicate and sense under a given channel. The communication aspects can be bounded by Shannon's theorem, but the sensing part lacks unified and efficient theory support. Existing work on sensing theory suffers from deficiencies in different aspects, such as limited applicable tasks <cit.> and incomplete evaluation metrics <cit.>. In the present paper, we will theoretically analyze the sensing capability and measure it based channel encoder model. The core problem of wireless sensing is that of reproducing via the interfered signals at the receiver either exactly or approximately the interfering source, considered as the sensed object. 
As illustrated in Fig <ref>, the ISAC system can be formulated by 𝐘=𝐇𝐗+𝐍, where 𝐘 is the received signal, 𝐗 is the transmitted signal, 𝐇 is the channel status, and 𝐍 is the noise that might be introduced by sensed objects. Intuitively, the system sensing capability can be evaluated by analyzing how the received signals reflect the channel status, such as sensing mutual information I(𝐇;𝐘) <cit.>. But it doesn’t work well for the following two reasons. First, it is difficult to obtain complete information about the signal itself, we can only identify the sensory objects by analyzing several received signal features, such as the time-of-arrival (TOA), angle-of-arrival (AOA) and received signal strength (RSS). The relationship between the sensing capability of such features and the signal itself is ambiguous. For example, when containing the same level of noise, the AoA estimation error is related to the orientation of the antenna array <cit.>, as shown in Fig <ref>. Second, frequently the sensed objects have various types, including moving entities placed in the channel, temperature or humidity fields affecting the channel, etc. The sensing capability analysis must be designed to operate for all possible types, not just the one that will actually be chosen since this is unknown at the time of design. Fortunately, if the number of signal features and object types in the sets is finite, then the monotone function of the mapping between them can be regarded as a measure of the sensing capability when the pair is chosen from the sets. The most natural choice of such a monotone function is still mutual information for various reasons: * It is suitable to the communication theorem and thus easier to integrate. The ultimate goal of sensing capability assessment is to provide an optimization basis for the trade-off between communication and sensing in ISAC systems so that entropy-based methods can achieve calculations more efficiently. * It evaluates the amount of information contained in one random variable about another random variable, which is nearer to our intuitive feeling as to the proper measure for sensing: how much information is obtained in the observed signal features about the sensed object. In this paper, we propose a general sensing channel encoder model to help determine the sensing capability – the upper bound and lower bound of error in restoring the sensed object from given wireless signal features. Main contributions are as following: * We propose a sensing channel encoder model to describe the sensing system, and derive the fundamental limits of specific sensing objects under given signal features, in terms of a performance measure called discrete task mutual information (DTMI). This approach unifies such information from different features in a canonical form as a weighted sum associated with the weights characterizing the information intensity. * Based on DTMI, we first provide upper and lower bounds of sensing errors for ubiquitous sensing systems and give a sufficient condition for lossless sensing. It enhances the interpretability of current sensing systems and can be further used to guide the problem of resource allocation for communication and sensing in ISAC systems. 
* We validate the effectiveness of the proposed sensing system model in several real-world cases, including binary classification tasks such as Wi-Fi-based human identification and RFID-based displacement detection, and multi-classification tasks such as direction sensing based on electromagnetic signals and device identification based on traffic features. The experiment results show that the consistency between our proposed sensing capability evaluation method and the actual task results is up to 0.9 (Pearson correlation coefficient). The rest of the paper is organized as follows. Section <ref> reviews the related work. We introduce the sensing channel encoder model in Section <ref>. In Section <ref>, we give a theoretical explanation for some classical phenomena in sensing systems. Finally, we evaluate its performance in real examples in Section <ref>. § RELATED WORK §.§ Sensing systems based on communication devices. ISAC is widely acknowledged as a pivotal enabler for a myriad of emerging applications, encompassing smart manufacturing, smart homes, and smart cities <cit.>. The deployment of professional sensing equipment on a large scale is often impeded by their substantial size and high costs.In the context of the burgeoning Internet of Things (IoT), a multitude of endpoints, originally intended for communication purposes such as WiFi, speakers and microphones, RFID, among others, have gained prominence due to their abundance and cost-effectiveness in comparison to specialized equipment. Consequently, a growing number of researchers and practitioners are exploring the use of these devices for sensing tasks. These applications range from localization and trajectory tracking to material identification and health monitoring. (1) Localization and trajectory tracking. The proliferation of wireless devices, coupled with the development of wireless network infrastructure, has led to a significant increase in their deployment within both workplaces and homes. Recently, there has been a notable trend towards employing these communication devices for mobile trajectory tracking. Wi-Fi based systems <cit.> use Channel State Information/Received Signal Strength Indicator (CSI/RSSI) for localization <cit.>, gesture tracking <cit.>, gesture recognition <cit.>, etc. Within this context, Widar <cit.> quantifies the relationship between CSI dynamic changes and user location and speed to achieve an average position error of 25cm. RFID-based systems can achieve centimeter-level tracking accuracy using phase-based methods <cit.>. For instance, RF-IDraw <cit.> utilizes interference techniques to measure the relative phase between multiple RFID readers, while Tadar <cit.> achieves through-wall tracking by exploiting multipath signal variations caused by human movement. Tagoram <cit.> uses the concept of "virtual antennas" and phase holography to map measured phases to possible tag locations and calculates moving trajectories through phase changes. However, the positioning accuracy of these works is influenced by many factors, such as the number of antennas and noise levels. There is still a lack of theoretical means to quantify the impact of these factors on the results. (2) Material identification. In contrast to professional equipment, which can cost tens of thousands of dollars, radio frequency signal transceivers are comparatively less expensive and compact. This makes them more feasible for deployment in lightweight sensing scenarios such as homes, or large-scale scenarios like warehouses. 
For instance, we can utilize commercial WiFi signals to detect whether the purchased fruits are ripe <cit.>. In addition, compared with visible light, the frequency of wireless signals is lower, which makes them have better propagation performance in low light or non-line-of-sight environments. For example, for liquids placed in opaque containers, many radio frequency signal-based systems can identify the solution concentration with a granularity of 1%  <cit.>. (3) Health monitoring. Both heartbeat and respiratory behavior produce corresponding body-conducted sounds. Consequently, sound has emerged as a significant modality for the sensing of vital signs. Xiong et al. <cit.> have extended the effective distance of acoustic sensing by utilizing ubiquitous sound waves, achieving accurate personnel tracking, gesture tracking, eye movement tracking, etc. in multiple scenarios. For a long time, auscultation has been an important part of sleep and respiratory related research, so many works use microphones on mobile devices to capture the air-conducted sounds of respiration for snoring detection  <cit.>, sleep apnea detection  <cit.>. Han et al. <cit.> employ in-ear microphones to facilitate sense and user identity authentication via respiratory behavior analysis. Owing to its non-contact characteristic, radio frequency signals have the potential to alleviate pressure on users during monitoring or sensing processes. This has led to a surge in academic interest in this field in recent years. Wang et al. <cit.> first propose the Fresnel zone theory of WiFi signal sensing in free space, theoretically exploring the impact of human breathing depth on the reception of radio frequency signals. In addition, Liu et al. <cit.> explore the feasibility of using RFID tags to achieve non-contact chest displacement. However, unlike the performance of communication systems that can be reasonably assessed using theoretical metrics like channel capacity, the current evaluation of sensing system performance largely relies on experimental approaches. §.§ Performance measurement of the ISAC system. Traditional research often treats “communication" and “sensing" as two distinct systems. However, a growing body of recent studies has demonstrated that these two concepts are intrinsically interconnected in the context of information theory, forming an intriguing “odd couple" <cit.>. In recent years, a significant number of researchers have dedicated their efforts to examining the theoretical performance of systems through the lens of synesthesia. A typical system modeling method is the linear Gaussian model, which is <cit.> 𝐘 = 𝐇𝐗 + 𝐍, where 𝐘 is the received signal, 𝐗 is the transmitted signal, 𝐇 is the channel matrix, and 𝐍 is the noise matrix. From the perspective of communication, the fundamental problem is how to accurately estimate the transmitted signal 𝐗 from the received signal 𝐘. In accordance with Shannon's second law, the ultimate performance of a channel is dictated by its capacity. This capacity is intrinsically linked to the mutual information between 𝐗 and 𝐘, denoted as I(𝐗;𝐘). From the perspective of sensing, the basic problem is to estimate 𝐇. Similarly, researchers utilize the mutual information between 𝐇 and 𝐘 to characterize system performance <cit.>. However, as introduced in Sec. <ref>, we are increasingly unsatisfied with merely sensing the channel response 𝐇 and ubiquitously sensing with communication devices. 
In this case, mutual information I(𝐘;𝐇) cannot fully characterize the sensing performance of the system. For instance, MapFi <cit.> shows that under the same estimation level for 𝐇, the accuracy of localization using angle of arrival varies with different orientations of the antenna array. Moreover, traditional channel theory is often based on Shannon's second law, which assumes that the random variables used for encoding are independent and identically distributed, a condition difficult to meet when conducting ubiquitous sensing. Therefore, we need to construct a new channel model to adapt to the increasingly developed integrated communication and sensing systems. § SENSING CHANNEL ENCODER MODEL Sensing of discrete status finds broad applications in both industrial production and daily life scenarios, encompassing areas such as material identification, image recognition, and human presence detection. In this section, we establish a discrete sensing channel encoder model to analyze the system's sensing capability. Our analysis reveals that, with the status to be sensed being fixed, the DTMI directly dictates the lower and upper bounds of the expected sensing error. Proceeding forward, we first introduce the definitions related to the discrete sensing channel encoder model, followed by an exploitation of DTMI to analyze the lower and upper bounds of the expected sensing error. §.§ Model definitions. A typical sensing process often comprises several components: the target status to be sensed (W), the feature (X^n) designed to sense the status, the sensing channel embedding (Y^n) obtained through the sensing system, and the outcome (Ŵ) derived after processing the signal. We analyze the sensing system as shown in the Fig. <ref>. The status W has m possible values, which together form the set 𝒲={w_1,⋯,w_m}. The probability that the target is in the i-th status is (W=w_i)=p(w_i). To facilitate the sensing of statuses, we construct n-dimensional independent features X^n to represent the status W. Given the status as w_i, the feature X^n(w_i) is given by X^n(w_i)=[X_1(w_i),⋯, X_n(w_i)]. Upon transmission and subsequent data processing, the receiver is likely to receive this feature with a probability denoted as p(y^n|x^n), which we represent as Y^n. Subsequently, the receiver assesses the condition of the sensed target utilizing the acquired features Y^n and decoding rules g. The result is given by Ŵ=g(Y^n). For instance, in a task of material identification using radio frequency (RF) signals, the targets possess varying materials (W). We exploit the characteristic that different materials affect RF signals differently to design feature X^n, which are related to the amplitude of RF signals. Then, using a receiver that captures electromagnetic waves in the space and processes them according to a sensing algorithm, we acquire the sensing channel embedding denoted as Y^n. Finally, based on certain decision rules, we correlate Y^n with the corresponding X^n to ascertain the result Ŵ. To quantify the performance of the sensing system, we initially define the “conditional error probability" and the “expected value of the error". The former represents the probability that the sensed result does not match the actual status w_i given that the target status is w_i, while the latter signifies the expectation of the conditional error probabilities. 
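As a concrete illustration of the pipeline W → X^n → Y^n → Ŵ, the following is a small self-contained simulation sketch (our own illustrative example, not taken from the paper): the m statuses are encoded by random feature vectors, the channel adds Gaussian noise, and a simple nearest-feature rule plays the role of the decoder g. All numbers (m, n, noise level, number of trials) are arbitrary assumptions, and the nearest-feature rule is only a stand-in for the decision rules analysed later in this section.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: m = 4 statuses, each encoded by an n = 8 dimensional
# feature vector X^n(w_i); the channel adds Gaussian noise and the decoder g
# returns the status whose feature vector is closest to the embedding Y^n.
m, n, sigma, trials = 4, 8, 0.7, 20000
features = rng.normal(size=(m, n))      # X^n(w_i), one row per status
p_w = np.full(m, 1.0 / m)               # prior p(w_i)

def decode(y):
    """Stand-in decision rule g: nearest feature in Euclidean distance."""
    return int(np.argmin(np.linalg.norm(features - y, axis=1)))

errors, counts = np.zeros(m), np.zeros(m)
for _ in range(trials):
    w = rng.choice(m, p=p_w)                          # target status W
    y = features[w] + sigma * rng.normal(size=n)      # channel embedding Y^n
    counts[w] += 1
    errors[w] += (decode(y) != w)

xi = errors / counts                  # conditional error probabilities xi_i
P_E = float(p_w @ xi)                 # expected value of the error P_E^n
print("conditional errors:", np.round(xi, 3), " expected error:", round(P_E, 3))
```

Running the sketch gives empirical estimates of the conditional error probabilities and of the expected error that are defined formally in the definitions below.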
Furthermore, we introduce several definitions (Definition <ref>, <ref>, and <ref>) to facilitate our analysis of the upper and lower bounds of the expected error value. The discrete task mutual information (DTMI) is defined as the mutual information between the feature X^n and the channel embedding Y^n, i.e., I(X^n;Y^n). The conditional error probability ξ _i when the target status is w_i is defined as: ξ _i = (Ŵ≠ w_i|W=w_i). The expected value of the error, denoted as P_E^n, is articulated as follows: P_E^n = ∑_i=1^m p(w_i)ξ _i. If a sequence X^n=[X_1,⋯,X_n] of length n, where each dimension is statistically independent of one another, we refer to sequence X^n as an n-dimensional independent sequence. Their joint probability density function is given by: p(x^n)=Π_i=1^n p(x_i). For two n-dimensional independent sequences X^n and Y^n, if the joint distribution of (X^n,Y^n) is given by p(x^n,y^n)=Π_i=1^np(x_i,y_i), we refer to (X^n,Y^n) as a n-dimensional jointly independent sequence. The jointly matching set B_ε^(n) of jointly independent sequence is defined as: B_ε^(n) = {(X^n,Y^n)∈𝒳^n ×(Y)^n: . |-1/nlogp(x^n)-1/n∑_i=1^n H(X_i)|<ε |-1/nlogp(y^n)-1/n∑_i=1^n H(Y_i)|<ε |.-1/nlogp(x^n,y^n)-1/n∑_i=1^n H(X_i,Y_i)|<ε}. where (X^n,Y^n) is the n-dimensional jointly independent sequence. H(X_i), H(Y_i), and H(X_i,Y_i) are the entropy of X_i, Y_i, and (X_i,Y_i), respectively. §.§ Lower bound on expected error. The current evaluation of sensing systems' performance predominantly relies on experimental assessments. While experimental evaluations are highly effective in gauging system performance, conducting rigorous controlled experiments in real-world scenarios is exceedingly challenging. Consequently, in many instances, it is difficult to ascertain whether the failure to achieve the desired accuracy is due to inadequately designed sensing features or simply unforeseen interference during the data acquisition process. In this section, we give a lower bound on the expected error value based on DTMI, which helps us analyze the ultimate performance of the sensing system. For a sensing task W with m statuses, we use n independent features to describe the status of the target. The expected value of the error P_E^n satisfies the following lower bound: P_E^n+ H(P_E^n)/log m≥H(W)-I(X^n;Y^n)/log m, where H(P_E^n)=-P_E^n log P_E^n - (1-P_E^n)log(1-P_E^n). We first prove that the sensing model we defined forms a Markov chain. Then we combine Fano's inequality <cit.> and some properties of Markov chains to give a lower bound for P_E^n. For the sensing model described in Section <ref>, the target status W, the feature X^n, the received channel embedding Y^n, and the sensing result Ŵ form two Markov chains, i.e., W→ X^n→ Y^n →Ŵ and Ŵ→ Y^n→ X^n→ W. For a Markov chain, some simple consequences are as follows <cit.>: * If X→ Y→ Z is a Markov chain, Z,Y,X form a Markov chain, i.e., Z→ Y→ X. * For three random variables X, Y, and Z, if Z=f(Y), then X,Y,Z form a Markov chain, i.e., X→ Y→ Z. According to the deification of sensing model, the feature is a function of the target status, i.e., X^n=f(W); the sensing feature Y^n is a function of the status feature X^n, i.e., Y^n∼ p(y^n|x^n); and the sensing result Ŵ is a function of the sensing feature Y^n, i.e., Ŵ=g(Y^n). Therefore, the target status W, the feature X^n, the channel embedding Y^n, and the sensing result Ŵ form a Markov chain, i.e., W→ X^n→ Y^n →Ŵ. Besides, we have Ŵ→ Y^n→ X^n→ W. 
According to the Fano's inequality <cit.>, if three random variables X,Y,Z form a Markov chain, i.e., X→ Y → Z, we have: (X≠ Z) ≥H(X|Z)-H((X≠ Z))/log(|𝒳|). where H(X|Y) is the conditional entropy of X given Y. For the Markov chain W→ X^n→ Y^n →Ŵ, according to the total probability formula and Ferno's inequality, we have: P_E^n = (Ŵ≠ W) ≥H(W|Ŵ)-H(P_E^n)/log(|𝒲|) = H(W)-I(W;Ŵ)-H(P_E^n)/log m According to the Data-processing inequality <cit.>, if three random variables X, Y, and Z form a Markov chain, X→ Y→ Z, then we have I(X;Z)≤ I(X;Y), where I(X;Y) is the mutual information between X and Y. For the Markov chain W→ X^n→ Y^n →Ŵ, we have I(W;Ŵ)≤ I(W;Y^n). And for the Markov chain Ŵ→ Y^n→ X^n→ W, we have I(Y^n;W)≤ I(Y^n;X^n). As a result, we have: I(W;Ŵ) ≤ I(X^n;Y^n). Substituting Equ. (<ref>) into Equ. (<ref>), we have: P_E^n+H(P_E^n)/log m≥H(W)-I(X^n;Y^n)/log m. §.§ Upper bound on expected error. In communication, Shannon's second theorem <cit.> posits that for a given signal, error-free transmission can always be achieved as long as we employ code words that are sufficiently long to encode the message. This issue is equally pertinent in sensing: when the dimensionality n of the feature is sufficiently large, what is the upper bound on the expected error? In this section, we derive an upper bound based on DTMI (Theorem <ref>) and provide a sufficient condition under which error-free sensing can be attained (Theorem <ref>). For a sensing task with m statuss, we use n independent features to describe the status of the target. For sufficiently large n, the expected value of the error P_E^n satisfies the following upper bound: P_E^n≤ε + ∑_k=1^m p(w_k) ∑_j≠ k^m 2^3nε-∑_i=1^n I(X_i(w_j);Y_i(w_k)) The expected error P_E^n is influenced by the decision rule g, with the maximum likelihood criterion being a commonly employed rule in practical scenarios. However, for the sake of facilitating analysis, we introduce a novel decision rule defined in conjunction with the matching set B_ε^(n) (Definition <ref>), where in the result Ŵ is determined as w_i whenever the channel embedding Y^n and the feature X^n(w_i) corresponding to the message w_i form a jointly matching set. Under this rule, we first estimate the probability of X^n, Y^n constituting a jointly matching set (Lemma <ref> to <ref>) and subsequently present a suboptimal upper bound on the expected error (it is noted that employing alternative decision criteria might yield tighter upper bounds). The decoding rule g. To obtain sensing outcomes from Y^n, we employ the following rule g: * We declare that the target statue is w_i if (X^n(w_i),Y^n) ∈ B_ε^(n) and there is no other status w_j such that (X^n(w_j),Y^n) ∈ B_ε^(n). * If there are multiple statuss w_j such that (X^n(w_j),Y^n) ∈ B_ε^(n) or there is no status w_i such that (X^n(w_i),Y^n) ∈ B_ε^(n), an error is declared. To estimate the probability of an event occurring, we first prove the following lemma about matching sets. For a n-dimensional jointly independent sequence (X^n,Y^n) and a matching set B_ε^(n), when n →∞, the probability that (X^n,Y^n) is in the matching set B_ε^(n) is close to 1, which is ((X^n,Y^n)∈ B_ε^(n)) → 1. According to the Chebyshev's Law of Large Numbers, when the number of observations n is sufficiently large, the sample mean of n independent and identically distributed random variables converges in probability to their common expected value. 
Observing that the entropy is essentially the expectation of the logarithm of the reciprocal of probabilities, we leverage these two premises to underpin our proof. According to Chebyshev's Law of Large Numbers, given ε>0, there exists n_1, so that for all n>n_1, the following holds: P_1 =( |-1/nlog p(X^n)-1/n∑_i=1^nH(X_i) | ≥ε) = ( |1/n∑_i=1^nlog p(X_i)-1/n∑_i=1^n𝔼(log p(X_i)) | ≥ε)<ε/3. Similarly, there exists n_2 and n_3, so that for all n>n_2, the following holds: P_2=( |-1/nlog p(Y^n)-1/n∑_i=1^nH(Y_i) | ≥ε)<ε/3, and for all n>n_3, the following holds: P_3 =( |-1/nlog p(X^n,Y^n)-1/n∑_i=1^nH(X_i,Y_i) | ≥ε) <ε/3. Let n_0=max{n_1,n_2,n_3}, then for all n>n_0, the following holds: ((X^n,Y^n)∈ B_ε^(n)) > 1-(P_1+P_2+P_3) = 1-ε. Going further, we consider the scenario where (X^n, Y^n) forms a jointly independent sequence (Definition <ref>), and we examine the probability of them constituting a joint matching set. Initially, drawing upon Definition <ref>, we estimate the counts of elements in both the matching set and the jointly matching set, which are related to the entropy. Specifically, the number of elements in the matching set for X^n and Y^n are approximately 2^∑_i=1^n H(X_i) and 2^∑_i=1^n H(Y_i), respectively, while the count of their joint matching sequences is roughly 2^∑_i=1^n H(X_i,Y_i). Building on this foundation, Lemma <ref> furnishes an estimate for the probability that (X^n, Y^n) forms a joint matching set. The upper bound of the number of elements in the matching set of jointly independent sequence B_ε^(n) is given by: | B_ε^(n)| ≤ 2^nε+∑_i=1^n H(X_i,Y_i), where H(X_i,Y_i) is the entropy of (X_i,Y_i), and |.| denotes the number of elements in the set. According to the Definition <ref>, if (X^n,Y^n) ∈ B_ε^(n), we have: p(x^n,y^n) ≥ 2^-nε-∑_i=1^n H(X_i,Y_i). As a result, 1 = ∑_(x^n,y^n) ∈𝒳^n ×(Y)^n p(x^n,y^n) ≥∑_(x^n,y^n) ∈ B_ε^(n) p(x^n,y^n) ≥ 2^-nε-∑_i=1^n H(X_i,Y_i)|B_ε^(n)|. Therefore, we have |B_ε^(n)| ≤ 2^nε+∑_i=1^n H(X_i,Y_i). For a n-dimensional jointly independent sequence (X̂^n,Ŷ^n) and a matching set B_ε^(n), if (X̂^n,Ŷ^n)∼ p(x^n)p(y^n), i.e., X̂^n and Ŷ^n are independent with the same marginals as p(x^n,y^n), then ((X̂^n,Ŷ^n)∈ B_ε^(n)) ≤ 2^3n ε-∑_i=1^n I(X_i;Y_i), where I(X_i;Y_i) is the mutual information between X_i and Y_i. According to the definition of the jointly matching set, we have: log (p(x^n)) ≤ nε -∑_i=1^n H(X_i) log (p(y^n)) ≤ nε -∑_i=1^n H(Y_i) The probability of a joint independent sequence (X̂^n,Ŷ^n) in B_ε^n is given by: ((X̂^n,Ŷ^n) ∈ B_ε^(n)) = ∑_(x^n,y^n) ∈ B_ε^(n) p(x^n)p(y^n) = |B_ε^(n)| 2^nε -∑_i=1^n H(X_i) 2^nε -∑_i=1^n H(Y_i) ≤ 2^3nε+∑_i=1^n (H(X_i,Y_i)-H(X_i)-H(Y_i)) = 2^3n ε-∑_i=1^n I(X_i;Y_i). We first estimate the probability that the sensing result Ŵ is wrong when the target status is W=w_i. We can assume without loss of generality that the target status is w_1. We consider the following events: C_i={(X^n(w_i),Y^n(w_1)) ∈ B_ε^(n)}, i∈{1,⋯,m}. where y^n(1) is the received channel embedding when the target status is w_1. Based on the decision rule and Definition <ref>, the conditional error probability at this point is given by: ξ _1 = Pr(C̅_̅1̅⋃_i=2^mC_i) ≤ Pr(C̅_̅1̅)+∑_i=2^mPr(C_i), where C̅_̅1̅ is the complement of C_1. According to Lemma <ref>, we have: (C̅_̅1̅) ≤ε. Besides, for j∈{2,⋯, m}, the feature X^n(w_j) is independent of X^n(w_1), so is X^n(j) and Y^n(w_1). Hence, according to Lemma <ref>, we have: (C_j) ≤ 2^3nε-∑_i=1^n I(X_i(w_j);Y_i(w_1)). Substituting the above results into Eq. 
(<ref>), we have: ξ _1 ≤ε + ∑_j=2^m 2^3nε-∑_i=1^n I(X_i(w_j);Y_i(w_1)). According to Definition <ref>, we have: P_E^n = ∑_k=1^m p(w_k)ξ _k ≤ε + ∑_k=1^m p(w_k) ∑_j≠ k^m 2^3nε-∑_i=1^n I(X_i(w_j);Y_i(w_k)). Finally, Theorem <ref> provides a sufficient condition for error-free sensing, indicating that for achieving error-free sensing, a sufficient number of features with high DTMI must be identified [This requirement diverges from the conclusion in communications, where merely having a sufficient number of codewords is typically sufficient.]. For a sensing task with m=2^nR statuss, we use n independent features to describe the status of the target. For a sufficiently large n, if R satisfies the following equation, R < I(∑_k≠ jX̅^n(w_k)/m-1;Y̅^n(w_j)), where X̅(w_j) and Y̅(w_j) is the mean X^n(w_j) and Y^n(w_j), we have ξ_j → 0. In Theorem <ref>, we derive an upper bound estimate for the expected error P_E^n. Capitalizing on the convexity property of mutual information, we leverage Jensen's inequality to provide a sufficient condition for a tight error estimation. This approach ensures that our estimate effectively captures the inherent relationship between the variables, harnessing the convexity to yield a more robust and accurate analysis of the error's expected magnitude without loss of generality. According to the Jensen's inequality, if f is a convex function and X is a random variable, we have: f(𝔼(X)) ≤𝔼(f(X)). Since the mutual information is a convex function <cit.>, we have: n I(X̅^n;Y̅^n) ≤ n ∑_i=1^n 1/nI(X_i;Y_i), where X̅^n and Y̅^n is the mean of X^n and Y^n. As a result, for a j∈{1,⋯,m, the Equ. (<ref>) can be rewritten as: ξ_j ≤ε + ∑_k≠ j^m 2^3nε-n I(X̅^n(w_k);Y̅^n(w_j)). Since functions 2^x and I(X;Y) are both convex functions, and function 2^x is monotonically increasing, 2^I(X;Y) is also a convex function. According to the Jensen's inequality, we have: (m-1)∑_k ≠ j^m 1/m-12^nI(X̅^n(w_k);Y̅^n(w_j))≥ (m-1)2^nI(∑_j≠ k^m X̅^n(w_k)/m-1;Y̅^n(w_j)). As a result, for m=2^nR and sufficiently large n, if R satisfies the Equ. (<ref>), we have: ξ_j ≤ε + 2^3nε2^n(R-I(∑_j≠ k^m X̅^n(w_k)/m-1;Y̅^n(w_j)))→ 2ε. § COROLLARY Previous excellent sensing systems have summarized many valuable experiences, such as multi-modal systems tend to achieve better sensing performance. However, these experiences currently lack theoretical explainability. In this section, we employ sensing channel encoder model and DTMI as tools to attempt to explain some classic phenomena. §.§ Why do multimodal systems tend to exhibit superior performance? In a communication system, Shannon's second theorem stipulates that the error rate can be reduced to an arbitrary low level, provided that the codewords are sufficiently lengthy. Similarly, many previous research works have shown that using multi-modality for sensing helps achieve better performance, which can be explained by the theorem we proved previously. In this subsection, we will theoretically explain why multi-modal sensing systems are more capable of achieving superior sensing performance based on the DTMI. Fig. <ref> shows a schematic diagram of a multi-modal system. For the target state W, we use n modalities to sense it. The channels of different modalities are directly independent of each other. For example, in order to identify the material of the target, we use three modalities: vision, sound wave, and radio frequency signal for sensing. The transmission of visual signal, sound wave signal, and radio frequency signal is independent of each other. 
According to the Theorem <ref>, when the number of states m remains unchanged, the lower bound of the expected value of the error P_E^n is related to I(X^n;Y^n). Note that both mutual information and conditional mutual information are non-negative. When we add a new mode, we have I(X^n+1;Y^n+1) = I(X^n,X_n+1;Y^n,Y_n+1) = I(X^n;Y^n) + I(X^n;Y_n+1|Y^n) + I(X_n+1;Y^n+1|X^n) ≥ I(X^n;Y^n), where X^n+1=[X_1,X_2,…,X_n,X_n+1] and Y^n+1=[Y_1,Y_2,…,Y_n,Y_n+1]. Therefore, the more modalities we use, the larger the mutual information I(X^n;Y^n), the lower the theoretical lower bound of the expected value of the error. §.§ How do we compare which of two sensing features is better? In the process of designing a sensing system, it is crucial to carefully craft the sensing features. To show that feature X is better than feature X', we usually need to run many micro-benchmarks. While experimental validation is a compelling method of verification, it frequently involves intricate setup procedures and can be time-consuming. Moreover, due to the challenge of deploying tests across a wide range of scenarios, it is often difficult to ascertain whether feature X is truly superior to feature X' or if this conclusion holds only in specific contexts. In this paper, we propose DTMI which can reflect the performance of sensing features to a certain extent. Specifically, we consider two features X and X'. After passing through the sensing channel, their corresponding channel embeddings are Y and Y', respectively. According to Theorem <ref> and Theorem <ref>, both the upper and lower bounds of the expected error are related to the DTMI. If the DTMI I(X;Y)>I(X';Y'), the upper and lower bounds of the expected value of the error P_E will be reduced, which means that it is easier to achieve good performance using X as sensing features. This necessitates alternative approaches, beyond experimental validation, to assess the performance of designed sensing features. §.§ Is data pre-processing a “cure-all" solution? Since data contains a lot of noise and interference, sensing systems usually include a data preprocessing module when they are designed, which is used to improve data quality for subsequent processing. Previous studies have shown that preprocessing can often improve sensing performance. Now our questions are: can we accomplish any sensing task with arbitrary accuracy through sufficiently sophisticatedly designed data preprocessing algorithms? We refine the sensing channel encoder model depicted in Fig. <ref>, and the result is illustrated in Fig. <ref>. Specifically, for the n-dimensional independent features X^n, after transmission through an actual physical channel, we obtain an l-dimensional data D^l at the receiver. For instance, to localize a target using radio frequency (RF) signals, we employ angle of arrival (AoA) as a feature. At the receiver, what we receive is the amplitude and phase of the RF signals, which are D^l. Subsequently, we subject the received data D^l to data preprocessing, yielding a processed data D̂^l. Then we utilize the sensing algorithm to process the data D̂^l to obtain the channel embedding Y^n, and finally use the judgment algorithm to obtain the result Ŵ. In particular, when no data preprocessing is used, it is equivalent to D̂^l=D^l. If the following equation holds, H(W)-I(X^n;D^l) > 1, lossless sensing cannot be achieved simply by improving the effect of data preprocessing. According to the definition Markov chain, the channel shown in Fig. 
<ref> constitutes a Markov chain W→ X^n → D^l →D̂^l → Y^n →Ŵ. Note that “whether the sensing result is correct" is a binary event, so we have H(P_E^n)≤ 1. According to the Theorem <ref> and the Data-Processing Inequality, we have P_E^n ≥H(W)-I(X^n;Y^n)-H(P_E^n)/log m≥H(W)-I(X^n;D^l)-1/log m≥ 0, if Equ. (<ref>) holds. Therefore, lossless sensing cannot be achieved simply by improving the effect of data preprocessing. § CASE STUDY We illustrate the role of system performance evaluation based on sensing channel encoder model and DTMI through several case studies. We begin by examining the application of DTMI in binary classification tasks, using examples of human detection in home settings via WiFi and appliance cabinet door displacement detection in industrial scenarios via RFID. For multi-class classification, we consider two instances: the classic sensing problem in ISAC systems – direction estimation, and device identification based on an open-source traffic dataset. The results demonstrate that across different cases, the Pearson correlation between the trend of DTMI changes and that of accuracy fluctuations exceeds 0.9. Furthermore, DTMI can provide estimates of upper and lower bounds for sensing system errors, which is beneficial for optimizing and balancing ISAC systems. §.§ Binary classification task. (1) Human detection based on WiFi devices. Indoor human detection plays a pivotal role in services such as elderly monitoring. In particular, device-free passive human detection has garnered significant attention in recent years. While methods based on infrared, pressure sensors, and the like have been applied to human detection, they either rely on specialized hardware or come at a higher cost. Moreover, vision-based and infrared-based methods are only effective within line-of-sight (LOS) coverage. Wi-Fi devices, being one of the most widely deployed radio frequency devices, have led to the implementation of numerous radio frequency sensing systems around them. In recent years, with the advancement of wireless sensing technology, Wi-Fi-based approaches have proven to be a promising method for indoor human detection. We deployed an experiment based on Wi-Fi devices in a residential setting and estimated mutual information using numerical methods. The experimental results indicate that DTMI exhibits a similar trend to accuracy. In this case study, their Pearson correlation coefficient exceeds 0.9. The experimental setup is depicted in Fig. <ref>, where we conducted experiments in a 4m×6m office using an ESP32 device as both transmitter and receiver, each equipped with a single antenna. Additionally, a camera was placed within the environment to capture video footage for recording ground truth. The sampling rate of the ESP32 is set to 100Hz. Ten volunteers are invited to participate in the tests. Each data acquisition session lasted 10 minutes: the first 5 minutes ensured the room is empty, followed by 5 minutes with human activity (walking) inside the room. State W has two possible values: “personnel present" and “personnel absent". After obtaining CSI data, we initially sliced the data, then performed data preprocessing to eliminate outliers and apply filtering. Finally, channel embedding Y is extracted from this processed data and compared against empirical thresholds to ascertain the presence or absence of individuals, which is the result Ŵ. The entire data processing procedure is illustrated in Fig. <ref>. 
The coefficient of variation of the k-th subcarrier is δ_Δ T^k=σ_Δ T^k/μ_Δ T^k, where Δ T is the width of the time window, and μ_Δ T^k and σ_Δ T^k are the mean and standard deviation of the k-th subcarrier within the window, respectively. The channel embedding y is then given by y = 1/n∑_i=1^n|δ^i_Δ T/δ^i_Δ T-1|, where n is the number of subcarriers. If y falls within the empirical threshold range, we consider the environment to be “person absent"; otherwise, it is determined to be “person present". The entire data processing workflow is illustrated in Fig. <ref>. Here, the threshold range is [0.935, 1.065]. Figure <ref> shows an example of the channel embedding extraction process. In Fig. <ref>, the blue solid line illustrates the error rate of human detection as the width of the time window Δ T varies. The dashed lines of other colors represent the mutual information I(W;Ŵ) under different numerical estimation algorithms, namely KraskovStogbauerGrassberger1 <cit.>, KraskovStogbauerGrassberger2 <cit.>, GaoKannanOhViswanath <cit.>, and GaoOhViswanath <cit.>. The results demonstrate that the trend of accuracy change is highly consistent with the trend of mutual information change, indicating that in such tasks, DTMI can serve as an additional performance metric, complementing accuracy, to evaluate system performance. (2) RFID-based electrical cabinet door direction monitoring. Ensuring electrical safety is crucial during the manufacturing process. Take the electrical cabinet as an example; if its door is inadvertently opened without timely detection, there are potential safety hazards, including the risk of electrical fires and electric shock. In the field of terminal sensing in power systems, electromagnetic transformer-type sensors have traditionally dominated. In recent years, non-electric quantity sensing technologies such as vibration, stroke, arc light, and spectral sensing have gained widespread application in digital electrical equipment and power systems. However, these sensing technologies frequently depend on specialized sensors with high sensitivity and accuracy. Such sensors are typically burdened with several drawbacks, including complexities in power supply, large size and weight, high energy consumption, vulnerability to electromagnetic interference, difficult installation processes, and high costs. Consequently, they fall short of meeting the requirements for the development of modern smart power equipment. Given the cost-effectiveness and ease of deployment of RFID tags, we have developed an algorithm for monitoring cabinet door status using multiple tags, and we employ the DTMI proposed in this paper to assess the system's performance. We conduct relevant tests in a factory setting. For an industrial metal electrical cabinet (measuring approximately 1m×1m×2m) used in production, our objective is to monitor the status of the cabinet door. The RFID reader is an ImpinJ Speedway R420, and the RFID system operates in the 920MHz∼926MHz band. Two states W are defined: when the door opening angle is less than 5^∘, it is considered “closed"; otherwise, it is deemed “open". We affix several (1 to 3) anti-metal RFID tags onto the cabinet door and position the antenna within the cabinet body. The deployment configuration of the equipment is illustrated in Fig. <ref>.
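As a parenthetical aside, the mutual-information values reported throughout these case studies are obtained with the cited numerical estimators, which are designed for continuous embeddings. For intuition only, the sketch below shows the simplest plug-in estimate of I(W;Ŵ) from paired discrete ground-truth/decision records; the flip probability and sample size are arbitrary choices of ours and do not correspond to the measured data.

```python
import numpy as np

def plug_in_mi(w, w_hat):
    """Plug-in estimate of I(W; What) in bits from paired discrete observations."""
    w, w_hat = np.asarray(w), np.asarray(w_hat)
    states_w, states_h = np.unique(w), np.unique(w_hat)
    joint = np.zeros((len(states_w), len(states_h)))
    for i, a in enumerate(states_w):
        for j, b in enumerate(states_h):
            joint[i, j] = np.mean((w == a) & (w_hat == b))   # empirical joint pmf
    pw, ph = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pw @ ph)[mask])))

# toy ground-truth/decision sequences: 10% of the binary decisions are flipped
rng = np.random.default_rng(5)
w = rng.integers(0, 2, 5000)                        # "person absent" / "person present"
w_hat = np.where(rng.random(5000) < 0.1, 1 - w, w)  # noisy decisions
print(f"I(W;What) ≈ {plug_in_mi(w, w_hat):.3f} bits (upper-bounded by H(W) = 1 bit)")
```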
After collecting the RSSI (Received Signal Strength Indicator) from each tag, we perform differential processing against an initial value, followed by calculating the average of these differential values across multiple tags. If the average differential exceeds an empirically determined threshold (set here as 2.5), we conclude that the sensing result is “open"; otherwise, it is concluded as “closed". The detailed steps of data processing are depicted in Fig. <ref>. The results of the state monitoring are shown in Fig. <ref>. Due to the cabinet being made of metal, the electromagnetic waves suffer from severe multipath interference. Consequently, when only one tag is used, the stability of the data is poor, and the empirical threshold becomes almost unusable after the tag position shifts by just a few centimeters. This issue leads to an identification accuracy of less than 60%. This is well reflected by the mutual information I(W;Y^n) (n=1), which has a small value in this case. Since the spacing of the tags exceeds half a wavelength, their mutual influence is minimal, and thus we can approximately consider the reflection signals from different tags as independent of each other. Consequently, following corollary introduced in Sec. <ref>, as the number of tags increases, so does the mutual information. We employ GaoOhViswanath <cit.> method to estimate the mutual information, and the red line in Fig. <ref> illustrates its trend, which increases with the number of tags. As the mutual information increases, so does the accuracy of state identification. §.§ Multiple classification tasks. (1) Direction estimation based on Music algorithm and electromagnetic signal. Location sensing represents one of the most prevalent and fundamental tasks in the field. A plethora of superior systems have been developed utilizing location sensing. Nevertheless, for an extended period, there has been a dearth of methods other than experimental evaluations to assess the influence of numerous factors, including the distance between the target and both the transmitter and receiver, on localization accuracy. In this case study, we use direction estimation based on the Music algorithm (one of the most popular localization algorithms) <cit.> and electromagnetic signal to show the application of the proposed framework. We consider a two-dimensional direction estimation problem. The basic model setup is shown in the Fig. <ref>. There are P transmitting antennas and the position of the p-th transmitting antenna is denoted as 𝐫_tx_p. The receiver has Q receiving antennas and the position of the q-th receiving antenna is denoted as 𝐫_rx_q. The distance between two adjacent antennas is d_rx and d_tx for the receiver and transmitter, respectively. The distribution of complex permittivity in space is ℰ, and the permittivity at position 𝐫 is ℰ=ϵ(𝐫). For ease of calculation, we set the shape of the target to be a circle with a radius of 2R. We set m states, each state corresponds to a direction interval. The direction is defined as the angle (the X in Fig. <ref>) between the line connecting the center of the target circle and the center of the receiving antenna array and the vertical line of the antenna array. The direction interval is [-π,π], which is evenly divided into m sub-intervals. The scattered signals E_s are calculated using Maxwell's equations and the method of moments <cit.>. After adding Gaussian random noise to E_s, we estimate signal Y using the MUSIC algorithm. 
Finally, we use the maximum likelihood algorithm to determine the direction X corresponding to channel embedding Y, and then output the category to which X belongs as the result Ŵ. We first simulate the effect of the distance between the target and the receiver on the direction estimation accuracy. During the simulation, we set the parameters as follows. The number of states is m=9. The frequency of the electromagnetic signal is 5.0GHz. The distance between the transmitter and the receiver is 8.0m. There are P=1 transmitting antenna and Q=3 receiving antennas. The distance between two adjacent receiving antennas is 0.03m, i.e., d_rx=0.03m. The diameter of the target is 2R=0.2m. The distance between the target and the receiver changes from 0.3m to 5m. The material of the target is water, and the permittivity is given by an empirical formula <cit.>. In order to compute the scattered waves E_s using the method of moments, we discretize the space so that each subunit is a square with a side length of 0.01m. We estimate the mutual information using a numerical algorithm <cit.>. The results are shown in Fig. <ref>. They show that when the target is too close to the receiver, the accuracy of the direction estimation is very poor. We believe this is because phenomena such as diffraction make the ray-tracing model (the basic assumption of the MUSIC algorithm) a poor approximation of the signal propagation <cit.>. When the distance is too large, the accuracy also decreases. We believe this is because the scattered wave signal becomes weaker, resulting in a decrease in angular resolution. In addition, the trend of the accuracy is basically consistent with the trend of the error lower bound given by our DTMI, and their Pearson correlation coefficient exceeds 0.95. (2) Device type identification based on traffic characteristics. Security and privacy issues have always been a hot topic among researchers <cit.>. In recent years, with the development of the Internet of Things (IoT) and WiFi technology, attackers have devised more diverse means to steal private information. For instance, many attackers place concealed cameras and other IoT devices designed to pilfer private information in public environments such as hotels. After acquiring this private information, these devices continuously transmit the data through gateways. To detect illegal devices, Yan et al. <cit.> leveraged the characteristic that different devices generate distinct traffic patterns, using the traffic at the gateway for device type identification. Their research findings indicated a minimum accuracy rate of 99.17% for identifying common devices such as various models of Xiaomi phones, routers, etc. In this paper, based on their open-source code and data, our analysis shows that lossless detection can be achieved when the bit rate satisfies the sufficient condition given in Theorem <ref>. The corresponding sensing channel encoder model is depicted in Fig. <ref>. After preprocessing the traffic data, we employ the methodology put forth by Yan et al. <cit.> to derive a 30-dimensional signal for device type classification. Our dataset encompasses traffic information from eleven distinct device categories, whose precise nomenclature and coding are presented in Table <ref>. Notably, instances where identical device names are associated with multiple codes signify the existence of several units of the same device category.
As an illustration, Type “C" comprises two devices, labeled “C1" and “C2", which denote two separate models of Xiaomi induction stoves. The evaluation procedure uses five-fold cross-validation with a KNN classifier. In each cross-validation iteration, the signals from the training subset are denoted as X^n and those from the testing subset as Y^n, and the GaoOhViswanath algorithm <cit.> is then applied to estimate the mutual information. Fig. <ref> illustrates the results of our calculations. Here, the number of possible states is m=9, and 30-dimensional features are used for device type recognition. In this case, the corresponding sensing bitrate is R = log m/n. We find that the data satisfies the sufficient condition given by Theorem <ref>, so lossless sensing can be achieved in this case. The results of our KNN classification also show that the accuracy of device type recognition is 100%. § CONCLUSION In this paper, we establish a channel model suitable for ubiquitous sensing, where we associate the sensing task with the received channel embedding through discrete task mutual information. Compared to the sensing mutual information in integrated sensing and communication systems, discrete task mutual information can more accurately evaluate the performance of the sensing system. Unlike traditional communication channel models, sensing channels make it difficult to maintain independent and identically distributed characteristics among the different random variables. For discrete task sensing channels, we provide upper and lower bounds for the expected sensing error based on discrete task mutual information, and give a sufficient condition for achieving lossless sensing. We conduct case studies on four common sensing applications based on experimental and simulation data. The results show that discrete task mutual information has a strong similarity with sensing accuracy. This provides a theoretical evaluation method for the performance of integrated sensing and communication systems beyond experimental evaluation.
http://arxiv.org/abs/2405.09778v1
20240516024644
Beam Pattern Modulation Embedded Hybrid Transceiver Optimization for Integrated Sensing and Communication
[ "Boxun Liu", "Shijian Gao", "Zonghui Yang", "Xiang Cheng", "Liuqing Yang" ]
eess.SP
[ "eess.SP" ]
Beam Pattern Modulation Embedded Hybrid Transceiver Optimization for Integrated Sensing and Communication Boxun Liu, Graduate Student Member, IEEE, Shijian Gao, Member, IEEE, Zonghui Yang, Graduate Student Member, IEEE, Xiang Cheng, Fellow, IEEE, Liuqing Yang, Fellow, IEEE Part of this paper has been accepted for presentation at the 2024-Spring IEEE Vehicular Technology Conference (VTC2024-spring, Singapore) <cit.>. May 20, 2024 Integrated sensing and communication (ISAC) emerges as a promising technology for B5G/6G, particularly in the millimeter-wave (mmWave) band. However, the widely utilized hybrid architecture in mmWave systems compromises multiplexing gain due to the constraints of limited radio frequency chains. Moreover, additional sensing functionalities exacerbate the impairment of spectrum efficiency (SE). In this paper, we present an optimized beam pattern modulation-embedded ISAC (BPM-ISAC) transceiver design, which spares one RF chain for sensing and the others for communication. To compensate for the reduced SE, index modulation across communication beams is applied. We formulate an optimization problem aimed at minimizing the mean squared error (MSE) of the sensing beampattern, subject to a symbol MSE constraint. This problem is then solved by sequentially optimizing the analog and digital parts. Both the multi-aperture structure (MAS) and the multi-beam structure (MBS) are considered for the design of the analog part. We conduct theoretical analysis on the asymptotic pairwise error probability (APEP) and the Cramér-Rao bound (CRB) of direction of arrival (DoA) estimation. Numerical simulations validate the overall enhanced ISAC performance over existing alternatives. Integrated sensing and communications (ISAC), millimeter wave, hybrid transceivers, beam pattern modulation § INTRODUCTION Integrated sensing and communications (ISAC) <cit.> is a pivotal technology for B5G/6G, striving for symbiosis and mutual enhancement of communication and sensing by sharing resources such as spectrum, hardware, and energy. Recently, millimeter-wave (mmWave) ISAC has gained substantial attention due to its broader bandwidth, facilitating higher data rates and improved detection accuracy for both communication and sensing. Moreover, sensing and communication share similar channel characteristics and signal processing techniques in the mmWave frequency band <cit.>, further enabling their seamless integration. Transceiver design is vital for mmWave ISAC systems, aiming to realize better performance trade-offs between communication and sensing. A large proportion of ISAC transceivers <cit.> rely on fully digital (FD) architectures, making them impractical to deploy in mmWave ISAC massive MIMO systems due to high hardware costs and power consumption. To address this issue, some studies have explored low-cost hybrid architectures for mmWave ISAC transceiver design <cit.>, where the number of RF chains is fewer than the number of antennas.
In <cit.>, a fully-connected hybrid transceiver design was proposed for the single-user mmWave ISAC scenario by approximating the optimal communication and radar precoder. To further lower the hardware cost, the partially-connected hybrid transceiver architecture <cit.> has been adopted for enabling multi-user ISAC, which minimizes the Cramér-Rao bound (CRB) for direction of arrival (DoA) estimation under communication constraints. However, the spectral efficiency (SE) of hybrid ISAC systems is impaired due to two factors. On the one hand, the restricted number of RF chains damages the potential multiplexing gain (MG). On the other hand, additional sensing functions will consume system resources, inevitably causing a further decrease in SE. To achieve higher SE, index modulation (IM) has emerged as a promising technology for delivering additional information by selectively activating the state of certain resource domains <cit.>, such as antennas and subcarriers. Recently, some sensing-centric ISAC transceiver designs have been proposed in conjunction with IM to improve SE <cit.>. In <cit.>, a multi-carrier agile joint radar communication (MAJoRCom) system was proposed based on carrier agile phased array radar (CAESAR), where communication bits are transmitted through selective activation of radar waveforms on subcarriers and antennas. Furthermore, a hybrid index modulation (HIM) scheme was proposed <cit.> for frequency hopping MIMO radar communications system, where communication bits are transmitted through index modulation on entwined frequency, phase, and antenna tuples. While radar functionality remains unaffected in <cit.>, it results in a significantly low communication rate. In <cit.>, a spatial modulation-based communication-radar (SpaCoR) system was proposed, where individual sensing and communication waveforms are transmitted on different antennas, and generalized spatial modulation (GSM) is adopted to embed additional data bits through antenna selection. Nevertheless, the data rates are limited by the radar pulse period. In addition, these designs rely on antenna activation-based index modulation and are exclusively designed for FD architecture, limiting direct application to hybrid systems. To better cope with the hybrid structures, generalized beamspace modulation (GBM) was proposed in <cit.>, which utilizes the unique sparsity of mmWave beamspace channel to elevate SE by implementing IM over beamspace. However, it is designed for mmWave communication-only systems without sensing capabilities. Built upon GBM, recent works <cit.> introduce similar IM schemes into mmWave ISAC systems to attain higher SE. In <cit.>, the dual-functional beam pattern is selectively activated in a non-uniform manner for a higher SE. Nonetheless, the additional sensing capability of beam patterns inevitably compromises the communication performance. In <cit.>, the ISAC transmitter selectively activates partial spatial paths for communication and employs a single fixed beam for sensing, termed SPIM-ISAC. The subsequent work <cit.> delves further into the consideration of the beam squint effect in the terahertz frequency band. In fact, SPIM is a special case of GBM, which constructs beamspace through fixed strongest channel paths without beam optimization. The design of separate communication and sensing beams enhances communication SE while ensuring sensing performance. 
However, as it extends to multi-angle scanning and non-line-of-sight (NLoS) scenarios, sensing beams introduce potential disturbance to communication receivers due to the randomness of targets' angles, thereby deteriorating the error performance. Additionally, SPIM-ISAC achieves a performance trade-off solely through power allocation between optimal communications-only and sensing-only beamformers, lacking a comprehensive consideration of overall performance. Moreover, the assumption of infinite precision for phase shifters is impractical to achieve. Considering the limitations highlighted in the previously mentioned works, we have developed a communication-centric mmWave ISAC transceiver design, where one dedicated RF chain is reserved for sensing. To address the decrease in SE resulting from the reduced number of RF chains allocated for communication, we have extended GBM to beam pattern modulation (BPM) for communication beams. Compared to GBM based on the ideal beamspace domain and SPIM based on the channel path domain, BPM considers a more generalized beam pattern concept, where each beam pattern is formed by the corresponding column of analog precoders. Nevertheless, more flexible beam pattern and additional sensing requirements increase the complexity of the transceiver optimization. In light of the sensing interference on the communication receiver, we formulate a joint optimization problem to minimize the sensing beampattern mean squared error (MSE) under the symbol MSE constraint. We solve it by optimizing analog and digital parts sequentially, where both the multi-aperture structure (MAS) and the multi-beam structure (MBS) are considered for analog part optimization. For MBS, a low-complexity 2-step beam selection algorithm based on the min-MSE criterion is proposed. For MAS, we adopt the branch and bound algorithm for analog sensing precoder design and the entry-wise iteration algorithm for analog communication parts design. With the fixed analog part, the digital part is optimized using the proposed alternating optimization algorithm for improved power allocation. Moreover, the communication and sensing performance are theoretically analyzed. The contributions of our work can be summarized as follows. * We propose a beam pattern modulation-embedded hybrid transceiver design for mmWave ISAC systems (BPM-ISAC), where the ISAC transmitter provides multi beams for single-user communication and scanning beams for sensing, respectively. The communication beams are selectively activated to enhance SE. * We formulate an optimization problem to minimize sensing beampattern with symbol MSE constraint and solve it by optimizing analog and digital parts in turn. Two typical hybrid structures, namely MBS and MAS, are considered for the analog part design. * We conduct a theoretical analysis of the complexity and convergence of the proposed algorithm. The asymptotic pairwise error probability (APEP) and the CRB are derived to illustrate the bit error and DoA estimation performance. Additionally, simulation results validate the proposed method's advantages in ISAC. The rest of this paper is organized as follows: Section 2 introduces the system and signal model of the proposed BPM-ISAC system. Section 3 formulates an optimization problem and Section 4 proposes a joint hybrid transceiver design to solve it. Then Section 5 gives the performance analysis and Section 6 provides numerical simulations. Finally, Section 7 concludes this paper. 
Notation: (·)^ T, (·)^ H, (·)^†, ‖·‖_2, and ‖·‖_F denote the transpose, the conjugate transpose, pseudo-inverse, 2 norm and Frobenius norm, respectively. a[i] is the ith element of a vector a and A[i,j] denotes the element of matrix A at the ith row and the jth column. ȧ(ψ)= ∂a(ψ) /∂ψ means the derivative of vector a over ψ. 𝒞𝒩(m,σ^2) represents the complex Gaussian distribution whose mean is m and covariance is σ^2. F_N denotes the Ndimensional discrete Fourier transform (DFT) matrix. I_K is the K× K identity matrix and 1_K denotes the K× 1 all-one column vector. diag(a) means diagonal matrix formed from vector a and 𝔼(·) means expectation operation. ℝ and ℂ denote the set of real numbers and complex numbers, respectively. § SYSTEM MODEL As shown in Fig. <ref>, we consider a mmWave ISAC system for point-to-point communication and multi-target detection, which consists of an ISAC base station (BS) and a communication receiver. The ISAC BS comprises an ISAC transmitter and an echo receiver for simultaneous communication and monostatic sensing. In our modeling, the communication receiver and sensing targets are assumed to be spatially distinct. Both the ISAC transmitter, the echo receiver, and the communication receiver adopt fully-connected hybrid architecture, equipped with N_ t, N_ e, and N_ r half-wavelength spaced uniform linear antenna array, respectively. §.§ Beam Pattern Modulation for ISAC At the ISAC transmitter, K communication beams and W sensing scanning beams are generated through corresponding digital and analog precoders, i.e., s=F_ CP_ Cx̅_ C+F_ SP_ Sx̅_ S, where x̅_ C∈ℂ^K × 1 and x̅_ S∈ℂ^W × 1 are the mapped communication and sensing symbols, respectively. F_ C∈ℂ^N_ t× K and F_ S∈ℂ^N_ t× W are analog precoders for communication and sensing, respectively. P_ C=diag(p)∈ℝ^K × K and P_ S=diag(b)∈ℝ^W × W are the corresponding digital precoders, where each element of p and b represents the power allocation on the associated beam. §.§.§ Communication For communication, IM is implemented on the beam pattern domain. Specifically, in each symbol period, only N_ C out of K communication beams are activated. To realize the selective activation, N_ C-dimensional non-zero data stream x_ C∈ℂ^N_ C× 1 is mapped to K-dimensional zero-containing x̅_ C with totally C_K^N_ C possible index patterns. Similar to <cit.>, 2^⌊ log_2 C_K^N_ C⌋ of these patterns are utilized to transmit additional ⌊ log_2 C_K^N_ C⌋ index bits. Suppose M-ary phase shift keying/quadrature amplitude modulation (PSK/QAM) is adopted for communication, and the achievable SE is given by η=N_ Clog_2M+⌊ log_2 C_K^N_ C⌋ bps/Hz. §.§.§ Sensing For sensing, to save RF chains, each of the W beams is sequentially activated to scan W directions of interest. Therefore, sensing signal x_ S is mapped to a W-dimensional one-hot vector x̅_ S∈ℂ^W × 1 before transmission. For more flexible sensing, the case of non-equal probability scanning is considered. Denote the activation probability matrix as D=diag([d_1,...,d_W]), where d_i represents the predefined activation probability of the ith sensing beam and satisfies ∑_i=1^Wd_i=1. For the hardware implementation, as shown in Fig. <ref>, only N_ C and one RF chains are deployed at the ISAC transmitter for communication and sensing, respectively. The switching network is controlled to adjust the RF chain connection for non-zeros symbols transmission. In addition, we assume K and W RF chains are employed for the communication receiver and echo receiver, respectively. 
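To make the beam pattern modulation mapping concrete, the following sketch (our own illustration; the pattern ordering, bit-to-pattern mapping, and the random placeholder precoders are assumptions, not the optimized design derived later) assembles one transmit vector s=F_CP_Cx̄_C+F_SP_Sx̄_S and evaluates the SE expression above.

```python
import numpy as np
from itertools import combinations
from math import comb, floor, log2

rng = np.random.default_rng(1)
N_t, K, N_C, W, M = 32, 4, 3, 3, 4                 # antennas, comm beams, active beams, sensing beams, 4-QAM

# spectral efficiency of the communication part: symbol bits plus index bits
eta = N_C * log2(M) + floor(log2(comb(K, N_C)))
print(f"SE = {eta:.0f} bps/Hz")                    # 3*2 + floor(log2(4)) = 8

# index bits select one of 2^floor(log2 C(K,N_C)) activation patterns (ordering is our choice)
patterns = list(combinations(range(K), N_C))[: 2 ** floor(log2(comb(K, N_C)))]
idx_bits = rng.integers(0, 2, floor(log2(comb(K, N_C))))
pattern = patterns[int("".join(map(str, idx_bits)), 2)]

# N_C 4-QAM symbols placed on the active beams, zeros elsewhere; one-hot sensing symbol
qam = (rng.choice([-1, 1], N_C) + 1j * rng.choice([-1, 1], N_C)) / np.sqrt(2)
x_c = np.zeros(K, complex); x_c[list(pattern)] = qam
x_s = np.zeros(W, complex); x_s[rng.integers(W)] = 1.0

# placeholder analog precoders (unit-modulus random) and uniform digital power allocation
F_C = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_t, K))) / np.sqrt(N_t)
F_S = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_t, W))) / np.sqrt(N_t)
P_C, P_S = np.eye(K), np.eye(W)

s = F_C @ P_C @ x_c + F_S @ P_S @ x_s              # transmitted signal of the model above
print(s.shape)                                      # (32,)
```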
§.§ I-O Relationship of Communication The classical Saleh-Valenzuela mmWave channel <cit.> with P dominant paths is adopted as H=√(N_ tN_ r/P)∑_i = 1^Pα_ia_N_ r(θ_i)a_N_ t^ H(ϕ_i), where α_i is the complex gain of the ith path. a_N_ r(θ_i) and a_N_ t(ϕ_i) are the channel steering vectors of the ith path, where a_N(θ)[i]=1/√(N_ t)e^-jπ(i-1) sin(θ). We assume that H is available at the transmitter. In the time division duplex (TDD) system, this can be achieved via advanced channel estimation schemes <cit.> in the uplink, while in the frequency division duplex (FDD), this can resort to downlink estimation accompanied by dedicated feedback strategies <cit.>. The received signal processed by analog combiner W_ RF∈ℂ^N_ r× K becomes y_C = W^ H_ RFHF_ CP_ Cx̅_ C+W^ H_ RFHF_ SP_ Sx̅_ S+ξ_ C = H_ CP_ Cx̅_ C+H_ SP_ Sx̅_ S+ξ_ C, where H_ C and H_ S is the equivalent digital channel (EDC) for communication and sensing, and ξ_ C∼𝒞𝒩(0,σ^2 I_K) is additive white Gaussian noise (AWGN). It is noteworthy that the communication received signal is disturbed by sensing signal and noise. There exists a trade-off between sensing and communication, wherein higher sensing power will increase the symbol error rate. To eliminate the sensing interference, one possible method is to estimate the sensing signal and then subtract it at the communication receiver <cit.>. However, this scheme assumes that the instantaneous sensing signal is known to the user, limiting the freedom degree of sensing waveform and increasing the operational complexity. In this paper, we suppose that only second-order statistics of the sensing signal are known at the communication receiver. Therefore, the well-known LMMSE equalizer is adopted as W_ BB= R_x̅_ CP_ CH_ C^ H(H_ CP_ CR_x̅_ CP_ CH_ C^ H+. .H_ SP_ SR_x̅_ SP_ SH_ S^ H+σ^2I_K)^-1, where R_x̅_ C=𝔼[x̅_ Cx̅_ C^ H]=N_ C/KI_K and R_x̅_ S=𝔼[x̅_ Sx̅_ S^ H]=D. Then the symbol x̅_ C is estimated as x̃_C=W_ BB(H_ CP_ Cx̅_ C+H_ SP_ Sx̅_ S+ξ_ C). The information bits contained in x_ C and index bits can be estimated by the maximum likelihood (ML) detector or 2-step quantization detector <cit.>, so the details are omitted here. §.§ I-O Relationship of Sensing Suppose there are N pointed targets with ith one locating at angles ψ_i. The ISAC transmitter sequentially transmits W sensing beams to cover the range of interest. Then the echoes are received to estimate the target parameters. Considering quasi-static sensing processes, it is equivalent to simultaneously scanning and receiving echoes from various directions. We assume that the self-interference on the echo receiver from the transmitter can be effectively mitigated <cit.>. During per scanning, the received echo signal is approximated as y_ R =∑_i=1^Nβ_i a_N_ e(ψ_i)a_N_ t^ H(ψ_i)(F_ SP_ Sx̅_ S+F_ CP_ Cx̅_ C)+ξ_ R (a)≃∑_i=1^Nβ_ia_N_ e(ψ_i)a_N_ t^ H(ψ_i)F_ SP_ Sx̅_ S+ξ_ R, where β_i denotes the ith reflection coefficient of the target and ξ_ R∈ℂ^N_ e× 1 is the additive white Gaussian noise. (a) is for that the communication beam will be selected to stay away from the target direction to avoid sensing interference, and a_N_ t^ H(ψ_i)F_ CP_ Cx̅_ C≃ 0 is satisfied. For cases where the sensing targets are close to the communication receiver, sensing can be achieved simply using the communication beam without additional optimization, which is not within the scope of our study. Denote Ξ=diag(β_1, ..., β_N) and A_M=[a_M(ψ_1),...,a_M(ψ_N)], and we can obtain the compact form as y_ R≃A_N_ eΞA_N_ t^ HF_ SP_ Sx̅_ S+ξ_ R. 
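As a quick numerical illustration of the compact echo model just derived, the sketch below generates one received echo snapshot for two targets. The matched-steering sensing beams, the equal power split, and the noise level are our assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)
N_t = N_e = 32; W = 3
psi = np.deg2rad([39.0, 43.0])                        # target angles
beta = np.ones(len(psi), complex)                     # reflection coefficients (|beta| = 1 assumed)
theta = np.deg2rad([38.0, 44.0, 50.0])                # scanning directions of interest

def steer(N, th):
    # half-wavelength ULA steering vector a_N(th)
    return np.exp(-1j * np.pi * np.arange(N) * np.sin(th)) / np.sqrt(N)

A_t = np.column_stack([steer(N_t, p) for p in psi])   # transmit steering matrix A_{N_t}
A_e = np.column_stack([steer(N_e, p) for p in psi])   # receive steering matrix A_{N_e}
Xi = np.diag(beta)

F_S = np.column_stack([steer(N_t, th) for th in theta])   # one matched beam per scan direction (assumed)
P_S = np.sqrt(5.0 / W) * np.eye(W)                        # T_R = 5 split evenly across beams (assumed)

x_s = np.zeros(W, complex); x_s[0] = 1.0              # one-hot sensing symbol: first scan direction active
noise = np.sqrt(0.05 / 2) * (rng.standard_normal(N_e) + 1j * rng.standard_normal(N_e))

y_R = A_e @ Xi @ A_t.conj().T @ F_S @ P_S @ x_s + noise   # compact echo model above
print(np.linalg.norm(y_R))
```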
For simplicity, we assume that N_ t=N_ r and denote A=A_N_ t=A_N_ e. Since the direction of departure (DoD) and DoA of the target are the same, the sensing analog combiner W_ E can be implemented as F_ S. Then the sensing baseband received signal is derived as y_ B =F_ S^ HAΞA^ HF_ SP_ Sx̅_ S+F_ S^ Hξ_ R =T_ B^ HΞT_ BP_ Sx̅_ S+ξ_ B, where T_ B=A^ HF_ S and ξ_ B∼𝒞𝒩(0,R_ B). The target parameters can be estimated with existing algorithms <cit.>. § PROBLEM FORMULATION FOR BPM-ISAC In this section, we establish the performance criterion of communication and sensing, and formulate a joint optimization problem to achieve the desired sensing beampattern with reliable communication via optimizing the hybrid transceivers. §.§ Communication Performance Criterion The communication performance includes effectiveness and reliability, which are characterized by SE and symbol error, respectively. Since the SE of the proposed method is determined as Eq. (<ref>), we resort to the symbol MSE under fixed SE as the performance metric to characterize the transmission reliability. According to Eq. (<ref>), the symbol MSE of x̅_ C, 𝔼(‖x̃_C-x̅_ C‖_2^2), is derived as MSE_C(H_ C,H_ S,P_ C,P_ S) = N_ C/K‖W_ BBH_ CP_ C-I_K‖_F^2 +‖D^1/2W_ BBH_ SP_ S‖_F^2 +σ^2‖W_ BB‖_F^2. As can be seen, the MSE stems from three factors: symbol estimation residual, sensing interference, and noise. Since the sensing interference intensity may vary significantly with different channel realizations, it is difficult to determine a fixed MSE threshold remaining proper under all channel conditions. Therefore, we introduce a relative MSE threshold related to H_ C and H_ S as follows: Assuming no processing is applied on the digital part, i.e., P_ S=diag(t) and P_ C=I_K, the corresponding digital combiner becomes W_ BB,0=N_ C/KH_ C^ H (N_ C/KH_ CH_ C^ H.+ H_ S(diag(t))^2DH_ S^ H+σ^2I_K.)^-1. The relative symbol MSE threshold is defined as Γ(H_ C,H_ S,μ)= N_ C/K‖W_ BB,0H_ C-I_K‖_F^2+ μ‖D^1/2W_ BB,0 H_ Sdiag(t)‖_F^2+σ^2‖W_ BB,0‖_F^2, where 0 ≤μ≤ 1 is the weighting coefficient, signifying the relative tolerance for sensing interference cancellation errors. As μ increases, a lighter emphasis is placed on the communication side, resulting in improved sensing performance. §.§ Sensing Performance Criterion For accurate parameter estimation, radiating sufficient energy in the directions of interest is crucial. Hence, the beampattern is adopted as the sensing performance metric, measuring the sensing power distribution in different directions. Denote θ_t as the tth direction of interest, then the actual sensing beampattern is defined as v=[| b_1 a_N_ t^ H(θ_1)F_ S[:,1]|,..,| b_W a_N_ t^ H(θ_W)F_ S[:,W]|]^T, where b_i is the ith element of b and represents the allocated power on ith sensing beam. To augment sensing performance, the actual sensing beampattern needs to maximally match the ideal beampattern. We predefine the ideal beampattern as t∈ℝ^W×1, which satisfies the power constraint ‖D^1/2t‖_2^2=∑_i=1^Wd_it_i^2=T_ R, where t_i is the ith element of t and T_ R is the average sensing power. The sensing beampattern MSE <cit.> is adopted as the criterion of sensing performance, which measures the discrepancy between sensing beampattern v and ideal beampattern t. Considering the activation probability of each beam, it is derived as MSE_S(P_ S,F_ S)=‖D^1/2(v-t)‖_2^2=∑_i=1^Wd_i(v_i-t_i)^2. 
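The two criteria above can be evaluated directly for any candidate transceiver. The sketch below (our own illustration; the random unit-modulus analog parts and unoptimized digital parts are placeholders) computes the symbol MSE and the sensing beampattern MSE for one channel realization. In the interference term we apply the activation-probability weighting D^1/2 on the right of W_BBH_SP_S so that the Frobenius norm reproduces the expectation over the one-hot sensing symbol and the dimensions conform.

```python
import numpy as np

rng = np.random.default_rng(7)
N_t = N_r = 32; K, N_C, W, P = 4, 3, 3, 8
sigma2, T_R = 0.1, 5.0
theta = np.deg2rad([38.0, 44.0, 50.0])

def steer(N, th):
    return np.exp(-1j * np.pi * np.arange(N) * np.sin(th)) / np.sqrt(N)

# Saleh-Valenzuela channel with P paths
alpha = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2)
aod, aoa = rng.uniform(-np.pi / 2, np.pi / 2, (2, P))
H = np.sqrt(N_t * N_r / P) * sum(alpha[p] * np.outer(steer(N_r, aoa[p]), steer(N_t, aod[p]).conj())
                                 for p in range(P))

# placeholder analog parts and unoptimized digital parts P_C = I, P_S = diag(t)
F_C = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_t, K))) / np.sqrt(N_t)
W_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_r, K))) / np.sqrt(N_r)
F_S = np.column_stack([steer(N_t, th) for th in theta])
t = np.sqrt(T_R) * np.ones(W); b = t.copy()
P_C, P_S, D = np.eye(K), np.diag(b), np.eye(W) / W

H_C, H_S = W_RF.conj().T @ H @ F_C, W_RF.conj().T @ H @ F_S   # equivalent digital channels

# LMMSE combiner and the symbol MSE criterion
R_xc = (N_C / K) * np.eye(K)
S = H_C @ P_C @ R_xc @ P_C @ H_C.conj().T + H_S @ P_S @ D @ P_S @ H_S.conj().T + sigma2 * np.eye(K)
W_BB = R_xc @ P_C @ H_C.conj().T @ np.linalg.inv(S)
mse_c = (N_C / K) * np.linalg.norm(W_BB @ H_C @ P_C - np.eye(K), 'fro') ** 2 \
        + np.linalg.norm(W_BB @ H_S @ P_S @ np.sqrt(D), 'fro') ** 2 \
        + sigma2 * np.linalg.norm(W_BB, 'fro') ** 2

# beampattern and beampattern MSE of the sensing criterion
v = np.array([abs(b[l] * steer(N_t, theta[l]).conj() @ F_S[:, l]) for l in range(W)])
mse_s = float(np.sum(np.diag(D) * (v - t) ** 2))
print(f"symbol MSE = {mse_c:.3f}, beampattern MSE = {mse_s:.3e}")
```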
§.§ Problem Formulation To minimize sensing beampattern MSE while adhering to constraints on symbol MSE, transmit power, as well as the analog precoder, a joint optimization problem of hybrid transceivers is formulated as min_P_ C,P_ S F_ C,F_ S,W_ RF MSE_S(P_ S,F_ S) s.t. MSE_C(H_ C,H_ S,P_ C,P_ S) ≤Γ(H_ C,H_ S,μ), ‖P_ C‖_F^2≤ K, ‖D^1/2b‖_F^2≤ T_ R, F_ C∈ℱ, F_ S∈ℱ, W_ RF∈𝒲. The expressions of χ and Γ(μ) are provided in Eqs. (<ref>) and (<ref>), respectively. (<ref>) represents MSE constraint on communication symbol estimation, (<ref>) corresponds to the communication power constraint, and (<ref>) denotes the average sensing power constraint. In Eqs. (<ref>)-(<ref>), ℱ and 𝒲 represent the feasible analog precoder sets, corresponding to two representative analog configurations, i.e., MBS and MAS. For MBS, lens array antennas are employed and each column of analog precoders is selected from N-dimensional DFT codebook ℱ_N={F_N[:,1],...,F_N[:,N]}, where N takes on N_ t or N_ r. For MAS, the fully connected B-bit phase shifter network is adopted, and the adjustable angles of each element of analog precoders are selected from ℬ={0,2π/2^B,...,2π(2^B-1)/2^B}. For convenience, the notations of all optimized variables are summarized in Table <ref>. § HYBRID TRANSCEIVER DESIGN FOR BPM-ISAC In this section, we propose an efficient transceiver design to address the optimization problem formulated above. Due to the coupling of all five variables in the non-convex constraint (<ref>) and the discrete nature of the feasible analog precoder set, the original problem is a complex mixed-integer non-convex large-scale combinatorial optimization problem that is difficult to solve. As a result, we address it by optimizing analog and digital parts sequentially. Additionally, a beamspace MUSIC-based DoA estimation algorithm is introduced. §.§ Analog-part Optimization for BPM-ISAC Considering that the analog part forms the EDC and plays a fundamental role in digital part design <cit.>, we first optimize the analog part with the unoptimized digital part. Then the symbol MSE is converted to the function of H_ S and H_ C, i.e., MSE_ C(H_ C,H_ S)= N_ C/K‖W_ BB,0H_ C-I_K‖_F^2+ ‖D^1/2W_ BB,0 H_ Sdiag(t)‖_F^2+σ^2‖W_ BB,0‖_F^2. It is worth noting that MSE_ C(H_ C,H_ S) is the upper bound of the relative symbol MSE threshold, which measures the communication performance of EDC. Firstly, the analog sensing precoder F_ S is optimized to point in the direction of interest by solving the following problem. [box=]align 𝒫.11:  Optimization for analog sensing precoder min_F_S MSE_S(F_S) s.t. F_S∈ℱ. Then F_ C and W_ RF are jointly optimized with fixed F_ S to minimize symbol MSE as follows: [box=]align 𝒫.12:  Optimization for communication's analog part min_F_C,W_RFMSE_C(H_C,H_S) s.t. F_C∈ℱ, W_RF∈𝒲. Below, 𝒫.11 and 𝒫.12 are solved with the configuration of MBS and MAS, respectively. §.§.§ Analog-part for MBS Firstly, the following proposition illustrates that 𝒫.11 can be transformed into a series of parallel optimization problems. Solving 𝒫.11 is equivalent to optimizing each column of F_ S individually as follows: max_F_ S[:,l]|a_N_ t^ H(θ_l)F_ S[:,l]| s.t. F_ S[:,l]∈ℱ_N_ t. With unoptimized digital part, i.e., P_ S=diag(t), the objective function can be rewritten as MSE_S=∑_l=1^Wd_lt_l(|a_N_ t^ H(θ_l)F_ S[:,l]|-1)^2. Considering that 0≤|a_N_ t^ H(θ_l)F_ S[:,l]|≤ 1, minimizing Eq. (<ref>) is equivalent to maximizing |a_N_ t^ H(θ_l)F_ S[:,l]| parallelly. 
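A minimal sketch of this per-column search over the DFT codebook (our own illustration, using the scanning directions adopted later in the simulations) is given below.

```python
import numpy as np

N_t, W = 32, 3
theta = np.deg2rad([38.0, 44.0, 50.0])

def steer(N, th):
    return np.exp(-1j * np.pi * np.arange(N) * np.sin(th)) / np.sqrt(N)

F_DFT = np.fft.fft(np.eye(N_t)) / np.sqrt(N_t)      # columns form the DFT codebook F_{N_t}

# per-column exhaustive search: pick the codeword with the largest gain toward theta_l
F_S = np.empty((N_t, W), complex)
for l, th in enumerate(theta):
    gains = np.abs(steer(N_t, th).conj() @ F_DFT)   # |a^H(theta_l) f| for every codeword f
    F_S[:, l] = F_DFT[:, np.argmax(gains)]
    print(f"direction {np.rad2deg(th):.0f} deg -> codeword {np.argmax(gains)}, gain {gains.max():.2f}")
```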
The optimal solution is obtained by exhaustive search and the computational complexity is 𝒪(WN_ t), which is obviously acceptable. Considering the sensing beam set as Ω={F_ S[:,1],...,F_ S[:,W]}, communication transmitting beams should be selected from set difference {ℱ_N_ t\Ω} to avoid sensing interference on the communication receiver. For 𝒫.12, the optimization problem is reformulated as min_F_ C,W_ RF MSE_ C(H_ C,H_ S) s.t. F_ C[:,i]∈{ℱ_N_ t\Ω}, W_ RF[:,i]∈ℱ_N_ r. Similar to <cit.>, H_ C and H_ S are the submatrices of beamspace channel H̅=F_N_ r^HHF_N_ t. Specifically, the indices of the selected sub-columns and sub-rows indicate the selected DFT codewords for transmitted and received beams respectively. Then the above problem can be regarded as determining a K× K submatrix of H̅. Considering the exponential time complexity of the exhaustive search, we proposed a two-stage alternative to lower complexity. Firstly, a set of L largest elements in H̅ are selected as candidate beam pairs. Secondly, the final columns and rows are chosen within the scope of these L candidate beam pairs using the min-MSE criterion. Then the overall computational complexity is reduced from 𝒪(C_N_ r^K C_N_ t-W^K) to 𝒪(N_r^2(N_t-W)^2+C_L^K). §.§.§ Analog-part for MAS Compared to MBS, MAS has higher degrees of freedom for more flexible beam patterns and better EDC. Below, we resort to the MAS with B-bit PSs for analog part design. In this case, 𝒫.11 and 𝒫.12 are integer programming problems, and the optimal solution can be solved by brutal search. However, it is impractical due to the exponential growth of time complexity with both the number of bits and antennas. Thus low-complexity methods are proposed for these two problems. For 𝒫.11, according to proposition <ref>, it is equivalent to solving the following problems parallelly. max_F_ S[:,l]|a_N_ t^ H(θ_l)F_ S[:,l]| s.t. F_ S[i,l]∈e^jℬ/√(N_ t), ∀ i. Since the definition domain of the variable is finite, it can be solved by the widely-used branch and bound (B&B) algorithm <cit.>, which adopts tree search strategy but applies pruning rules to skip suboptimal regions of the tree. The tree has N_ t+1 levels, and the nth level branch represents the value of nth PSs, which has 2^B child nodes. We perform a breadth-first search with N_ t iterations, during which it maintains the currently best available solution q^*, global lower bound ℒ and available set 𝒢. The overall algorithm is summarized in Algorithm 1. At initialization, 𝒢 is consist of the root note v_0, q^* is randomly set as q_0 and ℒ is initialized as ℒ_0=|a_N_ t^ H(θ_l)q_0|. In the nth iteration, for each new node, its upper bound and lower bound are calculated for updates. For each node v, denote the first n elements of F_ S[:,l] as q_ L∈ℂ^n× 1 and the remained N_ t-n elements as q_ R∈ℂ^(N_ t-n)× 1. The objective function can be rewritten as f=|a_N_ t^ H(θ_l)[1:n]q_ L+a_N_ t^ H(θ_l)[n+1:N_ t]q_ R|. According to triangle inequality, the upper bound can be achieved as f_ UB=|a_N_ t^ H(θ_l)[1:n]q_ L|+N_ t-n with q_ R=q_ R,UB. The ith phase of q_ R,UB satisfies ∠q_ R,UB[i]=∠(a_N_ t^ H(θ_l)[1:n]q_ L)+∠(a_N_ t(θ_l)[i]), where ∠(x) represents the phase of x. If f_ UB is lower than the current global lower bound ℒ, all leaves below node v are suboptimal and will be pruned. Otherwise, feasible q_ R will be obtained as q_ R, LB by quantifying q_ R,UB nearby with given B bit resolution, i.e., q_ R,LB=min_q_R[i]∈e^jℬ/√(N_ t),∀ i‖q_ R-q_ R,UB‖_2. 
Meanwhile, the lower bound can be obtained as f_ LB=|a_N_ t^ H(θ_l)[1:n]q_ L+a_N_ t^ H(θ_l)[n+1:N_ t]q_ R,LB|. If lower bound f_ LB is larger than ℒ, ℒ will be updated as f_ LB and q^* should be updated as [q_ L^ T, q_ R,LB^ T]^ T. It is noteworthy that the solution of B&B algorithm is optimal with reduced computation time in comparison with the exhaustive search. For 𝒫.12, it has been shown <cit.> that entry-wise iteration can be a low-complexity effective method for finite resolution PSs case. In detail, for columns z from 1 to K, the F_ C[:,z] and W_ RF[:,z] are optimized successively. For each column, each entry is optimized to minimize objective function while keeping the others fixed until convergence. For example, F_ C[i,z] can be updated by solving min_F_ C[i,z] MSE_ C s.t. F_ C[i,z]∈e^jℬ/√(N_ t). Considering that appropriate initialization can improve the performance of the proposed iteration algorithm <cit.>, an improved initializer is proposed below, especially for the low-bit case. Firstly, the following proposition illustrates the transformation of the initial problem. As the SNR increases, 𝒫.12 with B-bit PSs tends to be asymptotically equivalent to the following mixed-integer semi-definite programming (MISDP) problem: min_w,F_ C,W_ RF w s.t. Z= [ w/K+WI_K+W G; G^ H J ]≽ 0, F_ C[i,j]∈e^jℬ/√(N_ t),∀ i,j, W_ RF[i,j]∈e^jℬ/√(N_ t),∀ i,j, where G= [ diag(t)D^1/2F_ SH^ HW_ RF; σI_K ], and J=W_ RF^ HHF_ CF_ C^ HH^ HW_ RF. See Appendix <ref>. Considering that F_ C and W_ RF are coupled in the nonlinear constraint (<ref>), we solve the problem by alternatively optimizing F_ C and W_ RF until convergence. Below, we take F_ C as an example to illustrate how to transform the constraint (<ref>) into a linear matrix inequality (LMI) constraint. Denote a=π/2^B-1[-(2^B-1),...,(2^B-1)]^ T, c=cos(a) and s=sin(a). A series of binary vectors x^i,j∈ℂ^(2^B+1-1)×1 and y^i,j,i^',j^'∈ℂ^(2^B+1-1)×1 are introduced, where x^i,j[t] indicates whether ∠F_ C[i,j] is a[t] and y^i,j,i^',j^'[t] indicates whether ∠F_ C[i,j]-∠F_ C[i^',j^'] is a[t]. The problem in proposition <ref> with fixed W_ RF can be equivalently transformed as min_w,x^i,j,y^i,j,i^',j^' w s.t. (<ref>), ‖x^i,j‖_1=1, e^ Tx^i,j=0, a^ T(x^i,j-x^i^',j^')=a^ Ty^i,j,i^',j^', where e∈ℂ^(2^B+1-1)×1 is a constant vector whose first 2^B-1 elements are 1 and others are 0. J is the linear function of y^i,j,i^',j^'. See Appendix <ref>. Similarly, for the solution of W_ RF, we can transform G and J to the linear function of x^i,j and y^i,j,i^',j^'. The above problem is a MISDP problem with linear constraints and can be solved using the outer approximation method, which can seek existing optimization toolboxes such as YALMIP <cit.>. The algorithm is summarized as Algorithm 2. §.§ Digital-part Optimization for BPM-ISAC With the fixed analog part, the original optimization problem in Section <ref> is transformed as follows: [box=]align 𝒫.2:  Optimization for communication and sensing's digital part min_P_C,P_S MSE_S(P_S) s.t. (<ref>)-(<ref>). Since P_ S, P_ C and W_ BB are coupled in Eq. (<ref>) in a non-convex manner, 𝒫.2 is still a non-convex problem. Thus an alternating optimization algorithm is proposed to alternatively update P_ S, P_ C and W_ BB until convergence. For initialization, we set P_ S, P_ C, and W_ BB as diag(t), I_K, and W_ BB,0, respectively. For each iteration, the following three steps are executed in order. 1) Update P_ S with fixed P_ C and W_ BB by solving min_b MSE_S(P_ S) s.t. (<ref>) (<ref>). 
Since both the objective function and constraints are quadratic, the above problem is a convex quadratically constrained quadratic program (QCQP) problem, which can be solved via convex optimization toolbox <cit.>. 2) Update P_ C with fixed P_ S and W_ BB to minimize communication symbol estimation MSE by solving min_P_ C MSE_C(P_ C) s.t. (<ref>). Since the definition domain of P_ C is a convex set, and the objective function and the constraint are convex, the above problem is also a convex QCQP problem. 3) Update W_ BB with fixed P_ S and P_ C according to Eq. (<ref>). The overall alternating algorithm is presented in Algorithm <ref>. §.§ Beamspace MUSIC-based DoA estimation For the echo receiver with hybrid architectures, only the baseband received signal can be obtained. Therefore, we resort to beamspace MUSIC algorithm <cit.> for DoA estimation as below. Assume no significant angle change takes place within the sensing coherent time. During each scan, the received echo signal after analog combing is given at Eq. (<ref>). Due to finite sampling times, it is impractical to obtain perfect statistical covariance matrix R_ B=𝔼(y_ By_ B^ H). Instead, the following sample covariance matrix is employed as follows: R̂_ B=1/L∑_l=1^Ly_ B^l(y_ B^l)^ H, where y_ B^l is the baseband received echo signal of the lth sample. Then the covariance matrix is decomposed to signal subspace and noise subspace as R̂_ B=E_ sΛ_ sE_ s^ H+E_ nΛ_ nE_ n^ H, where Λ_ s∈ℂ^N× N and Λ_ n∈ℂ^(W-N)× (W-N) contain N largest eigenvalues and W-N other eigenvalues, respectively. E_ s∈ℂ^W× N and E_ n∈ℂ^W× (W-N) are the corresponding eigenvectors. The MUSIC spectrum is derived as P_ BMUSIC(θ)=d^ H(θ)d(θ)/d^ H(θ)E_ nE_ n^ Hd(θ), where d(θ)=F_ S^ Ha_N_ e(θ) is the beamspace steering vector. Then the locations of the K largest peaks of the spectrum are obtained as the DoA estimation results. § PERFORMANCE ANALYSIS In this section, the complexity and convergence of the proposed algorithm are theoretically analyzed. The APEP and CRB are derived to illustrate the theoretical performance of communication and sensing. In addition, the number of RF chains for sensing are discussed. §.§ Complexity Analysis The complexity of the analog part design with MBS structures has been derived in Section <ref>. For algorithm 1, the worst-case theoretical complexity of the B&B algorithm is 𝒪(2^BN_t), but the pruning rules can substantially reduce actual solving time. For algorithm 2, the overall complexity includes the initialization process and entry-wise iteration. For initialization, the problem in proposition 2 can be transformed into mix-integer linear programming problems and solved by the branch and bound algorithm, the complexity of which is 𝒪(2^BN_ tK). Since the initialization scheme is only applied to low-bit cases, the complexity is acceptable. The complexity of the entry-wise iteration part is 𝒪(N_ iterK(N_ t+N_ r)2^B), where N_ iter denotes the number of iterations. For algorithm 3, the complexity of solving QCQP problems is 𝒪(N_ iter^'(T^3.5+W^3.5) log(1/ϵ)) <cit.> by the interior-point method given accuracy level ϵ, where N_ iter^' is the number of iteration rounds. §.§ Convergence Analysis Algorithm <ref> has a finite number of operational steps. Algorithm <ref> converges because the objective function is non-increasing and is lower-bounded by 0. For Algorithm <ref>, the convergence and existence of the solution are not obvious and analyzed as below. 
For the first iteration, when updating b, the constraint (<ref>) is transformed into ‖D^1/2W_ BB,0H_ Sdiag(b)‖_F^2 ≤μ‖D^1/2W_ BB,0H_ Sdiag(t)‖_F^2. Since b^(1)=μt and p^(1)=p^(0) are the feasible solution, the problem must have a solution in the first iteration. Suppose after ith iteration, all constraints are satisfied. During (i+1)th iteration, denote the objective value at step j as ε_j. For step 1), we have ε_1(b^(i+1))≤ε_1(b^(i)), and all constraints except for Eq. (<ref>) are satisfied. For step 2), we have ε_2(b^(i+1))=ε_1(b^(i+1)). The constraint (<ref>) is satisfied since p^(i+1) is optimized to lower the symbol MSE. After step 3), Eq. (<ref>) is satisfied and ε_3(b^(i+1))=ε_2(b^(i+1)). It is worth noting that the constraint (<ref>) is still satisfied since W_ BB^(i+1) is the LMMSE equalizer, which further lowers the symbol MSE. Therefore, after (i+1)th iteration, we have ε_3(b^(i+1)) ≤ε_1(b^(i)), i.e., the objective function is non-increasing and all constraints are satisfied. Recalling that the objective value is lower bounded, the convergence of the proposed alternating optimization is guaranteed. In Fig. <ref>, we set the convergence tolerance as 0.001 and the convergence performance of Algorithm 3 with different μ and N_t is presented. It can be observed that the convergence speed slows down as the value of μ decreases and the number of transmit antennas increases. §.§ APEP Analysis The APEP is derived to illustrate the theoretical BER performance of the proposed scheme. Due to the presence of finite-bit PSs and digital part optimization, obtaining an exact APEP is challenging. For simplicity, we analyze sub-beamspace with MBS and the unoptimized digital part, assuming infinite sensing interference power. Firstly, we explain how the number of effective paths decreases due to the interference of sensing beams. As shown in Fig. <ref>, there are P=7 paths in the original N_ r× N_ t=7×7 beamspace channel and we neglect the off-grid beam leakage. BPM refers to the communication-only version of the proposed approach. For BPM-ISAC, W sensing beams cover M_ R=3 paths, which further cover M_ B=2 received beams. For example, the path H̅(1,2) cannot be used for communication because its received beam will be interfered with by H̅(1,5). Therefore, the communication paths can only be chosen from the rest unaffected (N_ t-W)×(N_ r-M_ B) beam pairs. In this case, the number of effective paths is M_ C=3. In proposition 3, the probability distribution of the number of effective paths is derived. For mmWave channel with P paths and W sensing beams, the probability distribution of the number of effective communication paths is P(M_ C=c)= {[ P_M_ R(0), c=P; ∑_r=1^P-c∑_b=1^rP_M_ R(r)P_M_ B(r,b)P_M_ C(c,r,b),; c=0,...,P-1. ]. where P_M_ R(r)=C_N_ r(N_ t-W)^P-rC_N_ rW^r/C_N_ tN_ r^P, P_M_ B(r,b)= {[ P_M_ B(r-1,b-1)(N_ r-b+1)W/N_ rW-r+1; +P_M_ B(r-1,b)bW-r+1/N_ rW-r+1, o.w.; 0, (r,b)=(1,1) or b=0. ]. P_M_ C(c, r, b)=C_b(N_ t-W)^P-r-cC_(N_ r-r)(N_ t-W)^c/C_N_ r(N_ t-W)^P-r. See Appendix <ref>. In Fig. <ref>, the probability of the number of effective paths M_C≥ c is given. It can be observed that a larger number of transmit antennas and fewer sensing beams will render more effective communication beams. For mmWave channel with P paths and W sensing beams, the pairwise error probability P(x̅_ C→x̂_C) through maximum-likelihood (ML) detection algorithm is derived as Eq. (<ref>). See Appendix <ref>. 
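As a sanity check on this effective-path model, one can estimate P(M_C ≥ K) by Monte Carlo. The sketch below follows our reading of the on-grid model (beam leakage neglected, as in the analysis, and the sensing beams fixed without loss of generality to the first W transmit columns).

```python
import numpy as np

rng = np.random.default_rng(3)
N_t, N_r, P, W, K = 32, 32, 8, 3, 4
sensing_tx = set(range(W))                   # WLOG, the W sensing beams occupy the first transmit columns

def effective_paths():
    # place P paths on distinct (receive-beam, transmit-beam) pairs, uniformly at random
    idx = rng.choice(N_t * N_r, size=P, replace=False)
    rx, tx = idx // N_t, idx % N_t
    blocked_rx = {r for r, t in zip(rx, tx) if t in sensing_tx}   # receive beams hit via sensing columns
    return sum(1 for r, t in zip(rx, tx) if t not in sensing_tx and r not in blocked_rx)

trials = 50_000
m_c = np.array([effective_paths() for _ in range(trials)])
print("P(M_C >= K) ≈", np.mean(m_c >= K))    # probability that enough paths remain for communication
```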
Then the expression of APEP is derived as P_ APEP= 1/η 2^η∑_x̅_ C∑_x̂_CP(x̅_ C→x̂_C)e(x̅_ C, x̂_C), where P(x̅_ C→x̂_C) is given by proposition <ref> and e(x̅_ C, x̂_C) denotes the number of error bits between x̅_ C and x̂_C. §.§ CRB Analysis To further illustrate the sensing performance of the proposed scheme, the CRB <cit.> of DoA estimation is derived. Employing the Swerling-II model <cit.>, the reflection coefficient β_i is assumed to be constant during each scanning. According to Eq. (<ref>), for L sample times per scanning, the baseband signal Y_ B=[y_ B^1,...,y_ B^L] can be derived as Y_ B=T_ B^ HΞT_ BP_ SX̅_ S+N_ B, where X̅_ S=[x̅_ S^1,...,x̅_ S^L] and x̅_ S^l is the sensing signal of the lth sample. Y_ B obeys complex Gaussian distribution 𝒞𝒩(M_ Y,R_ Y), where M_ Y=T_ B^ HΞT_ BP_ SX̅_ S and R_ Y=R_ B. For the target located in the direction of ψ_i, given the directions of other targets, the CRB of its DoA estimation can be obtained as follows (See <cit.>, Section 8.2.3): CRB(ψ_i) = {- Tr(∂R_ Y^-1/∂ψ_i∂R_ Y/∂ψ_i)+2{ Tr(∂M_ Y^ H/∂ψ_iR_ Y^-1∂M_ Y/∂ψ_i)}}^-1 = {2{ Tr(∂(T_ B^ HΞT_ BP_ SX̅_ S)^ H/∂ψ_iR_ B^-1∂T_ B^ HΞT_ BP_ SX̅_ S/∂ψ_i)}}^-1. Taking the expectation of CRB(ψ_i) with respect to X̅_ S and considering R_x̅_ S=D, the final expression is written as CRB(ψ_i)= 1/2|β_i|^2(Tr(P_ S^ HF_ S^ HȦ_̇i̇^̇ ̇ḢR_ B^-1Ȧ_̇i̇F_ SP_ SD))^-1, where Ȧ_̇i̇=ȧ(ψ_i)a^ T(ψ_i)+a(ψ_i)ȧ^ T(ψ_i). §.§ Extension to Multiple RF Chains for Sensing In the previous modeling, only a single RF chain is dedicatedly spared for sensing. Indeed, the number of RF chains can be extended to W_S, where 1≤ W_S≤ W. In this case, there are W_S out of W beams simultaneously activated, resulting in a total of C_W^W_S patterns. The activation probability matrix D is no longer a diagonal matrix and is determined by the predefined activation probability of each pattern. For instance, when W_S=W, D becomes a matrix filled with 1. By substituting the correct matrix D, the proposed transceiver design can be easily applied to the scenario with multiple RF chains for sensing. It is worth noting that increasing the number of RF chains for sensing can accelerate scanning speed, improving sensing accuracy, especially in high dynamic scenarios. Nevertheless, such an improvement comes at the cost of increased hardware overhead. Therefore, the selection of the number of sensing RF chains should carefully balance the sensing efficiency and hardware cost. § SIMULATIONS In this section, we evaluate the communication and sensing performance of the proposed BPM-ISAC method through numerical simulation. We consider a hybrid mmWave ISAC system, where N_ t=N_ r=N_ e=32 unless otherwise specified. Suppose there are P=8 non-line-of-sight (NLoS) paths with α_i ∼𝒞𝒩(0,1), and θ_i and ϕ_i are uniformly distributed in [-π/2,π/2). For communication, we adopt 4-QAM modulation and set K=4, N_ C=3, and L=20. For sensing, we set W=3 and T_ R=5. Without loss of generality, we assume two targets are located at ψ_1=39^∘ and ψ_2=43^∘ with reflection coefficients of |β_1|=|β_2|=1. The scanning directions of interest is set as [38^∘,44^∘,50^∘]. The ideal beampattern is t=√(T_ R)1_W and the activation probability matrix is D=1/WI_W. For algorithms 2 and 3, convergence tolerance is set as 0.001, and the maximum number of iterations is set as 50. The signal-to-noise ratio (SNR) is defined as E_b/N_0=N_ C/ησ^2. To simplify the representation, `BPM-ISAC-MBS' and `BPM-ISAC-MAS' denote our proposed method with MBS and MAS, respectively. 
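The CRB expression above can be evaluated numerically for the simulation setup just described. The sketch below differentiates the mean of the baseband echo with respect to each DoA and applies the general Gaussian-mean CRB formula; the matched-steering F_S, the cyclic scan schedule X̄_S, and the noise covariance R_B = σ²F_S^HF_S are our assumptions rather than details taken from the paper.

```python
import numpy as np

N_t, W, L, sigma2 = 32, 3, 20, 0.1
psi = np.deg2rad([39.0, 43.0])                      # target DoAs
beta = np.ones(len(psi), complex)                   # unit reflection coefficients
theta = np.deg2rad([38.0, 44.0, 50.0])              # scanning directions

def steer(N, th):
    return np.exp(-1j * np.pi * np.arange(N) * np.sin(th)) / np.sqrt(N)

def dsteer(N, th):
    # derivative of the steering vector with respect to the angle
    return -1j * np.pi * np.arange(N) * np.cos(th) * steer(N, th)

F_S = np.column_stack([steer(N_t, th) for th in theta])   # matched sensing beams (assumed)
P_S = np.sqrt(5.0 / W) * np.eye(W)
X_S = np.eye(W)[:, np.arange(L) % W]                      # L one-hot scans cycling over the W beams
A = np.column_stack([steer(N_t, p) for p in psi])
Xi = np.diag(beta)
T_B = A.conj().T @ F_S
R_B = sigma2 * F_S.conj().T @ F_S                         # noise covariance after analog combining (assumed)
R_inv = np.linalg.inv(R_B)

for i, p in enumerate(psi):
    dT = np.zeros_like(T_B)
    dT[i, :] = dsteer(N_t, p).conj() @ F_S                # only the i-th row of T_B depends on psi_i
    dM = (dT.conj().T @ Xi @ T_B + T_B.conj().T @ Xi @ dT) @ P_S @ X_S
    crb = 1.0 / (2.0 * np.real(np.trace(dM.conj().T @ R_inv @ dM)))
    print(f"CRB(psi_{i+1}) = {crb:.2e} rad^2")
```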
For comparison, some relevant methods and variants are introduced. `SPIM-ISAC' refers to <cit.> which utilizes K strongest spatial paths for communication and `GBM' refers to <cit.>. `P-BPM-ISAC' denotes the plain version of `BPM-ISAC-MBS', which utilizes K beams simultaneously without index modulation. `BPM-ISAC-MBS' with maximum SINR-based beam selection criterion is also presented, i.e., the beam pairs with the largest signal-to-interference-plus-noise-ratio (SINR) are selected, where the SINR of beam pairs (i,j) is defined as SINR[i,j]=|H̅[i,j]|^2/∑_k∈Ω|H̅[i,k]|^2+σ^2. For `EDC-ISAC', fully digital architecture is adopted and eigenvectors corresponding to K largest eigenvalue of the spatial channel are utilized to construct EDC. §.§ Communication Performance In Fig. <ref>, we compare the BER performance of BPM-ISAC-MBS with μ=0.5, its variants, and other schemes. For a fair comparison, all schemes adopt the 4-QAM modulation to keep the same SE as 8 bps/Hz. SPIM-ISAC <cit.> exhibits high BER at high SNR due to severe sensing interference. BPM-ISAC-MBS with max-SINR beam selection criterion performs worse, indicating the advantage of the proposed min-MSE criterion. In high SNR regions, BPM-ISAC demonstrates lower BER than P-BPM-ISAC, highlighting the superiority of beam pattern modulation. In addition, the performance of GBM is provided as a reference, which is the special case of BPM-ISAC-MBS without sensing interference. In Fig. <ref>, the BER performance of BPM-ISAC-MBS with different N_ t and μ is presented. As μ increases, strengthening the communication constraint, the BER performance gradually decreases. The BER performance of N_ t=64 is better than the case of N_ t=32 due to the array gain. The BER performance with the on-grid beamspace channel and unoptimized digital precoder is presented, which is consistent with APEP analysis at high SNR regions. It can be observed that the on-grid case has better BER performance than the normal case at the low SNR region. This is due to that Gaussian noise is the main interference factor at low SNR and the communication beams of the on-grid case have more concentrated energy without beam leakage. At high SNR, sensing interference becomes the main interference and these two cases perform similarly. In Fig. <ref>, the BER performance of BPM-ISAC-MBS and BPM-ISAC-MAS with different bit resolutions are illustrated. For BPM-ISAC-MAS, the BER decreases with the increase in bit number due to the higher freedom degree of the optimized beam pattern. In addition, the BER performance of MISDP-initialized BPM-ISAC-MAS with 1-bit PSs is presented. To reduce computation time, F_ C and W_ RF have been optimized only once alternatively. It is observed that with proper initialization, the BER of 1-bit BPM-ISAC-MAS approaches BPM-ISAC-MBS at high SNR. §.§ Sensing Performance In Fig. <ref>, we present the normalized beampattern of the proposed method at a certain moment when the sensing beam pointing at 38^∘. It can be observed that the strongest beam points in the direction of interest, while multiple other beams are activated for communication. Due to the discrete codewords, there exists a certain deviation from the desired direction for BPM-ISAC-MBS, which can be neglected for massive antennas. Compared with BPM-ISAC-MBS based on beamspace, BPM-ISAC-MAS offers a more flexible beam pattern, enhancing the equality of the equivalent digital channel. In Fig. 
<ref>, the beampattern MSE versus weighting coefficient μ is presented to illustrate the beampattern performance of the proposed method under different μ values. The beampattern MSE decreases with the increase of μ because the augmented communication constraint compromises the power allocation of sensing beams. In addition, the BPM-ISAC-MBS without communication digital precoder optimization has a higher beampattern MSE. This is because optimized communication power allocation can improve communication performance and implicitly relax the constraint on sensing power. To further validate the sensing performance of the proposed method, root mean square error (RMSE) of DoA estimation versus sensing SNR using beamspace MUSIC algorithm <cit.> is shown in Fig. <ref>. The sensing SNR is defined as the ratio between the T_ R and the noise power of ξ_ R. It can be observed that, at high SNR, there are different gaps between the RMSE of DoA estimation and the ideal CRB defined in Eq. (<ref>). This is due to the varying degrees of suppression of sensing power under different constraints. The performance of DoA estimation is consistent with the beampattern performance, indicating the effectiveness of choosing the beampattern as the sensing performance metric. §.§ Communication and Sensing Trade-off In Fig. <ref>, the communication and sensing trade-off curves between BER and beampattern MSE among different schemes are presented for fair comparison. Within the testing scope, BPM-ISAC consistently outperforms other alternatives. It is notable that for large beampattern MSE, i.e., the sensing power is limited, the EDC-ISAC scheme achieves similar BER performance as BPM-ISAC-MBS with 2-bit PSs. However, as the sensing power increases, the BER performance of EDC-ISAC and SPIM-ISAC sharply deteriorates, whereas the proposed scheme demonstrates significant advantages thanks to effective optimization. BPM-ISAC-MBS with 2-bit PSs demonstrates an advantage over BPM-ISAC-MAS due to the higher degree of freedom of analog precoders. In addition, the performance of BPM-ISAC without digital-part optimization and P-BPM-ISAC are provided to demonstrate the effectiveness of power allocation and beam pattern modulation, respectively. § CONCLUSIONS In this paper, we have proposed a novel beam pattern modulation embedded mmWave ISAC hybrid transceiver design, termed BPM-ISAC. BPM-ISAC aims to retain the SE benefits of primitive beamspace modulation schemes while addressing performance bottlenecks in their extension to ISAC functionalities. To ensure near-optimal performance for BPM-ISAC, we formulated an optimization problem to minimize the sensing beampattern MSE under the symbol MSE constraint and solved it by optimizing analog and digital parts sequentially. Both the MBS and MAS hybrid structures are considered for analog configurations. Theoretical analysis and simulation results verified that the proposed BPM-ISAC offers an overall improved sensing and communication trade-off. § PROOF OF PROPOSITION 1 At high SNR, it can be approximated that W_ BB,0≃H_ C^†=(H_ C^ HH_ C)^-1H_ C^ H. Thus the first item of Eq. (<ref>) can be approximated as N_ C/K‖W_ BB,0H_ C-I_K‖_F^2≃ 0 and the objective function is simplified as MSE_ C≃ Tr((W_ RF^ HHF_ Cdiag(t)^2DF_ SH^ HW_ RF+σ^2I_K) .(W_ RF^ HHF_ CF_ C^ HH^ HW_ RF)^-1). Let MSE_ C= Tr(w/K+TI_K+T). With the Schur complement <cit.>, it can be proved <cit.> that minimizing MSE_ C is equivalent to minimizing w and 𝒫.12 can be reformulated as shown in proposition 1. 
§ PROOF OF PROPOSITION 2 Let F̅=F_ CF_ C^ H. For each element, we have F̅[i,j]= ∑_k=1^KF_ C[i,k]F_ C[j,k] = ∑_k=1^Ke^j(∠F_ C[i,k]-∠F_ C[j,k]) = ∑_k=1^K cos△θ_i,k,j,k+jsin△θ_i,k,j,k = ∑_k=1^K c^ Ty^i,k,j,k+js^ Ty^i,k,j,k, where △θ_i,k,j,k=∠F_ C[i,k]-∠F_ C[j,k]. Thus F_ CF_ C^ H is transformed into the linear function of y^i,j,i^',j^'. § PROOF OF PROPOSITION 3 Let P_M_ R(r) and P_M_ B(r,b) represent the probability that sensing beams cover M_ R=r paths and these paths cover M_ B=b received beams, respectively. Let P_M_ C(c,r,b) represents the probability that M_ C=c paths are available for communication when M_ R=r and M_ B=b. Then the probability distribution of M_ C can be easily obtained as Eq. (<ref>). Both P_M_ R(r) and P_M_ C(c,r,b) belong to the classical probability model and can be derived as Eq. (<ref>) and Eq. (<ref>) using the combination number formula. For P_M_ B(r,b), we can obtain it through a recursive process as Eq. (<ref>). § PROOF OF PROPOSITION 4 We assume that when the number of effective paths M_ C<K, effective communication cannot be achieved and BER is set to 0.5. When M_ C≥ K, denote γ_i=P/N_ tN_ rH_ C^2[i,i] and △ x_i=x̅_ C[i]-x̂_C[i], and then the pairwise error probability is given as P(x̅_ C→x̂_C) = ∑_c=K^P 𝔼_M_ C=c{Q(√(‖H_ C(x̅_ C-x̂_C)‖_2^2/2σ^2))}+1/2^ηP(M_ C<K) (b)≃ ∑_c=K^P 𝔼_M_ C=c{1/12exp(-N_ tN_ r/4Pσ^2∑_i=1^Kγ_i^2△ x_i^2). +1/4.exp(-N_ tN_ r/3Pσ^2∑_i=1^Kγ_i^2△ x_i^2) }+1/2^ηP(M_ C<K), where (b) is for that Q(x)≃1/12e^-x^2/2+1/4e^-2x^2/3. According to Eq. (<ref>), γ_i follows a unit exponential distribution. Assume that K out of c largest paths are selected, satisfying γ_1<γ_2⋯<γ_K. Thus the probability distribution of γ=[γ_1,⋯,γ_K]^ T is given by f(γ)=c!/(c-K)!(1-e^-γ_1)^c-K∏_i=1^Ke^-γ_i. Then the first item of Eq. (<ref>) is derived as ∫_0^+∞∫_γ_1^+∞⋯∫_γ_K-1^+∞f(γ)e^-N_ tN_ r/4Pσ^2∑_i=1^Kγ_i^2△ x_i^2dγ_1⋯γ_K = c!/(c-K)!∏_j=2^Kn_j∫_0^+∞e^-γ_1n_1(1-e^-γ_1)^c-Kdγ_1 = c!/(c-K)!∏_j=2^Kn_j𝔹(n_1,c-K+1), where n_j=∑_i=j^K(N_ tN_ r/4Pσ^2△ x_i^2+1) and 𝔹(p,q)=∫_0^1x^p-1(1-x)^q-1dx is the Beta function <cit.>. Similarly, we can obtain the second item of (<ref>). Then, the pairwise error probability arrives at Eq. (<ref>). IEEEtran
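Step (b) of the proof above relies on the standard exponential approximation of the Gaussian Q-function. As a quick, illustrative numerical check of that scalar approximation only (not of the full pairwise error probability), one may run the short sketch below.

import numpy as np
from scipy.special import erfc

def Q_exact(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / np.sqrt(2.0))

def Q_approx(x):
    # Exponential approximation used in step (b) of the APEP derivation
    return np.exp(-x**2 / 2.0) / 12.0 + np.exp(-2.0 * x**2 / 3.0) / 4.0

x = np.linspace(0.5, 6.0, 12)
for xi, qe, qa in zip(x, Q_exact(x), Q_approx(x)):
    print(f"x={xi:4.2f}  Q={qe:.3e}  approx={qa:.3e}  ratio={qa/qe:.3f}")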
http://arxiv.org/abs/2405.08913v1
20240514190725
The quantum Mpemba effect in free-fermionic mixed states
[ "Filiberto Ares", "Vittorio Vitale", "Sara Murciano" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "quant-ph" ]
SISSA and INFN, via Bonomea 265, 34136 Trieste, Italy Univ. Grenoble Alpes, CNRS, LPMMC, 38000 Grenoble, France Department of Physics and Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, CA 91125, USA Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA 91125, USA Recently, a novel probe to study symmetry breaking, known as entanglement asymmetry, has emerged and has been utilized to explore how symmetry is dynamically restored following quantum quenches. Interestingly, it has been shown that, in certain scenarios, greater initial symmetry breaking leads to faster restoration, akin to a quantum Mpemba effect. This study focuses on investigating the effect of mixed initial states and non-unitary dynamics on symmetry restoration. The mixedness of a state can arise from different sources. We consider dephasing or dissipative processes affecting initial pure states or unitary dynamics of initially thermal states. In the former case, the stationary state after the quench is independent of the initial configuration, resembling the phenomenology of the classical Mpemba effect. Investigating the XY spin chain model, through a combination of analytical calculations and numerical simulations, we identify the conditions for the occurrence of the quantum Mpemba effect. It turns out that this phenomenon still occurs in the presence of dissipation or at finite temperature, even though it will be eventually suppressed as the state becomes more mixed. The quantum Mpemba effect in free-fermionic mixed states Sara Murciano May 20, 2024 ======================================================== § INTRODUCTION A question barely considered in the literature is how an initially broken symmetry in a many-body quantum system evolves in time under a dynamics that preserves it. In a very recent work <cit.>, this problem has been examined for a quantum quench: a spin-1/2 chain is initialized in a non-equilibrium state that breaks the U(1) particle number symmetry and is let evolve under the unitary dynamics described by the Hamiltonian of the XX spin chain. Since the chain undergoes a unitary evolution, it never relaxes globally and the symmetry is not restored in the whole system. However, any portion of it does relax in the thermodynamic limit into a stationary state that respects the symmetry. A surprising finding is that the restoration of it may be faster for those initial states that break it more. This phenomenon can be seen as a quantum version of the Mpemba effect <cit.>: the more a system is out of equilibrium, the faster it may relax. Although the Mpemba effect was for a long time viewed as a mere curiosity, today it is acknowledged as a genuine out-of-equilibrium phenomenon that can occur in many physical systems <cit.>, and is nowadays the subject of numerous papers, prompted by recent theoretical developments <cit.> and its observation in experimental controlled setups <cit.>. After Ref. <cit.>, the quantum Mpemba effect (QMPE) has been investigated in free fermionic systems, such as the XY spin chain <cit.>, long-range Kitaev chains <cit.>, or two-dimensional superconductors <cit.>, and in generic one-dimensional interacting integrable systems <cit.>, where the mechanisms and the criteria for its occurrence have been established. This phenomenon has been also reported in chaotic random unitary circuits <cit.>. 
In addition, it has been observed in experiments with an ion-trap quantum simulator that mimics the dynamics of a long-range XX spin chain <cit.>. In parallel, several theoretical <cit.> and experimental <cit.> works have explored the occurrence of other Mpemba effects in open few-body quantum systems connected to thermal reservoirs or undergoing different non-unitary dynamics. A key ingredient of the setups considered to observe the symmetry restoration is that the full system is always initially prepared in a pure state and it is isolated from the environment, evolving unitarily after the quench. It is thus natural to wonder how this phenomenon is affected when the state does not remain pure during the whole dynamics. In realistic conditions, quantum systems are indeed described by mixed states as they are subject to external noise or decoherence. The effects of non-unitary dynamics on symmetry restoration have been analyzed in the experimental work <cit.> and in the theoretical work <cit.>. In Ref. <cit.>, the external environment affects unavoidably the quantum state prepared in the experimental platform. In that setting, the main source of noise is global dephasing, which randomly rotates the spins around the z axis. It is found that the presence of global dephasing does not spoil the QMPE, which occurs even if the dephasing rate is arbitrarily large. Contrarily, if the system evolves only subject to the dephasing noise of the laboratory, neglecting the unitary dynamics, the symmetry is indeed restored but the QMPE is absent. The theoretical work <cit.> examines a quantum quench of a spin chain initially prepared in a tilted Néel state and undergoing an evolution described by the XX Hamiltonian in the presence of global gain and loss dissipation. Remarkably, in the pure unitary dynamics case, the particle number symmetry is not restored because the system locally relaxes to a non-Abelian generalized Gibbs ensemble (GGE) <cit.>. However, when the chain is connected to the environment via gain and loss terms, the symmetry is locally restored and the QMPE can occur. The goal of the present manuscript is to analyze the effects of both non-unitary time evolution and initial symmetry-breaking mixed states in the context of symmetry restoration and QMPE. We consider a paradigmatic model: the XY spin chain. This is the simplest but non-trivial instance of a system that explicitly breaks the U(1) particle number symmetry and includes the case of the XX spin chain – which respects it. As shown in Ref. <cit.>, this symmetry is dynamically restored for quantum states initialized in the ground state of the XY spin chain Hamiltonian and evolving under the XX one. For different values of the initial couplings, the QMPE can occur. Here we will study this quench in the presence of global gain and loss dissipation and, separately, of local dephasing. We will also consider the same quench dynamics initializing the chain at a finite temperature and keeping it disconnected from any environment after the quench. This setup is particularly useful for investigating how the mixedness of the initial configurations affects the symmetry restoration. To probe the non-equilibrium dynamics of the symmetry and the QMPE, we employ the entanglement asymmetry <cit.>. This is a quantum information based observable that captures the non-local correlations and measures how much a symmetry is broken in a part of an extended quantum system. 
Apart from being useful to study U(1) symmetries in quantum quenches, the entanglement asymmetry has been also applied to investigate discrete symmetries <cit.>, generic compact Lie groups in matrix product states <cit.>, CFTs with non-topological defects <cit.>, symmetry breaking Haar random states <cit.>, and confinement <cit.>. Here we calculate the entanglement asymmetry in the quantum quenches in the XY spin chain that we have mentioned; using the quasiparticle picture, we obtain the exact analytic expression for the time evolution of the asymmetry in the presence of gain and loss dissipation and when the system is initially prepared at a finite temperature. Moreover, we derive the conditions for the QMPE to occur in these cases. The paper is organized as follows. In Sec. <ref>, we introduce the entanglement asymmetry and we discuss how to calculate it from the charged moments of the reduced density matrix in the case of fermionic Gaussian states. We also review the main results obtained in Ref. <cit.> for the entanglement asymmetry and the QMPE in a quantum quench from the ground state of an XY spin chain to the XX chain. In Sec. <ref>, we study the effects on the entanglement asymmetry and the QMPE of gain and loss dissipation and local dephasing in this quench. In Sec. <ref>, we analyze the entanglement asymmetry in the XY spin chain at a finite temperature and then the time evolution after a quantum quench to the XX chain. In Sec. <ref>, we draw our conclusions and discuss future prospects. § ENTANGLEMENT ASYMMETRY AND CHARGED MOMENTS Let us start with an extended quantum system in a state described by a density matrix ρ, either mixed or pure, that can be divided into two spatial regions, A and B. The state of the system in A is given by the reduced density matrix ρ_A=Tr_B(ρ), obtained by tracing out the degrees of freedom of the complementary subsystem B. We consider that the dynamics of the system has a global additive conserved charge, Q=Q_A⊗1_B+1_A⊗ Q_B, that generates a U(1) symmetry group. If ρ respects this symmetry, then [ρ_A, Q_A]=0 and ρ_A displays a block-diagonal structure in the charge sectors of Q_A. In that case, a very active problem is to identify and compute the contributions to the entanglement of each symmetry sector <cit.>. On the other hand, if the state ρ_A breaks the symmetry, then [ρ_A, Q_A]≠ 0 and ρ_A is not block-diagonal. To measure how much ρ_A breaks the symmetry, we can use the entanglement asymmetry, defined as Δ S_A=S(ρ_A,Q)-S(ρ_A), where S(ρ)=- Tr(ρlogρ) is the von Neumann entropy associated to the density matrix ρ. The matrix ρ_A,Q is obtained from ρ_A as ρ_A,Q=∑_q∈ℤΠ_qρ_AΠ_q, where Π_q is the projector onto the eigenspace of Q_A with charge q. Thus ρ_A, Q is block-diagonal in the eigenbasis of Q_A. As in previous works, to calculate the asymmetry we can exploit the replica trick, and thus we resort to the Rényi entropies, S^(n)(ρ)=1/1-nlogTr(ρ^n), whose limit n → 1 gives the von Neumann entropy (Eq. (<ref>)). With this in mind, the definition of Eq. (<ref>) can be extended by replacing the von Neumann entropy S(ρ) with the Rényi entropies S^(n)(ρ), Δ S_A^(n)=S^(n)(ρ_A, Q)-S^(n)(ρ_A). The main advantage of the Rényi entanglement asymmetry is that it is easier to calculate for integer n≥ 2, and Eq. (<ref>) can be recovered by taking the limit lim_n→ 1Δ S_A^(n)=Δ S_A. Moreover, for integer n≥ 2, it can be experimentally accessed <cit.> via randomized measurements <cit.>. As already discussed in Ref. 
<cit.>, both the von Neumann (<ref>) and Rényi (<ref>) entanglement asymmetries satisfy two fundamental properties as measures of symmetry breaking: (i) they are non-negative, Δ S_A^(n)≥ 0, (ii) they vanish, Δ S_A^(n)=0, iff the state of subsystem A respects the symmetry associated to Q, namely when [ρ_A, Q_A]=0. §.§ Charged moments The calculation of the Rényi entanglement asymmetries (<ref>) for integer n≥ 2 is usually simpler because they can be expressed in terms of the charged moments of ρ_A, which can be explicitly computed in several settings, e.g. for fermionic Gaussian states <cit.>, in CFTs <cit.>, or for Haar random states <cit.>. Using the Fourier representation of the projectors Π_q, ρ_A, Q can be rewritten as ρ_A, Q=∫_-π^π dα/2πe^-iα Q_Aρ_A e^iα Q_A. Therefore, its moments are given by Tr(ρ_A, Q^n)=∫_-π^π dα_1… dα_n/(2π)^n Z_n(α), where α={α_1,…,α_n} and Z_n(α) are the (generalized) charged moments Z_n(α)= Tr[∏_j=1^nρ_A e^iα_j,j+1Q_A], with α_ij≡α_i-α_j and α_n+1=α_1. We show in the next section how Z_n(α) can be efficiently computed for fermionic Gaussian states. §.§ Unitary dynamics at zero temperature In order to analyze the results that we find in the manuscript, it is useful to recall the behavior of the entanglement asymmetry in the quantum quench from the XY to the XX spin chain studied in <cit.>. In that work, the chain is initialized in the ground state |Ψ(0)⟩ of the XY spin chain of N sites H_ XY=-1/4∑_j=1^N[(1+δ)σ^x_j σ^x_j+1+(1-δ)σ^y_j σ^y_j+1 +2h σ_j^z], which breaks the U(1) symmetry generated by the transverse magnetization Q=1/2∑_jσ^z_j. Here σ_j^β are the Pauli matrices at the site j, δ≠ 0 is the anisotropy parameter breaking the U(1) symmetry and h is the value of the external transverse magnetic field. The Hamiltonian in Eq. (<ref>) can be written in terms of the fermionic operators a_j=(a_j^†, a_j) via a Jordan-Wigner transformation, which yields H_ XY=-1/2∑_j=1^N(a^†_j a_j+1+δ a^†_ja^†_j+1 +h.c.+2h a^†_ja_j). We let |Ψ(0)⟩ evolve with the XX spin chain Hamiltonian H_ XX, which corresponds to Eq. (<ref>) with δ=h=0, |Ψ(t)⟩= e^-i tH_ XX|Ψ(0)⟩. We observe that [H_ XX,Q]=0 and this time evolution leads to a dynamical restoration of the U(1) symmetry in the subsystem A. The reduced density matrix ρ_A(t) corresponding to the time-evolved state (<ref>) is Gaussian in terms of the fermionic variables a_j. Thus the charged moments Z_n(α) defined in Eq. (<ref>) can be expressed through the two-point correlation function <cit.> Γ_jj'=2Tr[ρ_Aa_j^†a_j']-δ_jj', with j,j'∈ A. For the quench (<ref>), this matrix can be explicitly computed and, in the thermodynamic limit N→∞, it reads Γ_jj'=∫_0^2π dk/2π𝒢_0(k,t)e^-ik(j-j'), j,j'=1,…,ℓ, with ℓ the size of the subsystem A. This expression shows that Γ is a block Toeplitz matrix with symbol 𝒢_0(k,t) the 2× 2 matrix 𝒢_0(k,t)= cosΔ_k σ^z+sinΔ_k σ^y e^2it ϵ_k σ^z, where cosΔ_k =h-cos k/√((h-cos k)^2+δ^2sin^2 k), sinΔ_k =δsin k/√((h-cos k)^2+δ^2sin^2 k), and ϵ_k=-cos(k) is the single-particle dispersion relation of the XX Hamiltonian. In terms of the two-point correlation matrix Γ, the charged moments are given by the determinant <cit.> Z_n(α)=√([(I-Γ/2)^n (I+∏_j=1^n W_j)]), with W_j=(I+Γ)(I-Γ)^-1e^iα_j,j+1 n_A and n_A is a diagonal matrix with (n_A)_2j,2j=1, (n_A)_2j-1,2j-1=-1, j=1, …, ℓ. Combining (<ref>) and some properties of the determinants of block Toeplitz matrices, the full time evolution of the charged moments can be exactly obtained in the hydrodynamic limit t,ℓ→∞ with ζ=t/ℓ fixed. 
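As a concrete illustration of this determinant formula, the following minimal Python sketch (an illustration, not the authors' code) discretizes the k-integral to build the block Toeplitz matrix Γ for a small block, evaluates Z_2(α) as written above, and performs the Fourier integral over α to obtain Δ S_A^(2). The grid sizes, the principal branch of the square root, the reading of the prefactor as ((I-Γ)/2)^n and the direct inversion of I-Γ are simplifying assumptions that are adequate only for small ℓ.

import numpy as np

def symbol_G0(k, delta, h, t):
    # 2x2 symbol of the two-point matrix for the XY ground state evolved with the XX chain
    den  = np.sqrt((h - np.cos(k))**2 + (delta * np.sin(k))**2)
    cosD = (h - np.cos(k)) / den
    sinD = delta * np.sin(k) / den
    eps  = -np.cos(k)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    return cosD * sz + sinD * (sy @ np.diag([np.exp(2j*t*eps), np.exp(-2j*t*eps)]))

def correlation_matrix(ell, delta, h, t, nk=1024):
    ks = 2 * np.pi * (np.arange(nk) + 0.5) / nk
    Gk = np.array([symbol_G0(k, delta, h, t) for k in ks])          # shape (nk, 2, 2)
    Gamma = np.zeros((2*ell, 2*ell), dtype=complex)
    for j in range(ell):
        for jp in range(ell):
            phase = np.exp(-1j * ks * (j - jp))[:, None, None]
            Gamma[2*j:2*j+2, 2*jp:2*jp+2] = (Gk * phase).mean(axis=0)
    return Gamma

def renyi2_asymmetry(Gamma, n_alpha=101):
    dim = Gamma.shape[0]
    I   = np.eye(dim)
    nA  = np.tile([-1.0, 1.0], dim // 2)           # (n_A)_{2j-1,2j-1}=-1, (n_A)_{2j,2j}=+1
    M    = (I + Gamma) @ np.linalg.inv(I - Gamma)  # naive inverse; fine for small, mixed Gamma
    pref = np.linalg.matrix_power((I - Gamma) / 2, 2)
    def Z2(alpha):
        U = np.diag(np.exp(1j * alpha * nA))
        return np.sqrt(np.linalg.det(pref @ (I + M @ U @ M @ np.conj(U))))
    alphas = np.linspace(-np.pi, np.pi, n_alpha)
    ratio  = np.real(np.array([Z2(a) for a in alphas]) / Z2(0.0))
    return -np.log(np.trapz(ratio, alphas) / (2 * np.pi))

Gamma = correlation_matrix(ell=10, delta=0.6, h=0.5, t=2.0)
print("Delta S_A^(2) ~", renyi2_asymmetry(Gamma))

For larger blocks, or when Γ has eigenvalues exponentially close to ±1, an algebraically equivalent evaluation that avoids inverting I-Γ should be preferred. We now return to the hydrodynamic limit t,ℓ→∞ at fixed ζ=t/ℓ.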
In that regime, they read Z_n(α, t)= Z_n(0, t)e^B_n(α, ζ)ℓ, where B_n(α, ζ)=∫_0^2π dk/4π x_k(ζ)∑_j=1^n f(cosΔ_k, α_j, j+1), with x_k(ζ)=1-min(2|v_k|ζ, 1), v_k=ϵ'_k, and f(λ, α)=log(iλsinα+cosα). This expression can be interpreted using the quasiparticle picture of quantum quenches <cit.>. In the quench (<ref>), pairs of entangled quasiparticles are produced and spread ballistically with opposite momentum and velocities v_± k. The key tenet of the dynamics of the charged moments (and of the entanglement entropy) is how the members of the entangled pair are shared between the subsystem A and the rest. Here, the main difference with respect to the standard quasiparticle picture of the entropy is that only configurations in which the two entangled quasiparticles are in the subsystem contribute to the ratio Z_n(α, t) /Z_n(0, t). Since the quasiparticles forming the pair have opposite velocities, at long times they cannot be simultaneously in A. In fact, in Eq. (<ref>), the term x_k(ζ) accounts for the number of entangled pairs of momentum ± k contained in the subsystem A at time t while the rest of the integrand represents their contribution to Z_n(α, t) /Z_n(0, t). This picture has been extended to quantum quenches that generate entangled multiplets <cit.> of arbitrary size in Ref. <cit.>; for example, initial configurations that break translational invariance, in which the symmetry may be not restored <cit.>. From the charged moments (<ref>), we can compute the time evolution of the Rényi entanglement asymmetries by inserting them in Eq. (<ref>). In particular, at time t=0, they grow logarithmically with the subsystem size, Δ S_A^(n)(δ,h,t=0)=1/2logℓ + 1/2logπ g(δ, h)n^1/n-1/4+ O(ℓ^-1). In this expression, the coefficient of the logℓ term is related to the dimension of the symmetry group <cit.> and g(δ, h)=∫_0^2π dk/2πsin^2 Δ_k(δ,h). As noted in Ref. <cit.>, sinΔ_k is the mode occupation of Cooper pairs in the ground state of the XY spin chain (<ref>). Therefore, according to Eq. (<ref>), the initial entanglement asymmetry increases with the number of Cooper pairs that the chain contains in the ground state. This a quite natural result since the pairing term in the Hamiltonian (<ref>) that breaks the U(1) particle number symmetry induces the creation of Cooper pairs. On the other hand, at times t≫ℓ, the Rényi entanglement asymmetries behave as Δ S_A^(n)(δ,h,t)≃nℓ/1-n∫_0^2π dk/16π x_k(ζ)sin^2 Δ_k (δ,h). As a consequence, at late times after the quench, the entanglement asymmetry is governed by the slowest Cooper pairs that are still inside the subsystem A. Now we can take two initial ground states, |Ψ_1(0)⟩ and |Ψ_2(0)⟩, corresponding to different couplings, (δ_1, h_1) and (δ_2, h_2), of the XY spin chain (<ref>), for which Δ S_A^(n)(δ_1, h_1, t=0)>Δ S_A^(n)(δ_2, h_2, t=0). In other words, |Ψ_1(0)⟩ breaks more the U(1) particle symmetry than |Ψ_2(0)⟩. It can occur that there is a time t_I after which the relation (<ref>) is inverted, Δ S_A^(n)(δ_1, h_1, t)<Δ S_A^(n)(δ_2, h_2, t), t>t_I. This means that the entanglement asymmetries of |Ψ_1(0)⟩ and |Ψ_2(0)⟩ intersect at a certain finite time and the symmetry is restored faster for |Ψ_1(0)⟩, indicating that there is quantum Mpemba effect. Although for some fine tuned initial states the entanglement asymmetries can exhibit multiple intersections at finite time <cit.>, here we will assume that only one intersection can occur. Eqs. (<ref>) and (<ref>) are the necessary and sufficient conditions for the occurrence of this phenomenon. 
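For n=2 the sum over replicas in B_n(α, ζ) reduces to log(cos^2α + cos^2Δ_k sin^2α), so the hydrodynamic prediction for Δ S_A^(2)(t) can be evaluated with a few lines of quadrature. The sketch below (an illustration, with coarse grids chosen only for speed) implements exactly this; it reproduces the quasiparticle prediction, not the exact finite-ℓ curves.

import numpy as np

def B2(alpha, zeta, delta, h, nk=4000):
    # quasiparticle exponent for n=2: int dk/(4 pi) x_k(zeta) log(cos^2 a + cos^2 Delta_k sin^2 a)
    k    = 2 * np.pi * (np.arange(nk) + 0.5) / nk
    cosD = (h - np.cos(k)) / np.sqrt((h - np.cos(k))**2 + (delta * np.sin(k))**2)
    xk   = 1.0 - np.minimum(2.0 * np.abs(np.sin(k)) * zeta, 1.0)    # v_k = sin k for eps_k = -cos k
    return 0.5 * (xk * np.log(np.cos(alpha)**2 + (cosD * np.sin(alpha))**2)).mean()

def dS2_quasiparticle(t, ell, delta, h, n_alpha=401):
    alphas = np.linspace(-np.pi, np.pi, n_alpha)
    ratio  = np.exp(ell * np.array([B2(a, t / ell, delta, h) for a in alphas]))
    return -np.log(np.trapz(ratio, alphas) / (2 * np.pi))

for t in [0.5, 2.0, 8.0, 20.0]:
    print(f"t = {t:5.1f}   Delta S_A^(2) ~ {dS2_quasiparticle(t, ell=40, delta=0.6, h=0.5):.4f}")

Only the integrand of B_2 changes in the dissipative and finite-temperature cases considered below; the α-integral is performed in the same way. We now return to the two conditions (<ref>) and (<ref>) for the QMPE.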
Using (<ref>) and (<ref>) respectively, we can re-express them in terms of the density of Cooper pairs in the initial states |Ψ_1(0)⟩ and |Ψ_2(0)⟩ <cit.> ∫_0^2π dk/2πsin^2Δ_k(δ_1,h_1)>∫_0^2π dk/2πsin^2Δ_k(δ_2,h_2), ∫_-k^*_ζ^k^*_ζ dk/2πΥ_k(δ_1,h_1)<∫_-k^*_ζ^k^*_ζ dk/2πΥ_k(δ_2,h_2), for t>t_I, where Υ_k(δ,h)=sin^2Δ_k(δ,h)+sin^2Δ_k+π(δ,h) and k^*_ζ=arcsin(1/(2ζ)). These two conditions show that the QMPE can be observed when the less asymmetric initial state, that contains a smaller number of Cooper pairs, has instead a larger density of slowest Cooper pairs, those with momenta around k=0 and k=π. § GLOBAL QUENCH UNDER DISSIPATIVE EVOLUTION After this brief review of the behavior of the entanglement asymmetry under a unitary time evolution, we ask what happens if in the quench (<ref>) there are dissipative effects and the evolution is non-unitary. In particular, we assume that the non-equilibrium dynamics of the total density matrix ρ(t) is now described by the Lindblad equation <cit.> dρ(t)/ dt= ℒ(ρ(t))=-i[H_ XX,ρ(t)] +∑_j=1^N∑_α=±(L_j,αρ(t) L_j,α^†-1/2{L_j,α^† L_j,α,ρ(t)}) . In this formula, H_ XX is the Hamiltonian of the XX spin chain and the dissipation is encoded in the Lindblad jump operators L_j,α, which in our case will be of the form L_j,-=√(γ_-)a_j, L_j,+=√(γ_+)a_j^†, and L_j,z=√(γ_d)a_j^† a_j. The first two correspond to the creation and annihilation of fermions in all sites at rates γ_+ and γ_-, respectively, and they preserve the Gaussianity of ρ(t) during the dynamics; therefore, we can employ Eq. (<ref>) to study the entanglement asymmetry. We remark that, recently, it has been found that the entanglement dynamics in systems modeled by quadratic fermionic and bosonic Lindblad master equations is described by the quasiparticle picture <cit.>. The last jump operator describes a dephasing with rate γ_d. In this case, the Liouvillian in Eq. (<ref>) is no longer quadratic and, in principle, we cannot exploit the machinery of the previous sections, only valid for Gaussian states. However, following the approach applied in Ref. <cit.> to study the entanglement entropy and the negativity in this scenario, we will neglect the deviations from the Gaussian behavior, and we will compute the Rényi entanglement asymmetry with the two-point correlation matrix using the formula (<ref>). As we will see, the time evolution of these correlation functions can be obtained efficiently, at least numerically. Let us first study the case of a dissipative evolution with gain and loss but no dephasing. In the second part of this section, we investigate the effects of dephasing with no gain and loss terms. §.§ Gain and loss dissipation §.§.§ Charged moments In the presence of only linear gain and loss terms, the time evolved total density matrix ρ(t) is Gaussian and, therefore, it is univocally determined by the two-point fermionic correlation functions (<ref>). Using the results of Refs. <cit.>, one can find that, in this kind of quench, these correlations are of the form (<ref>) with the symbol 𝒢_0(k,t) replaced by 𝒢(k,t)= λ(t) (cosΔ_k σ^z+sinΔ_k σ^y e^2it ϵ_k σ^z) - γ_–γ_+/γ_++γ_-(1-λ(t))σ^z, and λ(t)=e^-(γ_++γ_-) t. The first step to study the entanglement asymmetry is to derive the time evolution of the charged moments Z_n(α, t) in the weakly-dissipative hydrodynamic limit t, ℓ→∞, γ_+,γ_- → 0 with fixed t/ℓ, γ_+ t, γ_-t. As shown in Ref. <cit.> in a quench to the XX spin chain from the Néel state with similar gain and loss terms, the quasiparticle picture for the charged moments discussed in Sec. 
<ref> remains valid in this regime but the dissipation modifies the contribution of the excitations to the charged moments. To determine how this contribution changes, we analyze the behavior of the charge moments in the short and large time limits. At time t→ 0, the symbol (<ref>) reads 𝒢(k,t→ 0) = λ(t)(cosΔ_k σ^z+sinΔ_k σ^y ) -γ_–γ_+/γ_++γ_-(1-λ(t))σ^z, where we keep the terms e^-(γ_++γ_-)t because they are constant in the weakly-dissipative hydrodynamic limit. As t→∞, the terms e^± 2itϵ_k average to zero in Eq. (<ref>) and the symbol reduces to 𝒢(k,t→∞)= λ(t)cosΔ_k σ^z-γ_–γ_+/γ_++γ_-(1-λ(t))σ^z. We can now exploit in Eq. (<ref>) the block Toeplitz structure of the two-point fermionic correlation function to compute the charged moments in these two limits. From the results of Ref. <cit.>, we know that, if T_ℓ[g] is a (2ℓ )× (2ℓ ) dimensional block Toeplitz matrix with symbol a 2× 2 matrix g, then for large ℓ (I+∏_j=1^nT_ℓ[g_j] T_ℓ[g_j']^-1)∼ e^ℓ A, where the exponent A is given by A=∫_0^2π dk/2πlog[I+∏_j=1^n g_j(k)g_j'(k)^-1]. Employing this formula in Eq. (<ref>) with the symbol (<ref>), we derive the stationary value of Z_n(α,t) at large times, log Z_n(α, t→∞)∼ℓ/2∫_0^2π dk/2πlogℳ_α^(n)(k,t→∞), where ℳ_α^(n)(k, t) is the 2× 2 matrix ℳ_α^(n)(k, t)=(I-𝒢(k,t)/2)^n [I+∏_j=1^nI+𝒢(k,t)/I-𝒢(k,t)e^iα_j, j+1σ^z]. We observe that, in the large time limit, the symbol in Eq. (<ref>) commutes with σ^z and the dependence on α_j,j+1 in ℳ_α^(n)(k, t→∞) simplifies. In fact, after some algebra, Eq. (<ref>) becomes log Z_n(α, t→∞)∼ℓ∫_0^2π dk/2πh_n(n(k, t)), with h_n(x)=log[x^n+(1-x)^n] and n(k, t)=1/2[1-λ(t)cosΔ_k +γ_–γ_+/γ_++γ_-(1-λ(t))] is the density of occupied modes with momentum k <cit.>. According to Eq. (<ref>), in the large t limit, the charged moments do not depend on α_j,j+1. This is a very strong result since, as we will see later, it implies that Δ S_A^(n)(t)→ 0 when t→∞, indicating that the particle number symmetry is restored in A in the stationary state. Following the same reasoning, we compute the charged moments in the limit t→ 0, by applying in Eq. (<ref>) the formula (<ref>) with the symbol (<ref>). We get log Z_n(α, t→0)∼ℓ/2∫_0^2π dk/2πlogℳ_α^(n)(k,t→0). We can now determine the contribution of the quasiparticles created in the quench to the charged moments from the difference between their short and long time limits, logZ_n(α,t→∞)/Z_n(α, t→0)∼log Z_n(0, t→∞) -ℓ/2∫_0^2π dk/2πlogℳ_α^(n)(k,t→0), where we took into account that Z_n(α, t→∞)∼ Z_n(0, t→∞). Within the quasiparticle picture, the factor ℓ in the right-hand side of this result should be interpreted as the number of entangled pairs shared by A and B at t→∞. Consequently, this expression can be extended to any time step t by taking into account the number of entangled excitations between A and B at each moment. Assuming that the quasiparticle velocity v_k is not affected by the dissipation, this can be done by inserting the function min(2ζ|v_k|, 1) in the momentum integrals of the right hand side of Eq. (<ref>). We then conclude that, in the weakly-dissipative hydrodynamic regime, the charged moments evolve as Z_n(α, t)= Z_n(0,t)e^B_n(α, ζ)ℓ, where B_n(α, ζ)=∫_0^2π dk/4πx_k(ζ)log[ℳ_α^(n)(k, t→0)/ℳ_0^(n)(k, t→ 0)] and Z_n(0,t)=ℓ∫_0^2π dk/2πmin(2ζ |v_k|,1)h_n(n(k)) +ℓ/2∫_0^2π dk/2πx_k(ζ)logℳ_0^(n)(k, t→ 0). We can compare this result with the one in Eq. (<ref>) for the case of a unitary quench with no dissipation. 
Both expressions are formally identical and the only difference resides in the integrand of the time dependent exponent B_n(α, ζ). Since the term x_k(ζ) remains untouched, only those entangled pairs with both particles inside the subsystem A at time t count but, as already announced, their contribution is altered by the dissipation. While for the pure unitary case, it is determined by the two point correlations of the initial state, here it is encoded in the matrix ℳ_α^(n)(k, t→0), which is given by the short-time two-point correlations (<ref>) in the weakly-dissipative limit. Unlike the case without dissipation, the contribution of the quasiparticle pairs is now time dependent. We note that, if we take γ_±=0 in Eq. (<ref>), we recover the non-dissipative result (<ref>). Eq. (<ref>) is the time evolution of the (neutral) moments of ρ_A in the presence of linear gain and loss dissipation found in Ref. <cit.>. We now want to provide an explicit analytical expression for Eq. (<ref>), which amounts to computing the early time behavior of the charged moments. Unfortunately, this is not an easy task except for some particular cases that we are going to examine. We start with n=2, for which a straightforward calculation of the determinant of the matrix ℳ_α^(n)(k, t→0) in Eq. (<ref>) leads to B_2(α,ζ)∼1/2∫_0^2π dk/2π x_k(ζ)log[1-sin^2αλ^2(t)(γ_++γ_-)^4sin^2Δ_k/(γ_-^2+γ_+^2)(1+λ^2(t))-λ(t)(γ_–γ_+)^2+(1-λ(t))(γ_-^2-γ_+^2)cos^2Δ_k]. For larger integer values of n, we can still find an expression of the charged moments using Eq. (<ref>). However, the explicit formula becomes more and more cumbersome and it is not as compact as Eq. (<ref>) unless we require that gain and loss are balanced, γ_+=γ_-, and we take the large time regime, in which λ(t)=e^-2γ_+ t≪ 1. In that case, using the result in Eq. (<ref>), we find that the leading order behavior in this regime is given by B_n(α,ζ) ∼1/2∫_0^2π dk/2π x_k(ζ) ×log[1-4 λ(t)^2sin^2Δ_k ∑_i<jsin^2(α_ij) ]. The main obstruction to finding a closed formula for the charged moments beyond the regime λ(t)≪ 1 is that, as also occurs for the balanced gain and loss case (<ref>), the coefficient B_n(α, ζ) in Eq. (<ref>) does not admit a factorization in the replica space, as happens in the absence of dissipation, cf. Eq. (<ref>). §.§.§ Entanglement asymmetry Plugging the result (<ref>) for the charged moments in Eq. (<ref>), we can compute the entanglement asymmetry at any time step t and any dissipation γ_±≪ 1, as we show in Fig. <ref>. In that figure, we study the time evolution of the n=2 Rényi entanglement asymmetry after a quench from the ground state of different XY spin chains with balanced (γ_+=γ_-, left panel) and unbalanced gain and loss rates (γ_+≠γ_-, right panel). The solid curves correspond to the quasiparticle prediction (<ref>) and the symbols are the exact numerical value calculated using the determinant formula (<ref>). We obtain an excellent agreement. To make a comparison, we also report the result due to a non-dissipative dynamics as dashed lines. The main effect of the dissipation is to diminish the entanglement asymmetry. As a consequence, the crossings of the entanglement asymmetry curves for the different initial states may be affected, undermining in some cases the QMPE. For example, the entanglement asymmetries for the initial couplings (δ, h)=(0.5, 1.2) and (0.6, 0.5) intersect in the absence of dissipation and there is QMPE. 
As we can see in the plots, this intersection remains for balanced gain and loss but it disappears in the unbalanced case. On the other hand, for the pair of initial parameters (0.6, 0.5) and (0.6, 1.1), their entanglement asymmetries intersect both in the absence of dissipation and for balance and unbalanced gain and loss for the rates considered in Fig. <ref>. As we are going to discuss, a balanced gain and loss, even though suppressing the entanglement asymmetry, does not spoil the QMPE. However, in the unbalanced case, the existence of the QMPE will depend on the relative strength of the dissipation rates and the initial ground states. To see this, we have to study the long time behavior of the entanglement asymmetry. Let us start with the balanced gain and loss dissipation. In this case, the time-evolved charged moments are given by Eqs. (<ref>) and (<ref>). When t→∞, we observe that x_k(ζ)→ 0 and, consequently, B_n(α,ζ)→ 0. Therefore, as done in detail in Ref. <cit.>, we can expand the exponential function in Eq. (<ref>) using (<ref>) up to the first order term in ℓ and, inserting it in Eq. (<ref>), we compute analytically the integral in α. We find Δ S_A^(n)(t) ≃λ(t)^2 ℓ/n-1n2[ ∫_-k^*_ζ^k^*_ζ dk/2π(1-2ζ|v_k|)sin^2Δ_k . +.∫_π-k^*_ζ^π+k^*_ζ dk/2π (1-2ζ|v_k|)sin^2Δ_k ], with k^*_ζ=arcsin(1/(2ζ)). If we perform the change of variables k'=k-π in the second integral, we get Δ S_A^(n)(t)≃λ(t)^2 ℓ n/2∫_-k^*_ζ^k^*_ζ dk/2π(1-2ζ|v_k|)Υ_k(δ, h). This expression should be compared to the one in Eq. (<ref>) for large times in the non-dissipative case. The main difference is that the balanced gain and loss dissipation introduces the factor λ(t), which makes the entanglement asymmetry exponentially decaying in time, instead of algebraically as in the absence of dissipation <cit.>. The same exponential decay has been also found in Ref. <cit.>, studying the entanglement asymmetry in a setup similar to ours, but starting from the tilted Néel state rather than the ground state of the XY spin chain. With Eq. (<ref>), we can see how the criteria (<ref>) for the occurrence of the QMPE change. Remarkably, using it in the condition (<ref>), we find that a balanced gain and loss dissipation does not alter them, since the global factor λ(t) does not depend on the initial state, and cancels out. Another case where we can compute analytically Δ S_A^(n)(t) is when γ_+≠γ_- and n=2. Using Eq. (<ref>) and performing the same large time expansion as in Eq. (<ref>), we get Δ S_A^(2)(t)≃ℓ/4λ(t)^2(γ_++γ_-)^4 ∫_-k^*_ζ^k^*_ζ dk/2π(1-2ζ|v_k|)Υ̃_k(δ, h), where Υ̃_k(δ,h) =sin^2Δ_k(δ,h)/(γ_-^2+γ_+^2)+(γ_-^2-γ_+^2)cos^2Δ_k(δ,h) + sin^2Δ_k(δ,-h)/(γ_-^2+γ_+^2)+(γ_-^2-γ_+^2)cos^2Δ_k(δ,-h). When γ_+=γ_-, the expression above reduces to Eq. (<ref>) for n=2. Unlike for balanced gain and loss, the second condition in Eq. (<ref>) for observing the QMPE gets modified if γ_+≠γ_-. In fact, inserting Eq. (<ref>) into the condition (<ref>), we obtain that the QMPE occurs if ∫_0^2π dk/2πsin^2Δ_k(δ_1,h_1)>∫_0^2π dk/2πsin^2Δ_k(δ_2,h_2), ∫_-k^*_ζ^k^*_ζ dk/2πΥ̃_k(δ_1,h_1)<∫_-k^*_ζ^k^*_ζ dk/2πΥ̃_k(δ_2,h_2), for t>t_I. Now the existence of QMPE does not depend only on the density of Cooper pairs sinΔ_k of the initial configurations, but also on the gain and loss rates. This is also evident from the right panel of Fig. <ref>: the crossing between the asymmetries occurs at different times with respect to the unitary evolution and, in some cases, it disappears (cf. the data for δ=0.5 and δ=0.6). 
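The two conditions above are straightforward to check numerically for a given pair of initial ground states and a given pair of rates. The following sketch evaluates the Cooper-pair densities and the slow-mode integrals of Υ̃_k; the couplings, rates and the value of ζ are arbitrary illustrative choices (in particular, they are not the ones used in Fig. <ref>).

import numpy as np

def sin2_Delta(k, delta, h):
    return (delta * np.sin(k))**2 / ((h - np.cos(k))**2 + (delta * np.sin(k))**2)

def cooper_density(delta, h, nk=4000):
    # int dk/(2 pi) sin^2 Delta_k : density of Cooper pairs in the initial ground state
    k = 2 * np.pi * (np.arange(nk) + 0.5) / nk
    return sin2_Delta(k, delta, h).mean()

def upsilon_tilde(k, delta, h, gp, gm):
    def term(hh):
        s2 = sin2_Delta(k, delta, hh)
        return s2 / ((gm**2 + gp**2) + (gm**2 - gp**2) * (1.0 - s2))   # cos^2 = 1 - sin^2
    return term(h) + term(-h)

def slow_mode_integral(delta, h, gp, gm, zeta, npts=4000):
    kstar = np.arcsin(1.0 / (2.0 * zeta))                  # requires zeta >= 1/2
    k = np.linspace(-kstar, kstar, npts)
    return np.trapz(upsilon_tilde(k, delta, h, gp, gm), k) / (2 * np.pi)

(d1, h1), (d2, h2) = (0.5, 1.2), (0.6, 0.5)     # two initial ground states (example values)
gp, gm, zeta = 0.002, 0.01, 4.0                 # unbalanced rates and t/ell (example values)
print("condition (i),  state 1 initially more asymmetric:",
      cooper_density(d1, h1) > cooper_density(d2, h2))
print("condition (ii), state 1 relaxes faster at late times:",
      slow_mode_integral(d1, h1, gp, gm, zeta) < slow_mode_integral(d2, h2, gp, gm, zeta))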
We observe that, if we consider a dissipative regime with only gain terms (i.e. γ_-=0), then Υ̃(δ,h)=2/γ_+^2 in Eq. (<ref>). Since the dependence of Δ S_A^(2)(t) on the initial state then simplifies, the second condition in Eq. (<ref>) can never be satisfied. As a consequence, we can conclude that, in the presence of only gain terms, the QMPE disappears. However, the fact that the entanglement asymmetry at large times is independent of how much the symmetry is initially broken is reminiscent of the weak quantum Mpemba effect found in Ref. <cit.>. We conclude this section by commenting on the nature of the stationary state of subsystem A. In the thermodynamic limit N→∞, the reduced density matrix of any finite subsystem A tends after the quench to a generalized Gibbs ensemble <cit.>. Since ρ_A(t) is Gaussian, this ensemble is univocally determined by the density of occupied modes n(k, t)=⟨Ψ(t)|c^†_kc_k|Ψ(t)⟩, reported in Eq. (<ref>), in the long time limit. In the unitary case, n(k, t) is conserved by the dynamics and, therefore, is fixed by the initial state. In fact, from Eq. (<ref>), we find that, if γ_±=0, n(k)=(1-cosΔ_k)/2. This implies that the stationary state of A is in that case different for each pair of initial couplings (δ, h). However, in the presence of dissipation, n(k, t) varies with time and, according to Eq. (<ref>), in the long-time limit γ_± t→∞ it reads n(k, t→∞)=1/2[1+(γ_--γ_+)/(γ_-+γ_+)], i.e. it is independent of the initial couplings (δ, h) <cit.>. As a consequence, the stationary state in the presence of gain and loss terms does not depend on the initial condition of our quench. The subsystem A relaxes to the same ensemble for fixed γ_± and any (δ, h); this is the case for the quenches considered in each panel of Fig. <ref>. Even though the steady state is the same and does not depend on the initial amount of symmetry breaking, we can still observe the QMPE, as the discussion above and Fig. <ref> have proven. This scenario is more similar to the one usually considered in the classical Mpemba effect, like in Ref. <cit.>, in which the relaxation of different initial states to a common equilibrium state is studied. §.§ Local dephasing So far, we have focused our attention on a dissipative evolution induced by gain and loss terms. We now want to study the effect of another source of dissipation, local dephasing, modeled by the jump operator L_j,z=√(γ_d)a^†_ja_j. In this case, the dynamics does not respect the Gaussianity of the initial state. However, we can take as an approximation of ρ(t) the Gaussian state determined by the two-point correlation matrix (<ref>). Under this assumption, we can calculate the charged moments and, therefore, the entanglement asymmetry with Eq. (<ref>). To this end, we will first derive the equations of motion for the two-point functions G_mn=⟨ a^†_ma_n ⟩ and F_mn=⟨ a^†_ma^†_n ⟩ in systems of free fermions with dephasing, building upon the findings in Ref. <cit.>. Our starting point is the introduction of 2N Majorana fermions, expressed as c_2m-1=a_m+a_m^†, c_2m=i(a_m-a_m^†), which satisfy the anticommutation relations { c_k , c_l } =2 δ_k,l for all k,l=1,…, 2N. In terms of them, the most general quadratic Hamiltonian and Lindblad operators are of the form H=i/4∑_k,l=1^2NH_kl c_k c_l , L_α=i/4∑_k,l=1^2NL_α,kl c_k c_l, where the condition H_lk=-H_kl is required for the Hamiltonian H to be Hermitian.
Furthermore, it is convenient to introduce ordered strings of Majorana operators Γ_ν=c_1^ν_1 …c_2N^ν_2N, ν_i ∈{ 0,1 }, where ν = (ν_1,…,ν_2N) denotes whether the corresponding Majorana operator c_i is present in the string Γ_ν. Finally, one can define superoperators ĉ_j, ĉ_j^† acting on these strings that create or annihilate Majorana operators at position j as ĉ_j Γ_ν = δ_1,ν_jπ_jΓ_ν', ĉ_j^† Γ_ν = δ_0,ν_jπ_jΓ_ν', where ν'_i = 1-ν_i if i=j and ν'_i = ν_i if i≠j. The sign factor π_j= exp( iπ∑_k=1^j-1ν_k) is introduced to guarantee that they satisfy the canonical anticommutation relations {ĉ_i , ĉ_j^†} = δ_i,j and {ĉ_i , ĉ_j }=0. As already seen in Refs. <cit.>, using these superoperators, the Liouvillian defined in Eq. (<ref>) can be rewritten as ℒ^† = -∑_k,l=1^2N H̃_kl ĉ^†_k ĉ_l + 1/2 ∑_i,j,k,l=1^2N ∑_α L^T_α,ij L_α,kl ĉ^†_iĉ^†_k ĉ_jĉ_l, provided the Lindblad operators satisfy L_α^† = L_α for all α. In this expression, H̃_kl are the entries of the matrix H̃= H+1/2∑_α L^T_α L_α, which includes the Hamiltonian evolution, in our case given by Eq. (<ref>) with δ=0, h=0, and a damping term, responsible for the decay of the n-point functions. This part, which is of the same form as in the case of linear gain and loss dissipation, is quadratic and respects the Gaussianity of the initial state. However, the second term in Eq. (<ref>) is quartic in the superoperators and, consequently, the time evolved density matrix ρ(t) is not Gaussian. In general, the Liouvillian (<ref>) cannot be diagonalized, although in the case of local dephasing it further simplifies, allowing us to calculate the evolution of the two-point correlators efficiently. The ingredient that we need for deriving the equations of motion for them is the action of the Lindbladian (<ref>) on any string of Majorana operators, ℒ(Γ_ν). Following the prescriptions in Ref. <cit.>, a simple calculation yields the following Liouvillian ℒ_a = ∑_m [ -i ( â_m^†â_m+1 + â_m+1^†â_m ) -γ_d/2 â_m^†â_m ], ℒ_b = ∑_m [ i ( b̂_m^†b̂_m+1 + b̂_m+1^†b̂_m ) -γ_d/2 b̂_m^†b̂_m ], ℒ_ab = -∑_m γ_d b̂_m^†b̂_m â_m^†â_m, where we have introduced the canonical transformation for the superoperators: â_m = 1/√(2)(ĉ_2m-1 - iĉ_2m), b̂_m = 1/√(2)(ĉ_2m-1 + iĉ_2m). Then one needs to evaluate the action of all the elements in Eq. (<ref>) on a_m^†a_n and a_m^†a^†_n. Observing that â_m a^†_k=1/√(2)δ_mk, â_ma_k=0, â^†_m=√(2)a^†_m, b̂_m a^†_k=0, b̂_ma_k=1/√(2)δ_mk, b̂^†_m=√(2)a_m, one gets dG_m,n/dt = -i ( G_m-1,n + G_m+1,n - G_m,n-1 - G_m,n+1) - γ_d( G_m,n-δ_mn G_m,n), and similarly dF_m,n/dt = -i ( F_m-1,n + F_m+1,n ) - γ_d/2 F_m,n, where we have used dG_m,n/dt=⟨ℒ_d(a_m^†a_n)⟩ and dF_m,n/dt=⟨ℒ_d(a_m^†a^†_n)⟩. Since both Eqs. (<ref>) and (<ref>) are linear first order differential equations, they can be treated efficiently as a matrix eigenvalue problem. By numerically solving them, we can build the correlation matrix defined in Eq. (<ref>), from which we compute the charged moments using Eq. (<ref>). Plugging this result in Eq. (<ref>), we obtain the main result of this section, which is the time evolution of the entanglement asymmetry in the presence of local dephasing. We show the dynamics of the n=2 Rényi entanglement asymmetry in Fig. <ref>, where each panel corresponds to a different value of the dephasing rate γ_d=0.01,0.02,0.05. In all the panels, we consider a subsystem A of length ℓ=20 in a system of size N=10ℓ and four initial conditions, corresponding to the ground state of the XY Hamiltonian in Eq. (<ref>) with different parameters δ and h.
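Before commenting further on Fig. <ref>, we note that Eqs. (<ref>) and (<ref>) are easily integrated with any standard ODE routine. The sketch below transcribes them exactly as written (with open boundaries for simplicity) and propagates placeholder initial matrices with a fourth-order Runge-Kutta step; in the actual calculation the initial G and F would be the ground-state correlations of the XY chain restricted to the system.

import numpy as np

def rhs(G, F, T, gd):
    # Eqs. (<ref>)-(<ref>) transcribed literally; T has ones on the first off-diagonals
    dG = -1j * (T @ G - G @ T) - gd * (G - np.diag(np.diag(G)))
    dF = -1j * (T @ F) - 0.5 * gd * F
    return dG, dF

def rk4_step(G, F, T, gd, dt):
    k1G, k1F = rhs(G, F, T, gd)
    k2G, k2F = rhs(G + 0.5*dt*k1G, F + 0.5*dt*k1F, T, gd)
    k3G, k3F = rhs(G + 0.5*dt*k2G, F + 0.5*dt*k2F, T, gd)
    k4G, k4F = rhs(G + dt*k3G, F + dt*k3F, T, gd)
    return (G + dt/6*(k1G + 2*k2G + 2*k3G + k4G),
            F + dt/6*(k1F + 2*k2F + 2*k3F + k4F))

N, gd, dt, nsteps = 40, 0.05, 0.02, 500
T = np.diag(np.ones(N-1), 1) + np.diag(np.ones(N-1), -1)
G = 0.5*np.eye(N) + 0.1*np.diag(np.ones(N-1), 1) + 0.1*np.diag(np.ones(N-1), -1) + 0j   # placeholder
F = 0.1 * (np.diag(np.ones(N-1), 1) - np.diag(np.ones(N-1), -1)) + 0j                    # placeholder
for _ in range(nsteps):
    G, F = rk4_step(G, F, T, gd, dt)
print("max |G_offdiag| :", np.abs(G - np.diag(np.diag(G))).max())
print("max |F|         :", np.abs(F).max())   # anomalous correlations are damped by dephasing

With the time-evolved G and F at hand, the correlation matrix (<ref>) and hence the entanglement asymmetry follow as described above. Returning now to Fig. <ref>: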
For comparison, we report the unitary dynamics without dissipation (γ_d=0) as dashed lines. As in the case of gain an loss, see Fig. <ref>, the local dephasing induces a decrease in the entanglement asymmetry. This effect is more marked for the initial states with h>1 than for those with h<1, for which we need larger dephasing rates to see a clear difference with respect to the unitary dynamics (dashed lines). As a result, the time at which the entanglement asymmetries intersect may be shifted towards larger times. A similar suppression of the entanglement asymmetry with a global dephasing has been observed in the experimental work <cit.>, where an ion trap simulates the dynamics of a XX spin chain with long-range couplings that is initially prepared in a tilted ferromagnetic state. These configurations are precisely the ground state of the XY Hamiltonian (<ref>) along the curve δ^2+h^2=1 <cit.>. In the experimental setting, the crossing of the curves describing the entanglement asymmetry remains almost unaffected. § GLOBAL QUENCH FROM A FINITE TEMPERATURE STATE In this section, we consider the evolution of the entanglement asymmetry in a quantum quench from the XY spin chain (<ref>) at a finite temperature 1/β to the XX spin chain. We keep the whole system isolated from the environment. In this case, the system is initially described by the Gibbs ensemble ρ_β=e^-β H_ XY/Z, where Z= Tr(e^-β H_ XY). After the quench, this state evolves unitarily as ρ_β(t)=e^-it H_ XXρ_β e^itH_ XX. Since both the initial state ρ_β and the post-quench Hamiltonian are Gaussian, the dynamics of the system is fully encoded in the spatial two-point fermionic correlations (<ref>). For the thermal state, the two-point correlation matrix in the thermodynamic limit N→∞ is also a block Toeplitz matrix of the form of Eq. (<ref>) where 𝒢_0(k, t) is replaced by the 2× 2 symbol 𝒢_β(k, t)=C_β, k(cosΔ_k σ^z + sinΔ_k σ^y e^2it ϵ_k σ^y), and C_β, k=tanh(βϵ_k^ XY/2) and ϵ_k^ XY is the dispersion relation of the Hamiltonian of the XY spin chain (<ref>), ϵ_k^ XY=√((h-cos k)^2+δ^2sin^2k). §.§ Entanglement asymmetry at finite temperature Let us first analyze the entanglement asymmetry in the Gibbs ensemble ρ_β. To obtain the asymptotic behavior of the charged moments Z_n(α, β), we can apply its expression (Eq. (<ref>)) in terms of the two-point correlation matrix together with Eq. (<ref>), specialized to the finite temperature symbol (Eq. (<ref>)) at t=0. We find that Z_n(α, β)= Z_n(0, β) e^B_n(α, β)ℓ, where the coefficient B_n(α, β) reads for n=2 B_2(α, β)=∫_0^2π d k/4πlog[1-4sin^2αsin^2Δ_k/(C_β, k^-1+C_β, k)^2] and, for n=3, B_3(α, β)=∫_0^2π dk/4πlog[1 - 4 C_β, k^2(1 + C_β, k^2) sin^2Δ_k ∑_j=1^3sin^2α_j, j+1 - 16 i cosΔ_ksin^2Δ_k C_β, k^3 ∏_j=1^3sinα_j, j+1/(1 + 3 C_β, k^2)^2]. The expression of B_n(α, β) is more and more involved as we increase n and we have not been able to get a closed analytic form for integer n as in the zero temperature limit. In fact, when β→∞, one recovers the result for the ground state, Eq. (<ref>) at t=0, which decomposes in the sum over the replicas n that we consider. Unfortunately, at finite β, B_n(α,β) does not generically satisfy such decomposition, as also happens for Eq. (<ref>). To obtain the Rényi entanglement asymmetry from the charged moments in Eq. (<ref>), we plug them into Eq. (<ref>). The n-fold integral can be exactly computed for large subsystem size ℓ doing a saddle point approximation. The calculation follows the same steps as in the ground state case described in detail in Ref. 
<cit.>. The final result is Δ S_A^(n)(δ, h,β)=1/2logℓ +1/2logπ g_β^(n)(δ, h)n^1/1-n/4+O(ℓ^-1). As in the ground state, see Eq. (<ref>), the entanglement asymmetry grows logarithmically with the subsystem size ℓ and the same coefficient 1/2, related to the dimension of the U(1) group. The temperature modifies the ℓ-independent function g_β^(n)(δ, h), cf. Eq. (<ref>), which now depends on the Rényi index n and the temperature in a non-trivial way, as a consequence of the non factorization of the charged moments (Eq. (<ref>)) in the replica space. For n=2, we obtain g_β^(2)(δ, h)=∫_0^2π dk/2πtanh^2(βϵ_k^ XY)sin^2Δ_k(δ, h), while, for n=3, g_β^(3)(δ, h)=∫_0^2π dk/2π8C_β, k^2(1+C_β, k^2)sin^2Δ_k(δ, h)/(1+3C_β, k^2)^2. Notice that, in the limit β→∞, we recover the prediction for the ground state in Eq. (<ref>). In the infinite temperature limit, β=0, the Gibbs ensemble reduces to the (normalized) identity, ρ_β=0=2^-N I, which commutes with the charge Q, and, therefore, Δ S_A^(n)(δ, h, β=0)=0. However, if we take β=0 in the saddle point approximation (Eq. (<ref>)), the expression diverges. The reason is that the limits ℓ→∞ and β→ 0 do not commute. To obtain the behavior of the Rényi entanglement asymmetry at large temperatures, we can expand the expression (<ref>) for the charged moments around β=0 keeping ℓ finite. If we truncate the expansion at order O(β^2), then all the integrals can be straightforwardly calculated. We eventually find that, in the limit β→ 0, the Rényi entanglement asymmetry behaves for n=2 as Δ S_A^(2)(δ, h,β)≃ℓδ^2 β^2/8 and for n=3 Δ S_A^(3)(δ, h, β)≃3ℓδ^2β^2/16. In Fig. <ref>, we represent the n=2 Rényi entanglement asymmetry as a function of the inverse temperature β for different values of the couplings δ and h for a subsystem of length ℓ=80. The symbols are the exact entanglement asymmetry calculated using Eq. (<ref>), the solid curves correspond to the asymptotic expression of Eq. (<ref>), and the dashed curves have been obtained calculating exactly the Fourier transformation (Eq. (<ref>)) of the charged moments in Eq. (<ref>) without taking the saddle point approximation. We observe that the entanglement asymmetry is a monotonic decreasing function of the temperature that vanishes, as expected, in the limit β→0, at which the symmetry is recovered. §.§ Entanglement asymmetry after the quench We now move on to the evolution of the Rényi entanglement asymmetry after a quench to the XX spin chain from the Gibbs ensemble ρ_β of the XY Hamiltonian. When N→∞, the U(1) particle number symmetry initially broken by ρ_β is locally restored at t→∞. In fact, in that limit, the time dependent term e^i 2t ϵ_k σ^y in the symbol (<ref>) of the time-evolved two-point correlation matrix averages to zero and 𝒢_β(k,t→∞)= C_β, kcosΔ_k σ^z. If we calculate the charged moments using this symbol and applying Eqs. (<ref>) and (<ref>), we find that Z_n(α,β, t→∞)=Z_n(0, β, t→∞), which implies that the Rényi entanglement asymmetry vanishes at t→∞. To derive the exact full time evolution of Z_n(α, β, t) after the quench, we can proceed as explained in Sec. <ref> for the case of gain and loss dissipation and resort to the quasiparticle picture. According to it, the time evolved charged moments are given by Eq. (<ref>) upon replacing the symbol 𝒢(k, t→ 0) in the matrix ℳ_α^(n)(k,t→0) by the finite temperature symbol 𝒢_β(k,t=0) at time zero. Therefore, in the hydrodynamic limit t,ℓ→∞ with t/ℓ fixed, Z_n(α, β, t)=Z_n(0, β, t) e^B_n(α, β, ζ)ℓ. 
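Before turning to the explicit n=2 expression of B_n(α, β, ζ), it is instructive to see how the temperature enters at t=0 through the coefficient g_β^(2)(δ, h) of Eq. (<ref>). The short sketch below evaluates it on a grid of β; expanding the integrand at small β gives g_β^(2)≈δ^2β^2/2, independent of h, which is consistent with the h-independence of the β→0 expressions above (the grid sizes and couplings are arbitrary illustrative choices).

import numpy as np

def g_beta_2(delta, h, beta, nk=4000):
    # g_beta^(2) = int dk/(2 pi) tanh^2(beta * eps_k^XY) sin^2 Delta_k
    k    = 2 * np.pi * (np.arange(nk) + 0.5) / nk
    eps2 = (h - np.cos(k))**2 + (delta * np.sin(k))**2
    sin2 = (delta * np.sin(k))**2 / eps2
    return (np.tanh(beta * np.sqrt(eps2))**2 * sin2).mean()

delta, h = 0.5, 1.2
for beta in [0.05, 0.2, 1.0, 5.0, 50.0]:
    print(f"beta = {beta:6.2f}   g_beta^(2) = {g_beta_2(delta, h, beta):.5f}"
          f"   small-beta estimate delta^2 beta^2 / 2 = {delta**2 * beta**2 / 2:.5f}")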
The explicit expression of B_n(α,β, ζ) for n=2 is B_2(α, β, ζ)=∫_0^2π d k/4π x_k(ζ) log[1-4sin^2Δ_k sin^2α/(C_β, k^-1+C_β, k)^2], and for n=3 B_3(α, β, ζ)=∫_0^2π dk/4π x_k(ζ)log[1 - 4 C_β, k^2(1 + C_β, k^2) sin^2Δ_k ∑_j=1^3sin^2α_j, j+1 - 16 i cosΔ_ksin^2Δ_k C_β, k^3 ∏_j=1^3sinα_j, j+1/(1 + 3 C_β, k^2)^2]. As we have already explained, the term x_k(ζ) in B_n(α, β, t) accounts for the number of entangled pairs of quasiparticles created at the quench and with opposite velocity ± v_k that are inside the subsystem A at time t. The other term in the integrand is the contribution of these excitations to the ratio Z_n(α, β, t)/Z_n(0,β, t) which, since in this case the evolution is unitary, is determined by their value at t=0, compare Eqs. (<ref>) and (<ref>) with Eqs. (<ref>) and (<ref>). Inserting Eq. (<ref>) in Eq. (<ref>), we obtain the time evolution of the Rényi entanglement asymmetry after the quench. In Fig. <ref>, we represent the Rényi entanglement asymmetry for n=2 as a function of time for different values of the temperature in the initial chain, β=5, 2.5 and 1, taking three different sets of couplings (δ, h) of the XY Hamiltonian. The solid curves have been obtained with the quasiparticle prediction for the charged moments as in Eq. (<ref>) and the symbols are the exact value of the entanglement asymmetry calculated numerically using the determinant formula in Eq. (<ref>). For comparison, we also include as dashed lines the time evolution from the corresponding ground states, that is, the limit β→∞. The pairs of couplings (0.5, 0.6) and (0.5, 1.2) or (0.6, 1.1) satisfy the conditions of Eq. (<ref>) and, in a quench from the corresponding ground states to the XX spin chain, their Rényi entanglement asymmetries intersect at a finite time, indicating the occurrence of the QMPE. On the other hand, the couplings (0.5, 1.2) and (0.6, 1.1) violate Eqs. (<ref>) and their ground state entanglement asymmetries do not intersect; therefore, they do not show QMPE. When quenching from a chain at finite temperature, we observe that the QMPE still occurs at low enough temperatures for those pairs of couplings for which there is QMPE at zero temperature (left panel of Fig. <ref>). However, as we take a larger temperature, the difference between their entanglement asymmetries at late times progressively decreases. There is a specific temperature from which the entanglement asymmetries do not cross anymore and the QMPE disappears. To clarify this phenomenology and establish the conditions for the occurrence of the QMPE in the quench from the Gibbs ensemble, let us analyze the large time behavior of the Rényi entanglement asymmetry as in the previous sections. Again, in the coefficient B_n(α, β, ζ) of the charged moments, the term x_k(ζ) vanishes for 2ζ|v_k|>1 and, therefore, only the modes with the slowest group velocity, those around k=0 and π, contribute when we approach the equilibrium. We can then expand the logarithm function in Eqs. (<ref>) and (<ref>) around k=0 and π. Since B_n(α, β, ζ)→ 0 at ζ→∞, we can further take the Taylor expansion of the exponential function in Eq. (<ref>) and, truncating it at the first order term, calculate exactly the integral in α, as we have done in Eq. (<ref>) for the case of gain and loss dissipation. 
We eventually find Δ S_A^(n)(t)≃n ℓ/n-1∫_-k^*_ζ^k^*_ζ dk/4π (1-2ζ|v_k|)Υ_β, k^(n)(δ, h), where the function Υ_β, k^(n)(δ, h) for n=2 is Υ_β,k^(2)(δ, h)=sin^2Δ_k(δ, h)/(C_β, k(δ, h)^-1+C_β, k(δ, h))^2 +sin^2Δ_k(δ, -h)/(C_β, k(δ, -h)^-1+C_β, k(δ, -h))^2, and for n=3 Υ_β, k^(3)(δ, h)=C_β, k(δ, h)^2(1+C_β, k(δ, h)^2)sin^2Δ_k(δ, h)/(1+3 C_β, k(δ, h)^2)^2 +C_β, k(δ, -h)^2(1+C_β, k(δ, -h)^2)sin^2Δ_k(δ, -h)/(1+3 C_β, k(δ, -h)^2)^2. The function Υ_β,k^(n)(δ, h) represents the contribution of the slowest modes to the entanglement asymmetry at large times. As clear from Eqs. (<ref>) and (<ref>), its explicit expression depends on the Rényi index n in a very non-trivial way that we have not been able to disentangle. In the zero temperature limit β→∞, Υ_β, k^(n)(δ, h) tends to Υ_k(δ, h), which was defined in Eq. (<ref>) and is independent of n. Let us now consider two different initial Gibbs ensembles ρ_β(δ_1, h_1) and ρ_β(δ_2, h_2) for which Δ S_A^(n)(δ_1, h_1, β)>Δ S_A^(n)(δ_2, h_2,β). If we plug the asymptotic expression of their entanglement asymmetries at zero time, Eq. (<ref>), in the previous inequality and the one at late times, Eq. (<ref>), in the condition (<ref>) for the QMPE, we conclude that these are satisfied if and only if {[ g_β^(n)(δ_1, h_1)> g_β^(n)(h_2, δ_2),; ; ∫_-k^*_ζ^k^*_ζ d k/2πΥ_β, k^(n)(δ_1, h_1) <∫_-k^*_ζ^k^*_ζ d k/2πΥ_β, k^(n)(δ_2, h_2), ]. for t>t_I. The first striking feature of this result is that, due to the intricate dependence of g_β^(n)(δ, h) and Υ_β, k^(n)(δ, h) on the Rényi index n, the conditions in Eq. (<ref>) may be satisfied for some n but not for others. To see this, we can take two XY spin chains whose ground states show QMPE, thus Δ S_A^(n)(δ_1, h_1, β→∞, t)<Δ S_A^(n)(δ_2, h_2, β→∞, t) for t>t_I, and then obtain the initial temperature β_ M for which their large time entanglement asymmetries equate, Δ S_A^(n)(δ_1, h_1, β_ M, t)=Δ S_A^(n)(δ_2, h_2, β_ M, t). Solving that equation using the large time expression of Eq. (<ref>), or by equating the second condition in Eq. (<ref>), is generally difficult. A much easier way is by rewriting Eq. (<ref>) in a more explicit form as a function of the couplings h, δ and the inverse temperature β of the initial chain. In fact, expanding in Eq. (<ref>) the velocity v_k and Υ_β, k(δ, h) around k=0 and k^*_ζ as k^*_ζ≃ 1/(2ζ), we obtain that the Rényi entanglement asymmetry decays as ℓ^4/t^3 from any Gibbs ensemble ρ_β. In particular, for n=2, Δ S_A^(2)(t)≃ℓδ^2 /384 (-1 + h^2)^2 πζ^3[(1 + h)^2 . .×tanh^2((-1 + h)β)+ (-1 + h)^2 tanh^2((1 + h) β)]. and, for n=3, Δ S_A^(3)(t)≃ℓδ^2/512(1-h^2)^2πζ^3[2+2h^2. .-(1+h)^2/(1-2cosh((-1+h)β))^2 -(-1+h)^2/(1-2cosh((1+h)β))^2]. Let us take now the XY spin chain along the curve δ^2+h^2=1. As we already mentioned, this family of chains is peculiar because their ground states are the tilted ferromagnetic configurations, which have been the prototypical instance of initial pure states to investigate the QMPE theoretically <cit.> and experimentally <cit.>. For h≥ 0, any pair of ground states in this curve shows Mpemba effect. Therefore, we can take δ_1=1, h_1=0, whose ground state breaks the most the symmetry along the curve, and find the β_M that solves the equation Δ S_A^(n)(1, 0, β_ M, t)=Δ S_A^(n)(δ, √(1-δ^2), β_ M, t) using Eq. (<ref>) for n=2 and Eq. (<ref>) for n=3. In Fig. <ref>, we represent the values of β_ M that we obtain as a function of h. 
According to this plot, for inverse temperatures β>β_ M and fixed n, the entanglement asymmetries in a quench from h_1=0, δ_1=1 and from another chain with δ<1 and h=√(1-δ^2) always cross and there is QMPE, while for β<β_ M this phenomenon disappears. As we can see, β_ M depends on the Rényi index n, implying that there exists an interval of β for which the different Rényi entanglement asymmetries give opposite answers as to whether the Mpemba effect occurs. We conclude this section by proving that the QMPE disappears when the temperature of the chains is very large for any pair of couplings h, δ. In Eqs. (<ref>) and (<ref>), we obtained the expression for the n=2 and 3 Rényi entanglement asymmetries at large temperatures before the quench. The late time behavior of the entanglement asymmetries for large temperatures can be derived by expanding Eqs. (<ref>) and (<ref>) around β=0. We find Δ S_A^(2)(δ, h, β, t)≃ℓδ^2 β^2/192πζ^3 for n=2 and Δ S_A^(3)(δ, h, β, t)≃ℓδ^2β^2/128 πζ^3 for n=3. We observe that both at t=0 and t→∞ the entanglement asymmetries do not depend on the initial external magnetic field h and they grow monotonically with the anisotropy parameter δ. Therefore, in this case, the entanglement asymmetries of two different initial Gibbs ensembles ρ_β(δ_1, h_1) and ρ_β(δ_2, h_2) never satisfy the conditions of Eqs. (<ref>) and (<ref>) for the occurrence of the QMPE. § CONCLUSIONS Quantum systems are commonly described by mixed states, especially when they are subject to manipulations and interactions with the external environment. For instance, this is what happens in the observation of the QMPE in the trapped-ion quantum simulator studied in Ref. <cit.>. For this reason, the goal of this manuscript has been to investigate how the mixedness of a state affects the restoration of a U(1) symmetry and the occurrence of the QMPE. We have tackled this question by considering three distinct cases. In Sec. <ref>, we have explored a scenario where an initially pure state evolves into a mixed state due to gain and loss dissipation. It turns out that, if gain and loss are balanced, they do not alter the conditions for the occurrence of the QMPE compared to the unitary case (see Fig. <ref>). The only difference is that the entanglement asymmetry decays exponentially in time, rather than algebraically. However, generic gain and loss terms can affect the occurrence of the QMPE as they shift the instant at which the crossing of the entanglement asymmetries happens. These findings are supported by numerical calculations as well as analytical predictions, derived by extending the quasiparticle picture to the weakly-dissipative hydrodynamic regime. One important difference with respect to the unitary evolution is that, in the presence of gain and loss dissipation, the stationary state of the subsystem does not depend on its initial state. This is analogous to the classical Mpemba effect, in which the final equilibrium state is the same for all the initial conditions. A very similar phenomenology is observed in the presence of local dephasing, i.e. local rotations of the spins around the z-axis (see Fig. <ref>). Indeed, the main lesson we learn from Sec. <ref> is that this noise makes the entanglement asymmetry decrease and can shift the crossing time at which the QMPE occurs towards larger times as the dephasing rate increases, depending on the pair of initial states considered.
To this end, we have also developed the equations of motion for the two-point functions in generic systems of free fermions with dephasing. The third scenario that we have considered is a global quantum quench from a system at a finite temperature, which is described by the Gibbs ensemble. In this case, the origin of the mixedness of the state is the configuration at time t=0, while the dynamics is purely unitary. In Sec. <ref>, we have first studied the entanglement asymmetry in the initial system and we have analyzed how it behaves between the ground state and the infinite temperature limit, where the symmetry is recovered. After the quench, we have observed that there exists a critical temperature above which the QMPE disappears (see Fig. <ref>). Even though its value depends on the specific Rényi index of the entanglement asymmetry, we expect that the critical temperature increases as n increases. There are several future directions one could explore starting from our manuscript. For instance, a non-unitary dynamics can arise from local measurements performed during the time evolution, followed by post-selection of specific measurement outcomes. The action of this non-unitary evolution on the entanglement entropy has been studied in Ref. <cit.> for the XY model (<ref>) and it would be interesting to compare the effect of measurements with the one induced by dissipation. Unlike the entanglement entropy, which is not a good measure of entanglement in mixed states, the entanglement asymmetry does quantify how much these states break a symmetry; nevertheless, it would be interesting to explore whether one can define a probe of symmetry breaking based on entanglement measures of mixed states, such as entanglement negativity <cit.>. An analytical challenge that our work leaves open is understanding why the charged moments defined in Eq. (<ref>) do not factorize in the replica space, as happens in the absence of dissipation or at zero temperature. This prevents us from finding an analytical expression of the asymmetry for a generic index n and taking the replica limit. Finally, even though so far we have focused on free systems, a natural extension of our findings would be studying the entanglement asymmetry under a non-unitary dynamics or with an initial thermal state in interacting systems. §.§ Acknowledgments We thank Vincenzo Alba, Fabio Caceffo, Pasquale Calabrese and Colin Rylands for useful discussions and collaborations on related topics. FA acknowledges support from ERC under Consolidator Grant number 771536 (NEMO). SM thanks the support from the Caltech Institute for Quantum Information and Matter and the Walter Burke Institute for Theoretical Physics at Caltech. VV acknowledges support from the French National Research Agency via QUBITAF (ANR-22-PETQ-0004, Plan France 2030). amc-23 F. Ares, S. Murciano, and P. Calabrese, Entanglement asymmetry as a probe of symmetry breaking, https://doi.org/10.1038/s41467-023-37747-8Nat. Comms. 14, 2036 (2023). mpemba-69 E. B. Mpemba and D. G. Osborne, Cool? https://iopscience.iop.org/article/10.1088/0031-9120/4/3/312Phys. Educ. 4, 172 (1969). ahn16 Y. H. Ahn, H. Kang, D. Y. Koh, and H. Lee, Experimental verifications of Mpemba-like behaviors of clathrate hydrates, https://link.springer.com/article/10.1007/s11814-016-0029-2Korean Jour. of Chem. Engin. 33, 1903 (2016). hu18 C. Hu, J. Li, S. Huang, H. Li, C. Luo, J. Chen, S. Jiang, and L.
An, Conformation Directed Mpemba Effect on Polylactide, https://doi.org/10.1021/acs.cgd.8b01250Crystallization, Cryst. Growth Des. 18, 5757 (2018). chaddah10 P. Chaddah, S. Dash, K. Kumar, and A. Banerjee, Overtaking while approaching equilibrium, https://doi.org/10.48550/arXiv.1011.3598arXiv:1011.3598. greaney11 A. Greaney, G. Lani, G. Cicero, and J. C. Grossman, Mpemba-Like Behavior in Carbon Nanotube Resonators, https://link.springer.com/article/10.1007/s11661-011-0843-4Metal. and Mat. Trans. A 42, 3907 (2011). lasanta17 A. Lasanta, F. Vega Reyes, A. Prados, and A. Santos, When the Hotter Cools More Quickly: Mpemba Effect in Granular Fluids, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.148001Phys. Rev. Lett. 119, 148001 (2017). keller18 T. Keller, V. Torggler, S. B. Jäger, S. Schutz, H. Ritsch, and G. Morigi, Quenches across the self-organization transition in multimode cavities, https://iopscience.iop.org/article/10.1088/1367-2630/aaa161New J. Phys. 20, 025004 (2018). lr-17 Z. Lu and O. Raz, Nonequilibrium thermodynamics of the Markovian Mpemba effect and its inverse, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5441807/PNAS 114, 5083 (2017). krhv-19 I. Klich, O. Raz, O. Hirschberg, and M. Vucelja, The Mpemba index and anomalous relaxation, https://doi.org/10.1103/PhysRevX.9.021060Phys. Rev. X 9, 021060 (2019). wv-22 M. R. Walker and M. Vucelja, Mpemba effect in terms of mean first passage time, https://arxiv.org/abs/2212.07496arXiv:2212.07496. tyr-23 G. Teza, R. Yaacobu, and O. Raz, Relaxation shortcuts through boundary coupling, https://doi.org/10.1103/PhysRevLett.131.017101Phys. Rev. Lett. 131, 017101 (2023). wbv-23 M. R. Walker, S. Bera, and M. Vucelja, Optimal transport and anomalous thermal relaxations, https://doi.org/10.48550/arXiv.2307.16103arXiv:2307.16103. bwv-23 S. Bera, M. R. Walker, and M. Vucelja, Effect of dynamics on anomalous thermal relaxations and information exchange, https://doi.org/10.48550/arXiv.2308.04557arXiv:2308.04557. brp-23 A. Biwas, R. Rajesh, and A. Pal, Mpemba effect in a Langevin system: Population statistics, metastability, and other exact results, https://doi.org/10.1063/5.0155855J. Chem. Phys. 159, 044120 (2023). bp-24 A. Biswas and A. Pal, Mpemba effect on non-equilibrium active Markov chains, https://doi.org/10.48550/arXiv.2403.17547arXiv.2403.17547. kb-20 A. Kumar and J. Bechhoefer, Exponentially faster cooling in a colloidal system, https://www.nature.com/articles/s41586-020-2560-xNature 584, 64 (2020). kcb-22 A. Kumar, R. Chétrite, and J. Bechhoefer, Anomalous heating in a colloidal system, https://doi.org/10.1073/pnas.2118484119PNAS 119, e2118484119 (2022) makc-23 S. Murciano, F. Ares, I. Klich, and P. Calabrese, Entanglement asymmetry and quantum Mpemba effect in the XY spin chain, https://doi.org/10.1088/1742-5468/ad17b4J. Stat. Mech. (2024) 013103. yac-24 S. Yamashika, F. Ares, and P. Calabrese, Entanglement asymmetry and quantum Mpemba effect in two-dimensional free-fermion systems, https://doi.org/10.48550/arXiv.2403.04486arXiv.2403.04486. carc-24 K. Chalas, F. Ares, C. Rylands, and P. Calabrese, Multiple crossing during dynamical symmetry restoration and implications for the quantum Mpemba effect, https://doi.org/10.48550/arXiv.2405.04436arXiv:2405.04436 rkacmb-23 C. Rylands, K. Klobas, F. Ares, P. Calabrese, S. Murciano, and B. Bertini, Microscopic origin of the quantum Mpemba effect in integrable systems, https://arxiv.org/abs/2310.04419arXiv:2310.04419. bkccr-23 B. Bertini, K. Klobas, M. Collura, P. Calabrese, and C. 
Rylands Dynamics of charge fluctuations from asymmetric initial states, https://doi.org/10.48550/arXiv.2306.12404arXiv:2306.12404. liu_mpemba_circuit-24 S. Liu, H.-K. Zhang, S. Yin, and S.-X. Zhang, Symmetry restoration and quantum Mpemba effect in symmetric random circuits, https://doi.org/10.48550/arXiv.2403.08459arXiv:2403.08459. joshi-24 L. Kh Joshi, J. Franke, A. Rath, F. Ares, S. Murciano, F. Kranzl, R. Blatt, P. Zoller, B. Vermersch, P. Calabrese, C. F. Roos, and M. K. Joshi, Observing the quantum Mpemba effect in quantum simulations, https://arxiv.org/abs/2401.04270arXiv:2401.04270. quantum1 A. Nava and M. Fabrizio, Lindblad dissipative dynamics in the presence of phase coexistence, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.100.125102Phys. Rev. B 100, 125102 (2019). quantum2 S. Kochsiek, F. Carollo, and I. Lesanovsky, Accelerating the approach of dissipative quantum spin systems towards stationarity through global spin rotations, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.106.012207Phys. Rev. A 106, 012207 (2022). quantum3 F. Carollo, A. Lasanta, and I. Lesanovsky, Exponentially Accelerated Approach to Stationarity in Markovian Open Quantum Systems through the Mpemba Effect, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.127.060401Phys. Rev. Lett. 127, 060401 (2021). quantum4 S. K. Manikandan, Equidistant quenches in few-level quantum systems, https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.3.043108Phys. Rev. Research 3, 043108 (2021). quantum5 F. Ivander, N. Anto-Sztrikacs, and D. Segal, Hyper-acceleration of quantum thermalization dynamics by bypassing long-lived coherences: An analytical treatment, https://doi.org/10.1103/PhysRevE.108.014130Phys. Rev. E 108, 014130 (2023). quantum6 A. K. Chatterjee, S. Takada, and H. Hayakawa, Quantum Mpemba effect in a quantum dot with reservoirs, https://doi.org/10.1103/PhysRevLett.131.080402Phys. Rev. Lett. 131, 080402 (2023). cth-23-2 A. K. Chatterjee, S. Takada, and H. Hayakawa, Multiple quantum Mpemba effect: exceptional points and oscillations, https://doi.org/10.48550/arXiv.2311.01347arXiv:2311.01347. spc-24 D. J. Strachan, A. Purkayastha, and S. R. Clark, Non-Markovian Quantum Mpemba effect, https://doi.org/10.48550/arXiv.2402.05756arXiv:2402.05756. mczg-24 M. Moroder, O. Culhane, K. Zawadzki, and J. Goold, The thermodynamics of the quantum Mpemba effect, https://doi.org/10.48550/arXiv.2403.16959arXiv:2403.16959. shapira-24 S. A. Shapira, Y. Shapira, J. Markov, G. Teza, N. Akerman, O. Raz, and R. Ozeri, The Mpemba effect demonstrated on a single trapped ion qubit, https://doi.org/10.48550/arXiv.2401.05830arXiv:2401.05830. zhang-exp-24 J. Zhang, G. Xia, C.-W. Wu, T. Chen, Q. Zhang, Y. Xie, W.-B. Su, W. Wu, C.-W. Qiu, P.-X. Chen, W. Li, H. Jing, and Y.-L. Zhou, Observation of quantum strong Mpemba effect, https://doi.org/10.48550/arXiv.2401.15951arXiv:2401.15951. cma-24 F. Caceffo, S. Murciano, and V. Alba, Entangled multiplets, asymmetry, and quantum Mpemba effect in dissipative systems, https://arxiv.org/abs/2402.02918arXiv:2402.02918. amvc-23 F. Ares, S. Murciano, E. Vernier, and P. Calabrese, Lack of symmetry restoration after a quantum quench: an entanglement asymmetry study, https://doi.org/10.21468/SciPostPhys.15.3.089SciPost Phys. 15, 089 (2023). fac-23 F. Ferro, F. Ares, and P. Calabrese, Non-equilibrium entanglement asymmetry for discrete groups: the example of the XY spin chain, http://dx.doi.org/10.1088/1742-5468/ad138fJ. Stat. Mech. (2024) 023101. cm-23 L. 
Capizzi, and M. Mazzoni, Entanglement asymmetry in the ordered phase of many-body systems: the Ising Field Theory, https://doi.org/10.1007/JHEP12(2023)144JHEP 12 (2023) 144. cv-23 L. Capizzi and V. Vitale, A universal formula for the entanglement asymmetry of matrix product states, https://doi.org/10.48550/arXiv.2310.01962arXiv:2310.01962. fadc-24 M. Fossati, F. Ares, J. Dubail, and P. Calabrese, Entanglement asymmetry in CFT and its relation to non-topological defects, https://doi.org/10.1007/JHEP05 chen-23 M. Chen and H. Chen Rènyi entanglement asymmetry in 1+1-dimensional conformal field theories, https://doi.org/10.1103/PhysRevD.109.065009Phys. Rev. D 109, 065009 (2024). ampc-23 F. Ares, S. Murciano, L. Piroli, and P. Calabrese, An entanglement asymmetry study of black hole radiation, https://doi.org/10.48550/arXiv.2311.12683arXiv:2311.12683. Khor-23 B. J. J. Khor, D. M. Kürkçüoglu, T. J. Hobbs, G. N. Perdue, and I. Klich, Confinement and Kink Entanglement Asymmetry on a Quantum Ising Chain, https://doi.org/10.48550/arXiv.2312.08601arXiv:2312.08601. lr-14 N. Laflorencie and S. Rachel, Spin-resolved entanglement spectroscopy of critical spin chains and Luttinger liquids, http://dx.doi.org/10.1088/1742-5468/2014/11/P11013J. Stat. Mech. (2014) P11013. gs-18 M. Goldstein and E. Sela, Symmetry-Resolved Entanglement in Many-Body Systems, http://dx.doi.org/10.1103/PhysRevLett.120.200602Phys. Rev. Lett. 120, 200602 (2018). xavier J. C. Xavier, F. C. Alcaraz, and G. Sierra, Equipartition of the entanglement entropy, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.98.041106Phys. Rev. B 98, 041106 (2018). brc-19 R. Bonsignori, P. Ruggiero, and P. Calabrese, Symmetry resolved entanglement in free fermionic systems, https://doi.org/10.1088/1751-8121/ab4b77J. Phys. A 52, 475302 (2019). mdgc-20 S. Murciano, G. Di Giulio, and P. Calabrese, Entanglement and symmetry resolution in two dimensional free quantum field theories, https://link.springer.com/article/10.1007/JHEP08(2020)073JHEP 08 (2020) 073. pbc-21 G. Parez, R. Bonsignori, and P. Calabrese, Quasiparticle dynamics of symmetry resolved entanglement after a quench: the examples of conformal field theories and free fermions, https://doi.org/10.1103/PhysRevB.103.L041104Phys. Rev. B 103, L041104 (2021). bcckr-23 B. Bertini, P. Calabrese, M. Collura, K. Klobas, and C. Rylands, Nonequilibrium Full Counting Statistics and Symmetry-Resolved Entanglement from Space-Time Duality, https://doi.org/10.1103/PhysRevLett.131.140401Phys. Rev. Lett. 131, 140401 (2023). mca-23 S Murciano, P. Calabrese, and V. Alba, Symmetry-resolved entanglement in fermionic systems with dissipation https://doi.org/10.1088/1742-5468/ad0224J. Stat. Mech. (2023) 113102. lukin-19 A. Lukin, M. Rispoli, R. Schittko, M. E. Tai, A. M. Kaufman, S. Choi, V. Khemani, J. Leonard, and M. Greiner, Probing entanglement in a many-body localized system, https://dx.doi.org/10.1126/science.aau0818Science 364, 6437 (2019). azses-20 D. Azses, R. Haenel, Y. Naveh, R. Raussendorf, E. Sela, and E. G. Dalla Torre, Identification of Symmetry-Protected Topological States on Noisy Quantum Computers, https://doi.org/10.1103/PhysRevLett.125.120502Phys. Rev. Lett. 125, 120502 (2020). neven-21 A. Neven, J. Carrasco, V. Vitale, C. Kokail, A. Elben, M. Dalmonte, P. Calabrese, P. Zoller, B. Vermersch, R. Kueng, and B. Kraus, Symmetry-resolved entanglement detection using partial transpose moments, https://doi.org/10.1038/s41534-021-00487-ynpj Quantum Info. 7, 1 (2021). vitale2022symmetry V. Vitale, A. 
Elben, R. Kueng, A. Neven, J. Carrasco, B. Kraus, P. Zoller, P. Calabrese, B. Vermersch, and M. Dalmonte, Symmetry-resolved dynamical purification in synthetic quantum matter, https://doi.org/10.21468/SciPostPhys.12.3.106SciPost Phys. 12, 106 (2022). rvm-22 A. Rath, V. Vitale, S. Murciano, M. Votto, J. Dubail, R. Kueng, C. Branciard, P. Calabrese, and B. Vermersch, Entanglement barrier and its symmetry resolution: theory and experiment, https://doi.org/10.1103/PRXQuantum.4.010318PRX Quantum 4, 010318 (2023). brydges2019probing T. Brydges, A. Elben, P. Jurcevic, B. Vermersch, C. Maier, B. P. Lanyon, P. Zoller, R. Blatt, and C. F. Roos, Probing Rényi entanglement entropy via randomized measurements, https://science.sciencemag.org/content/364/6437/260Science 364, 260 (2019). elben2020mixed A. Elben, R. Kueng, H. Y. R. Huang, R. van Bijnen, C. Kokail, M. Dalmonte, P. Calabrese, B. Kraus, J. Preskill, P. Zoller, and B. Vermersch, Mixed-State Entanglement from Local Randomized Measurements, https://link.aps.org/doi/10.1103/PhysRevLett.125.200501 Phys. Rev. Lett. 125, 200501 (2020). elben2023 A. Elben, S. T. Flammia, H. Huang, R. Kueng, J. Preskill, B. Vermersch, and P. Zoller, The randomized measurement toolbox, https://www.nature.com/articles/s42254-022-00535-2Nat. Rev. Phys. 5, 9 (2023). satzinger2021 K. J. Satzinger, Y-J. Liu, A. Smith, C. Knapp, M. Newman, C. Jones, Z. Chen et al., Realizing topologically ordered states on a quantum processor, https://www.science.org/doi/full/10.1126/science.abi8378Science 374, 6572, 1237 (2021). Yu2021 Yu Xiao-Dong, S. Imai, and O. Gühne, Optimal entanglement certification from moments of the partial transpose, https://doi.org/10.1103/PhysRevLett.127.060504Phys. Rev. Lett. 127, 060504 (2021). peschel2003 I. Peschel, Calculation of reduced density matrices from correlation functions, https://doi.org/10.1088/0305-4470/36/14/101J. Phys. A: Math. Gen. 36, L205 (2003). peschel2009 I. Peschel and V. Eisler, Reduced density matrices and entanglement entropy in free lattice models, https://iopscience.iop.org/article/10.1088/1751-8113/42/50/504003 J. Phys. A 42, 504003 (2009). cc-05 P. Calabrese and J. Cardy, Evolution of Entanglement Entropy in One-Dimensional Systems, https://doi.org/10.1088/1742-5468/2005/04/P04010J. Stat. Mech. (2005) P04010. ac-17 V. Alba and P. Calabrese, Entanglement and thermodynamics after a quantum quench in integrable systems, https://doi.org/10.1073/pnas.1703516114PNAS 114, 7947 (2017). ac-18 V. Alba and P. Calabrese, Entanglement dynamics after quantum quenches in generic integrable systems, https://scipost.org/10.21468/SciPostPhys.4.3.017SciPost Phys. 4, 017 (2018). bastianello2018 A. Bastianello and P. Calabrese, Spreading of entanglement and correlations after a quench with intertwined quasiparticles, https://scipost.org/10.21468/SciPostPhys.5.4.033SciPost Phys. 5, 033 (2018). bastianello2020 A. Bastianello and M. Collura, Entanglement spreading and quasiparticle picture beyond the pair structure, https://doi.org/10.21468/SciPostPhys.8.3.045SciPost Phys. 8, 045 (2020). caceffo2023 F. Caceffo and V. Alba, Negative tripartite mutual information after quantum quenches in integrable systems, https://link.aps.org/doi/10.1103/PhysRevB.108.134434Phys. Rev. B 108, 134434 (2023). petruccione2002the H. P. Breuer and F. Petruccione, The theory of open quantum systems, (Great Clarendon Street: Oxford University Press) (2002). alba2021spreading V. Alba and F. 
Carollo, Spreading of correlations in Markovian open quantum systems, https://doi.org/10.1103/PhysRevB.103.L020302Phys. Rev. B 103, 020302 (2021). carollo2022dissipative F. Carollo and V. Alba, Emergent dissipative quasi-particle picture in noninteracting Markovian open quantum systems, https://link.aps.org/doi/10.1103/PhysRevB.105.144305 Phys. Rev. B 105, 144305 (2022). alba2022hydrodynamics V. Alba and F. Carollo, Hydrodynamics of quantum entropies in Ising chains with linear dissipation, https://doi.org/10.1088/1751-8121/ac48ecJ. Phys. A: Math. Theor. 55, 074002 (2022). alba2022logarithmic V. Alba and F. Carollo, Logarithmic negativity in out-of-equilibrium open free-fermion chains: An exactly solvable case, https://doi.org/10.21468/SciPostPhys.15.3.124SciPost Phys. 15, 124 (2023). gge M. Rigol, V. Dunjko, V. Yurovsky, and M. Olshanii, Relaxation in a Completely Integrable Many-Body Quantum System: An Ab Initio Study of the Dynamics of the Highly Excited States of 1D Lattice Hard-Core Bosons, https://link.aps.org/doi/10.1103/PhysRevLett.98.050405Phys. Rev. Lett. 98, 050405 (2007). cfe-12 P. Calabrese, F. H. L. Essler, and M. Fagotti, Quantum Quench in the Transverse Field Ising Chain II: Stationary State Properties, https://doi.org/10.1088/1742-5468/2012/07/P07022J. Stat. Mech. (2012) P07022. fe-13 M. Fagotti and F. H.L. Essler, Reduced Density Matrix after a Quantum Quench, https://doi.org/10.1103/PhysRevB.87.245107Phys. Rev. B 87, 245107 (2013). gge2 F. H. L. Essler and M. Fagotti, Quench dynamics and relaxation in isolated integrable quantum spin chains, https://doi.org/10.1088/1742-5468/2016/06/064002J. Stat. Mech. (2016) 064002. vr-16 L. Vidmar and M. Rigol, Generalized Gibbs ensemble in integrable lattice models, https://doi.org/10.1088/1742-5468/2016/06/064007J. Stat. Mech. (2016) 064007. prosen2008 T. Prosen, Third quantization: a general method to solve master equations for quadratic open Fermi systems, https://doi.org/10.1088/1367-2630/10/4/043026New J. Phys. 10, 043026 (2008). Albadephasing2023 V. Alba, Free fermions with dephasing and boundary driving: Bethe Ansatz results, https://doi.org/10.48550/arXiv.2309.12978arXiv:2309.12978. Eislerdephasing2011 V. Eisler, Crossover between ballistic and diffusive transport: the quantum exclusion process, https://iopscience.iop.org/article/10.1088/1742-5468/2011/06/P06007/pdfJ. Stat. Mech. (2011) P06007. ktm-82 J. Kurmann, H. Thomas, and G. Müller, Antiferromagnetic long-range order in the anisotropic quantum spin chain, https://doi.org/10.1016/0378-4371(82)90217-5Physica A 112, 235 (1982). ms-85 G. Müller and R.E. Shrock, Implications of direct-product ground states in the one- di- mensional quantum XYZ and XY spin chains, https://doi.org/10.1103/PhysRevB.32.5845Phys. Rev. B 32, 5845 (1985). turkeshi2023 X.Turkeshi and Marco Schiró, Entanglement and Correlation Spreading in non-Hermitian Spin Chains, https://doi.org/10.1103/PhysRevB.107.L020403Phys. Rev. B 107, L020403 (2023). vw-02 G. Vidal and R. F. Werner, A computable measure of entanglement, https://doi.org/10.1103/PhysRevA.65.032314Phys. Rev. A 65, 032314 (2002). plenio-05 M. B. Plenio, Logarithmic Negativity: A Full Entanglement Monotone That is not Convex, https://doi.org/10.1103/PhysRevLett.95.090503Phys. Rev. Lett. 95, 090503 (2005).
http://arxiv.org/abs/2405.09941v1
20240516094351
Machine-Learning Enhanced Predictors for Accelerated Convergence of Partitioned Fluid-Structure Interaction Simulations
[ "Azzeddine Tiba", "Thibault Dairay", "Florian de Vuyst", "Iraj Mortazavi", "Juan-Pedro Berro Ramirez" ]
cs.CE
[ "cs.CE" ]
TIBA Azzeddine (M2N, CNAM, 2 Rue Conté, 75003 Paris, France), DAIRAY Thibault (Manufacture Française des Pneumatiques Michelin, Place des Carmes-Dechaux, 63000 Clermont-Ferrand, France; Centre Borelli, CNRS, Université Paris Saclay, ENS Paris Saclay, 4 Avenue des Sciences, 91190 Gif-sur-Yvette, France), DE VUYST Florian (Université de Technologie de Compiègne, CNRS, Laboratory of Biomechanics and Bioengineering, Rue du docteur Schweitzer, 60203 Compiègne, France), MORTAZAVI Iraj (M2N, CNAM, 2 Rue Conté, 75003 Paris, France), BERRO RAMIREZ Juan-Pedro (Altair Engineering France, 5 Rue de la Renaissance, 92160 Antony, France). Stable partitioned techniques for simulating unsteady fluid-structure interaction (FSI) are known to be computationally expensive when high added-mass is involved. Multiple coupling strategies have been developed to accelerate these simulations, but often use predictors in the form of simple finite-difference extrapolations. In this work, we propose a non-intrusive data-driven predictor that couples reduced-order models of both the solid and fluid subproblems, providing an initial guess for the nonlinear problem of the next time step calculation. Each reduced order model is composed of a nonlinear encoder-regressor-decoder architecture and is equipped with an adaptive update strategy that adds robustness for extrapolation. In doing so, the proposed methodology leverages physics-based insights from high-fidelity solvers, thus establishing a physics-aware machine learning predictor. Using three strongly coupled FSI examples, this study demonstrates the improved convergence obtained with the new predictor and the overall computational speedup realized compared to classical approaches. * Novel predictor to accelerate convergence of fluid-structure interaction problems. * Coupled solution of solid and fluid reduced models is used as the next initial guess. * Reduced models take the form of encoder-regressor-decoder data-driven models. * Online adaptation of the reduced models is used for more robust extrapolation. * Faster convergence and speedups up to 3.2 versus classical-predictor based coupling. Keywords: Physically relevant reduced order model; Fluid-structure interaction; Partitioned approach; ROM-FOM coupling; Data-driven model; Fixed-point acceleration. § INTRODUCTION Engineering applications involving Fluid-Structure Interaction (FSI) phenomena are numerous, and are present in various fields. This includes aeroelasticity, biomechanics, microfluidics, hydrodynamics and many more. Modeling these FSI problems is a challenging task due to the usually high complexity of the coupling between the solid system and the fluid system. Physics-based numerical simulations are considered as one of the leading options for modeling FSI problems, benefiting from years of advances in computational mechanics and from the increase of accessible computing power. These computations often have the goal of modeling the complicated coupling between the kinematics of a solid body and a fluid flow, along with the mechanical loads associated with it. This results in very challenging problems, with nonlinearities present in both the fluid and the solid systems, in addition to the nonlinearities of the coupling itself.
Furthermore, highly complex dynamics can be present, due for example to the turbulent nature of the flow and/or to the interaction of the dynamical effects of the two systems involved. Different discretization methods exist for handling FSI problems, especially from the fluid side, to deal with moving bodies: we mention for example immersed boundary methods <cit.>, smoothed particle hydrodynamics (SPH) <cit.>, particle finite elements (PFEM) <cit.> and finite elements and finite volumes with the arbitrary Lagrangian-Eulerian formulation (ALE) <cit.>, which is the method used in the present study. The approaches to solve FSI problems numerically can be classified commonly into two main classes: monolithic and partitioned methods. In the former, the governing equations of the fluid and the solid behavior are solved simultaneously, with the coupling conditions (equality of the forces, displacements and velocities at the interface) respected exactly <cit.>. However, computing the coupling terms (e.g. the cross-derivatives in the Jacobian matrices) in the context of global Newton iterations can pose some serious computational problems. In addition to the expensive cost, it is very difficult to implement monolithic schemes when different fluid and solid solvers are considered, especially if different discretization strategies are adopted for the two subproblems. This also becomes a significant disadvantage from a practical standpoint, e.g. in an industrial context where there is a need for a non-intrusive combination of existing fluid and solid solvers. In the partitioned approach, each subproblem is solved separately, making it possible to use specialized solvers for the solid and the fluid, even when using different nonconforming grids on both sides and/or different discretization strategies on each domain (e.g. finite elements on the solid side vs SPH <cit.> or PFEM <cit.> on the fluid side). This constitutes one of the main advantages of this approach, explaining its popularity. The partitioned coupling is achieved thanks to the communication of relevant quantities between the solvers at the interface, namely the displacement, velocity and stress fields, which are then used to enforce the adequate boundary conditions on each problem. This is the main idea behind the Dirichlet-Neumann formulation for example, where the displacements and velocities stemming from the solid computations are enforced at the fluid boundary, and the forces resulting from the fluid stresses are imposed as a load on the solid domain. Unfortunately, an inherent instability appears when dealing with partitioned approaches: due to the time delay that exists between the instants at which the solid and the fluid problems are solved, ensuring stability of the coupling conditions is not straightforward. In problems where the coupling is not very strong, i.e. when the effect of one subproblem on the other is much smaller than the effect in the opposite direction, coupling schemes can be built with one solver call per time step. These are commonly called staggered or loosely coupled schemes. Extensive development of staggered schemes was done in early works <cit.>, where strategies were introduced with improved synchronization of the two subsystems <cit.>, improved order of accuracy <cit.>, and improved stability <cit.>; these generally involve clever choices of the time integrator in each solver, and a careful choice of the force or displacement predictors used at the beginning of each time step.
Although the performance of such strategies was well demonstrated in aeroelastic applications with compressible flows, loosely coupled schemes perform much worse in problems with stronger coupling and incompressible flows. This is due to the added-mass instability, a phenomenon related to the effect of the fluid mass displaced by the solid motion on the solid itself. It was shown in <cit.> that this effect is more significant when the ratio of the added fluid mass to the solid mass increases, and that it also depends on the geometry of the two subdomains. It was also shown that, when the added mass is large, the partitioned schemes - even with full subiterations - can, at worst, fail to converge quickly, and at best, be conditionally stable, irrespective of the time step. In fact, smaller time steps may even cause worse convergence <cit.>. As a consequence, FSI problems involving light structures and a high added mass become computationally expensive, due to the large number of subiterations needed at each time step or the very small time steps needed for segregated schemes. To deal with this, important work was done using various strategies, some of which are intrusive, like semi-implicit coupling with a segregated fluid solver <cit.>, while others are somewhat intrusive, like the Robin-Neumann formulations <cit.>, adding artificial compressibility to the fluid system to ease convergence <cit.>, or enforcing coupling conditions through domain decomposition methods <cit.>. Other strategies can be implemented completely non-intrusively: one such approach is segregated Dirichlet-Neumann schemes with specific force predictors to deal with relatively high added-mass and small time steps <cit.>. Another approach - suitable for even higher added mass ratios - is using iterative schemes with convergence acceleration techniques. This includes either Aitken acceleration <cit.>, or quasi-Newton (QN) acceleration methods for the FSI problem formulated as a fixed point problem <cit.> (see a review in <cit.>). The latter strategy is the one we focus on in this work thanks to its high flexibility for both problems with high added-mass and black-box coupling with existing solvers. We note that in all these partitioned schemes, there is often the notion of a predictor (for forces, displacements or velocities), the choice of which can be crucial for the acceleration of convergence. To reduce the computational cost associated with strongly coupled FSI simulations, some non-intrusive reduced order models (ROMs) have been developed in the past years. These ROMs are generally data-driven models that combine various Machine Learning (ML) methods to predict solutions in the time-parameter space. We mention for example the works in <cit.>, where Proper Orthogonal Decomposition (POD) was combined with machine learning to predict parametric dynamical FSI solutions, or the use of spectral submanifolds for non-linearizable cases <cit.>. In a recent work <cit.>, we constructed data-driven ROMs for the solid subproblem intended to be coupled with high-fidelity (HF) fluid full order models (FOMs), thus forming a strong ROM-FOM coupling capable of finding the FSI problem solutions at a much lower cost than the FOM-FOM coupling while maintaining good accuracy. In the present work, we combine ideas of partitioned coupling acceleration and data-driven ROM-FOMs to construct more effective predictors for a faster convergence of iterative partitioned FSI schemes.
Moreover, we propose a strategy to adaptively update these predictors using HF data generated from the fluid FOM, which remains active during the online computations. In doing so, the proposed methodology leverages physics-based insights from the fluid solver, thus establishing a physics-aware machine learning predictor. This method was inspired by our previous work in <cit.>, where the replacement of the solid FOM with a ROM, although it reduces the computational cost, can contribute to slower convergence due to the ROM inaccuracies. Hence, equipping this ROM-FOM coupling with stronger predictors can ensure even greater speedups. Importantly, this enhanced predictor can be used with classical FOM-FOM FSI coupling as well. Other recent works have focused on accelerating the convergence of nonlinear solvers using ML; interesting examples include <cit.>, where ML techniques (PCA dimensionality reduction, neural networks and random forests) are used to obtain suitable parameters for nonlinear numerical solvers (initial guess, pseudo time steps for pseudo-transient continuation and Aitken relaxation parameter). In a similar work <cit.>, although not using data-driven models, surrogate models (using e.g. simplified faster models) have been utilized to obtain faster convergence of the Interface Quasi-Newton Inverse Least-Squares (IQN-ILS) method <cit.>. The novelty of the work presented here, however, is the use of data-driven techniques to accelerate convergence of partitioned FSI coupling. Particularly, we use POD dimensionality reduction, followed by two regression models approximating the load-to-displacement (solid solver) and the displacement-to-load (fluid solver) operators, to construct fast ROMs that can be coupled in a few iterations to produce an initial guess for the next time step problem. In addition, the fluid ROM can be updated using the HF data obtained online and thus maintains a satisfying accuracy. The predictor thus takes advantage of the available HF data in a smarter way than a simple extrapolator does. A brief illustrative explanation of the complete methodology is presented in Figure <ref>. In this study, we focus on solid models with neglected inertia, but a natural extension can be used for dynamical models, which we intend to pursue in future work. The remainder of this paper is structured as follows: in Sect. <ref>, the governing equations of FSI and the partitioned formulation are presented. Then, in Sect. <ref>, the proposed data-driven predictor approach is detailed. The evaluation of this approach in terms of convergence acceleration is presented in Sect. <ref>, where three test cases are used to demonstrate its efficiency. Finally, a conclusion is given in Sect. <ref>. § FOM-FOM FLUID-STRUCTURE INTERACTION BLACK-BOX COUPLING In FSI problems, the nonlinear global problem is expressed by both the dynamic and kinematic coupling conditions at the interface Γ_fsi: σ_f ·n = - σ_s ·n at Γ_fsi, v = u̇ at Γ_fsi, where σ_f and σ_s are the Cauchy stresses applied by the fluid and the solid respectively, v is the fluid velocity and u̇ is the solid velocity. In mesh-based methods for the fluid problem, the superposition of the two wet interfaces translates into an additional coupling condition x = u at Γ_fsi, where x is the fluid mesh displacement field and u is the solid displacement. The Dirichlet-Neumann coupling formulation represents the two solvers as distinct operators that exchange their input and output at each iteration.
We represent the fluid solver operator as ℱ: ℱ: ℝ^2N→ℝ^N ; (u_|Γ_fsi, u̇_|Γ_fsi) →f_|Γ_fsi, where u_|Γ_fsi is the interface displacement field, N is the number of interface degrees of freedom (e.g. the number of interface grid points times the number of components in mesh-based methods) and f_|Γ_fsi represents the fluid viscous and pressure forces at Γ_fsi: f_|Γ_fsi = σ_f ·n_f|Γ_fsi. Similarly, the solid operator 𝒮 is defined as: 𝒮: ℝ^N →ℝ^2N ; f_|Γ_fsi→ (u_|Γ_fsi, u̇_|Γ_fsi). In strongly coupled schemes, the coupling conditions are enforced at the convergence of the iterations. This nonlinear problem can be written as the fixed point: (ℱ∘𝒮)(f_|Γ_fsi) = f_|Γ_fsi. In this work, we consider problems where the solid inertia is negligible compared to the fluid one, i.e. the solids studied here have negligible mass and are solved under quasi-static load. These assumptions have been relevant in modeling many FSI problems, like collapsible channels <cit.>. Together with the flow incompressibility, this means that the FSI coupling in these cases is very strong and that the added mass is extremely large, as shown in <cit.>. Standard Gauss-Seidel iterations will not converge in the majority of such cases. Otherwise, QN-accelerated iterative schemes will take many iterations for each time step, making the potential acceleration provided by the new predictor even more advantageous. Moreover, given that only elastic (albeit nonlinear) material laws are used, the solid problem considered in this work is path-independent. We note that the quasi-static load setting means that no velocity field is computed by the solid solver, and the mesh velocity can be computed using a proper time differentiation scheme <cit.> in the ALE mesh motion solver (usually part of the fluid solver). Consequently, the Dirichlet-Neumann formulation (<ref>) becomes: ℱ: ℝ^N→ℝ^N ; u_|Γ_fsi→f_|Γ_fsi, 𝒮: ℝ^N →ℝ^N ; f_|Γ_fsi→u_|Γ_fsi. In the following, for clarity, we will drop the interface subscript from f_|Γ_fsi and u_|Γ_fsi. In order to pass the field of interest at the interface, a mapping is needed to interpolate between the two grids. Energy-conserving mapping methods should be used in the case of non-matching grids between the two systems. In the paper, we represent the mapping operators as ℳ_ℱ→𝒮 (from the fluid grid to the solid grid) and ℳ_𝒮→ℱ (from the solid grid to the fluid grid). In the cases presented here, all the grids are matching on the interface, and the nearest-neighbor method is used. The reader is referred to <cit.> for additional details on mapping algorithms. In the rest of the paper, each field will be represented on the grid defined for the solver from which it is computed, i.e. u on the solid mesh and f on the fluid mesh; otherwise, proper subscripts will indicate if they are defined elsewhere, using the notations f_ℱ, f_𝒮, u_ℱ and u_𝒮. §.§ Coupling scheme To solve (<ref>), an iterative solution can be found by calling the two solvers sequentially at each iteration k: f^k = ℱ(u^k), u^k+1 = 𝒮(f^k). These are sometimes called Picard iterations, and the scheme is referred to as the Gauss-Seidel scheme. Note that, depending on which subproblem is computed first, the unknown may change: if the fluid solver is called first, (<ref>) is replaced with the (generally) equivalent formulation: (𝒮∘ℱ)(u) = u. In this work, we use (<ref>) because: * Force predictors (and (<ref>)) were reported to be more efficient and result in fewer iterations than displacement predictors (and (<ref>)) <cit.>.
* Although out of the scope of this work, when dynamical solutions are computed in the solid problem, enforcing the kinematic coupling conditions at the interface means that the velocity field needs to be passed from the solid solver to the fluid solver, especially in black-box coupling where there is no knowledge about the time integration scheme of the solid solver. In this context, it is easier to accelerate the fluid forces field than the displacement, for which careful considerations are required and/or an additional cost to update the velocity field is incurred. * The accuracy of the regression models of our data-driven ROMs will be higher when these models are trained on updated/relaxed force fields compared to when they are trained on non-relaxed fields. This is due to the low variance of the data points when they correspond to relaxed fields. This point will be highlighted with a numerical experiment in the results section <ref>. Equation (<ref>) can also be rewritten as a block fixed-point system ([ 𝒮∘ℱ 0; 0 ℱ∘𝒮 ]) ([ u; f ]) = ([ u; f ]). Alternatively, a parallel formulation of the fixed point can be sought if we write it as a Jacobi system: ([ 0 𝒮; ℱ 0 ]) ([ u; f ]) = ([ u; f ]), which allows the simultaneous solution of the two solvers in parallel. While the ROM strategy we present in this work can be equivalently used in all these different formulations, we will focus here on the Gauss-Seidel system (<ref>). The FSI fixed-point iterations to find the solution of (<ref>) are called henceforth the global iterations. §.§ Convergence acceleration The simple use of Picard iterations, where the force field f̂^k computed by the fluid solver is passed directly as an input to the solid solver (f^k = f̂^k), shows poor convergence or may diverge in strongly coupled problems due to the added-mass instabilities <cit.>. To remedy this, convergence acceleration is done at the end of each iteration. A simple method is to use Aitken acceleration <cit.>, where at the end of iteration k, the force passed to the solid solver is modified as: f^k = w^k f̂^k + (1-w^k) f^k-1, where w^k = - w^k-1 (r^k-1)^T(r^k-r^k-1)/||r^k-r^k-1||_2^2 and r^k is the residual at iteration k: r^k = f̂^k - f^k-1. Another approach is to formulate the problem (<ref>) as a nonlinear problem to be solved using Quasi-Newton (QN) iterations: ℛ(x) = 0, where ℛ(·) represents ℛ(f) = (ℱ∘𝒮)(f) - f in the case of (<ref>) and ℛ(u) = (𝒮∘ℱ)(u) - u in the case of (<ref>). Then, at iteration k, instead of passing the solver output x̂^k directly, a QN relaxation is computed as x^k = x^k-1 - J^-1r^k, where r^k = x̂^k - x^k-1. In this class of acceleration methods, the Jacobian (or inverse Jacobian) needed for the QN algorithm is approximated non-intrusively using snapshots of the past iterations. This generally gives a faster convergence rate than Aitken relaxation <cit.>. Several acceleration methods have been developed, each differing in how the Jacobian is approximated, which unknown is considered (for example, x = u, x = f or x = ([ u; f ]) in the case of the block formulation (<ref>)) or how the past iteration information is used. In the test cases we show, we used the Interface Quasi-Newton Inverse Least-Squares (IQN-ILS) method, introduced first in <cit.>. The original IQN-ILS algorithm <ref> is recalled in the appendix. §.§ Predictors Another component of iterative FSI schemes is the predictor used to "kick-start" the next time step. Because of the time delay between the two solvers, there is no available "updated" solution for the solid solver to use in the first iteration.
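Before discussing predictors further, the force-relaxed Gauss-Seidel loop with Aitken acceleration described above can be summarized by the following sketch. Here fluid and solid stand for black-box calls to ℱ and 𝒮 and are illustrative names rather than the interface of any particular solver, and f0 is the initial guess f^{0,n} whose construction is discussed next.

```python
import numpy as np

def aitken_coupled_step(fluid, solid, f0, omega0=0.1, tol=1e-6, max_iter=50):
    """One time step of Gauss-Seidel FSI iterations with Aitken relaxation on the
    interface force (a minimal sketch of the equations above, not an exact implementation)."""
    f = f0.copy()                # relaxed force of the previous iteration, f^{k-1}
    r_prev, omega = None, omega0
    for k in range(max_iter):
        u = solid(f)             # u^k = S(f^{k-1})
        f_hat = fluid(u)         # raw fluid output, \hat{f}^k = F(u^k)
        r = f_hat - f            # residual r^k = \hat{f}^k - f^{k-1}
        if np.linalg.norm(r) < tol * np.linalg.norm(f_hat):
            return f, k
        if r_prev is not None:   # Aitken update of the relaxation factor w^k
            dr = r - r_prev
            omega = -omega * (r_prev @ dr) / (dr @ dr)
        f = omega * f_hat + (1.0 - omega) * f   # relaxed force f^k
        r_prev = r
    return f, max_iter
```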
Usually, the last converged solution of the previous time step is used as a first guess, f^0, n = f^n-1, where the first superscript refers to the iteration number and the second to the time step index for the current time step; a single superscript for the previous time steps indicates the converged solution. This notation will be used in the rest of the paper. A linear extrapolation from the previous time steps, such as f^0, n = 2f^n-1 - f^n-2, can also be used, or, alternatively, a quadratic variation: f^0, n = 3f^n-1 - 3f^n-2 + f^n-3, but whether this gives better convergence highly depends on each FSI problem, and it can result in bad performance if the time step is not sufficiently small. In this work, we propose an alternative approach to construct this predictor, as explained in the next section. § NON-INTRUSIVE ROM-FOM FLUID-STRUCTURE INTERACTION ACCELERATION In this work, the strategy of data-driven acceleration of FSI problems consists in a combination of two approaches: 1. A data-driven ROM of the solid subproblem similar to the one proposed in <cit.>. 2. An adaptive data-driven force predictor that acts as an efficient generator of a good initial guess of the fixed-point FSI problem at each time step, enabling an easier convergence and thus faster overall computations. Note that these two components, although sharing the same methods internally, are independent and can be used separately from one another. In particular, the force predictor can be used in more general cases, and can achieve good computational savings with practically no loss of accuracy, since the final solution depends on the state at convergence, and rarely on the initial guess. On the other hand, a ROM for the solid part can achieve a speedup of orders of magnitude on the solid subproblem, with a small loss of accuracy, and this can be particularly useful if the computation time is predominantly due to the solid solver. In addition, using the two components enables the accumulation of the speedups as well as preventing the slowdown of the convergence rate due to the addition of the solid ROM. §.§ Solid reduced-order model A data-driven reduced order model for the solid behavior is constructed following the approach presented in <cit.>. The ROM, denoted 𝒮̂, approximates the force-to-displacement function of the solid problem: 𝒮̂: ℝ^N →ℝ^N_S ; f_|Γ_fsi→d, where d is the approximated displacement field of the solid domain and N_S is the dimension of the discretized solid domain. The approximation of the solid operator output is then: u = 𝒮(f) ≈ d_|Γ_fsi = 𝒮̂(f)_|Γ_fsi. Hence, the solid ROM can predict the displacement solution in online computations with a much reduced cost and in a non-intrusive manner. The ROM has three main components: the encoder, the regressor and the decoder. §.§.§ Forces encoder ℰ_F(·) At the input of the ROM, an encoder reduces the dimensionality of the forces field at the interface. This is done using the popular POD method <cit.>, thereby looking for a low-dimensional linear subspace onto which the force field is projected: f(X, t, μ) = ∑_i^r_fΦ_fi(X) f_r,i(t, μ), where μ is a parameter of the FSI problem simulated, which, in this work, is associated to the fluid subproblem alone. The rank r_f is the dimension of the POD subspace, ideally very small, r_f ≪ N, and Φ_fi are the POD modes. Written in the discretized form, the force field f∈ℝ^N can be represented as: f = Φ_f f_r, where Φ_f∈ℝ^N × r_f is the matrix of POD modes and f_r ∈ℝ^r_f is the coordinates vector of the force in the reduced POD subspace.
In the offline phase, the POD modes are computed using a Singular Value Decomposition (SVD) of the snapshot matrix of HF results data F∈ℝ ^N × m, where m is the total number of available snapshots: F - F̄ = Φ_fΣ_f V^*_f, where F̄ is the temporal mean of the fluid forces, Σ_f is the singular values diagonal matrix, and V^*_f is the conjugate transpose of the POD time coefficients. In fact, in this work, simulation results are collected from the n_μ different values of μ∈𝒫 and from all the simulated n_t time steps corresponding to t ∈ [0, T], including the non-converged global iterations, with n_k, i iterations for each i ∈{1, …, n_t n_μ}, giving m = ∑_i^ n_t n_μ n_k, i. Accordingly, and since the POD modes are orthogonal, the encoder acts as the dimensionality reducer of the force field as: ℰ_F(f): ℝ^N→ℝ^r_f ; f_r = Φ_f^T (f - F̄). §.§.§ Regressor ℐ_S(·) The regressor approximates the relationship between the two low-dimensional representations of the forces and the displacements: ℐ_S: ℝ^r_f→ℝ^r_u ; f_r →u_r. Different existing methods can accomplish this task. In our experiments, the regression methods that provided the best accuracy are Radial Basis Function (RBF) interpolation <cit.> and second-order polynomial sparse approximation. In the former, the function is modeled as ℐ_S(f_r) = ∑_i^m w_i ϕ(||f_r - f_r,i||) + P(f_r), where ϕ( · ) is a kernel function, P is a first-order polynomial and f_r,i are the RBF centers, chosen as the training points of the reduced forces, resulting eventually in a linear system to be solved for the RBF weights w_i. Alternatively, a polynomial regression of order 2 is used. The force-displacement relationship is thus modeled as a second-order polynomial, here written in a discretized form: ℐ_S(f_r) = W [f_r⊗f_r], where ⊗ is the Kronecker product. The polynomial coefficients are arranged in W∈ℝ^r_u ×r̂_f, where r̂_f = (r_f+1)(r_f+2)/2. The number of polynomial coefficients can be very large if r_f = 𝒪(10), and it is highly unlikely that all the polynomial terms are important for modeling ℐ_S(·). We thus propose using the Lasso regularization <cit.> in order to obtain a sparse model with as few terms as possible; the minimization is written as: W_i = arg min_W_i ||u_r,i - ∑_j=1^r̂_fW_ij [f_r⊗f_r]_j ||_2^2 + λ∑_j=1^r̂_f |W_ij| ∀ i ∈{1, ⋯, r_u}. The parameter λ promotes the sparsity of the solution W and usually requires fine-tuning. In this work, we find the solution of (<ref>) using the LARS algorithm presented in <cit.> and implemented in <cit.>. In order to avoid additional bias from the distribution of the training data points, a standardization of the input should be performed before the inference. As a general rule of thumb, we recommend using (<ref>) when a large amount of data is available (compared to the number of second-order polynomial terms) and (<ref>) otherwise. §.§.§ Displacement decoder 𝒟_S(·) We use quadratic manifolds as nonlinear decoders for the reconstruction of the displacement field from points in the latent space. Quadratic manifolds model the POD reconstruction error using the second-order polynomial terms of the reduced coordinates in the POD subspace, associated with a mapping operator Φ̄_U <cit.>: 𝒟_S(u_r): ℝ^r_u→ℝ^N_S ; d = Ū + Φ_U u_r + ∑_j^r_u (r_u+1)/2Φ̄_U, j (u_r⊗u_r)_j. Obtaining the POD modes Φ_U∈ℝ ^N_S × r_u in the offline phase is similar to (<ref>): U - Ū = Φ_UΣ_U V^*_U, and a least-squares problem is solved for the quadratic operator Φ̄_U: Φ̄_U = arg min_Φ̄∈ℝ^N_S × r_u (r_u+1)/2 1/2 ||(I - Φ_U Φ_U^T) (U - Ū) - Φ̄ [u_r⊗u_r] ||^2_F.
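A compact sketch of the offline building blocks just described (POD encoder via SVD, sparse second-order polynomial regressor, and the quadratic-manifold correction) is given below. It assumes scikit-learn's LassoLars as one possible Lasso/LARS solver and NumPy's SVD; the helper names, hyperparameters and data layout (snapshots stored as columns) are illustrative assumptions, not the reference implementation.

```python
import numpy as np
from sklearn.linear_model import LassoLars
from sklearn.preprocessing import StandardScaler

def pod_modes(S, rank):
    """POD of a snapshot matrix S (N x m): temporal mean and first `rank` modes."""
    S_mean = S.mean(axis=1, keepdims=True)
    Phi, _, _ = np.linalg.svd(S - S_mean, full_matrices=False)
    return S_mean, Phi[:, :rank]

def poly2_features(Z):
    """Symmetric second-order features of the columns of Z (r x m): shape m x (r+1)(r+2)/2."""
    r, m = Z.shape
    cols = [np.ones(m)] + [Z[i] for i in range(r)]
    cols += [Z[i] * Z[j] for i in range(r) for j in range(i, r)]
    return np.stack(cols, axis=1)

def fit_sparse_regressor(F_r, U_r, lam=1e-3):
    """One Lasso model (solved with LARS) per reduced displacement coordinate."""
    X = StandardScaler().fit_transform(poly2_features(F_r))
    return [LassoLars(alpha=lam).fit(X, U_r[i]) for i in range(U_r.shape[0])]

def fit_quadratic_manifold(U, U_mean, Phi_U):
    """Least-squares fit of the quadratic correction of the POD reconstruction error."""
    Uc = U - U_mean
    U_r = Phi_U.T @ Uc                                   # reduced coordinates
    E = Uc - Phi_U @ U_r                                 # (I - Phi Phi^T)(U - U_mean)
    Q = np.stack([U_r[i] * U_r[j] for i in range(U_r.shape[0])
                  for j in range(i, U_r.shape[0])], axis=1)   # m x r(r+1)/2
    Phi_bar, *_ = np.linalg.lstsq(Q, E.T, rcond=None)
    return Phi_bar.T                                     # N_S x r(r+1)/2
```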
This generally enables a greater reconstruction accuracy for a practically negligible added cost (see <cit.> for more details). Regarding the choice of the latent space dimensions r_u and r_f: for r_u, we use the energy criterion, selecting the smallest number of modes that retains ε = 99.99% of the energy, i.e., the smallest r_u∈[1, min(N_S, m)] such that S = ∑_i^r_uσ_i^2/∑_i^min(N_S, m)σ_i^2 ≥ε. As for r_f, we use a cross-validation strategy with a small percentage of the available data as test data, until a plateau of the validation error is attained (see <cit.> for more details about this cross-validation strategy). §.§.§ In a nutshell: The solid ROM can finally be symbolically represented as: 𝒮̂(·) = 𝒟_S∘ℐ_S ∘ℰ_F(·), predicting the displacement field for a given applied force field on the interface: d(f) = 𝒮̂(f) = 𝒟_S(ℐ_S(ℰ_F(f))). Note that, for predicting the displacement at the interface only, which is the only necessary information at each global FSI iteration, a simple row selection of the POD modes matrix is used in the decoding phase, giving a new decoding operator 𝒟_S, Γ: 𝒟_S, Γ(u_r): ℝ^r_u→ℝ^N ; u = d_|Γ_fsi = ū + Φ_u u_r, where Φ_u is the matrix of POD modes obtained after removing from Φ_U the rows corresponding to the degrees of freedom (dofs) outside of the interface Γ_fsi, and ū is the corresponding restriction of Ū. We accordingly define 𝒮̂_Γ: ℝ^N →ℝ^N ; f→u = 𝒟_S, Γ(ℐ_S(ℰ_F(f))). Some important remarks are to be made here: * Since only the displacement at the interface is needed at each iteration, only the local version of the ROM 𝒮̂_Γ is used at each iteration to pass the displacement to the fluid solver. The reduced coordinates u_r of the converged iteration are stored at each time step (in a small U^n_r ∈ℝ^r_u × n_t matrix), so that a reconstruction of the full displacement field at all the time steps can be made in parallel when needed, enabling then a computation of the stress and strain fields for example. * If the solid ROM is constructed to take as input the fluid forces on the fluid grid directly, the ROM prediction can be made without the need of mapping the forces, simplifying further the coupling procedure and bringing a slight additional computational gain. * In the context of mixed formulations of the solid problem, where other degrees of freedom are present (for example, pressure dofs in incompressible solid problems, or rotation dofs in shell elements), additional ROMs should be constructed for these unknowns. These fields are however not needed to pass the interface displacement to ℱ, and the reduced force coordinates can be stored at each converged iteration to compute the full solution when needed (for example at the end of the simulation). An example of this will be demonstrated in test case n°3. §.§ Adaptive data-driven predictors with fluid and solid ROMs In this work, we propose to construct a data-driven predictor based on the use of information from past data, for example past iterations/time steps, or historical simulation results obtained at different parameter values, thus providing a better initial guess than a finite-differences-based extrapolation. In addition to the solid ROM described in <ref>, another surrogate is constructed to approximate the inverse of 𝒮 at the interface, i.e. approximating the u to f relationship. Note that, contrary to the solid ROM presented above, this fluid surrogate should take into account the dynamical effects: ℱ̂(u, t) = f.
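For reference, the interface solid ROM 𝒮̂_Γ assembled above reduces to an encode-regress-decode composition; the minimal sketch below illustrates it, with Phi_f, Phi_u the POD mode matrices, f_mean, u_mean the corresponding means and regressor any fitted map from reduced forces to reduced displacements (all names are illustrative).

```python
class InterfaceSolidROM:
    """Minimal sketch of S_hat_Gamma: u = D_{S,Gamma}(I_S(E_F(f))).
    Also returns the reduced coordinates u_r, which are stored at each
    converged time step for a later full-field reconstruction."""
    def __init__(self, Phi_f, f_mean, regressor, Phi_u, u_mean):
        self.Phi_f, self.f_mean = Phi_f, f_mean
        self.regressor = regressor
        self.Phi_u, self.u_mean = Phi_u, u_mean

    def __call__(self, f):
        f_r = self.Phi_f.T @ (f - self.f_mean)     # encoder E_F
        u_r = self.regressor(f_r)                  # regressor I_S
        u = self.u_mean + self.Phi_u @ u_r         # interface decoder D_{S,Gamma}
        return u, u_r
```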
This fluid surrogate is constructed using a discrete approach, where the input contains not only the current displacement but also the forces at the previous time step: ℱ̂(u^k, n+1, f^n) = f^k, n+1. This is in principle similar to the Dynamic Mode Decomposition with Control (DMDc) <cit.> concept, the only difference here being the consideration of the system nonlinearity through the nonlinear regressor and/or the update strategy adopted. §.§.§ Prediction of a better initial guess: Inner coupling A "reduced FSI coupling" is launched at the start of each time step, where each subsystem ℱ and 𝒮 is replaced by its reduced equivalent ℱ̂ and 𝒮̂, and the solution of ℱ̂(𝒮̂(f^0)) = f^0 is sought at a fraction of the computational time needed for the FOM-FOM problem solver calls. Those iterations will henceforth be called local iterations. In <cit.>, reduced physics-based models were used as surrogates to compute the local iterations, specifically to enhance the inverse Jacobian approximation in IQN-ILS, but the new initial guess did not particularly reduce the overall computational time. We note that using our proposed data-driven models, and constructing them so that they consider inputs and outputs on the same grid (the fluid grid in our case), makes it possible to bypass the need for grid mapping. In addition, we propose an adaptation strategy to update the prediction capability of the fluid surrogate, especially since new HF data from the fluid FOM solver is available during the online computations. We note that the tolerance δ_r required for the convergence of these local iterations should not be very small, since the goal here is merely to obtain an initial guess closer than the previous time step solution, and to avoid a large number of iterations that could slow down computations. In the same direction, we propose that these local iterations use a simple Aitken relaxation (<ref>), since using QN acceleration would add a computational cost to find and store an additional inverse Jacobian approximation, while the tolerance required is already not very small. Furthermore, if convergence is not reached, the predictor should be deactivated, and the previous time step solution should be used instead, since poor convergence could suggest that the dynamical nonlinearity is not captured by the two ROMs involved, and thus their predictions may be too inaccurate and lead to poor global convergence. In Algorithm <ref>, we detail the local iterations procedure of the reduced models. In the next part, the components of ℱ̂ will be detailed. §.§.§ Fluid ROM components: The fluid ROM ℱ̂ is constructed with the same philosophy as its solid counterpart 𝒮̂_Γ, in the sense of finding a relationship in a latent space. This can be done using two approaches. The first uses separate encoders for the displacement and the forces, so that the reduced representation is composed of the two encoded variables: ℱ̂: ℝ^2 N→ℝ^N ; (u, f^n-1) →f = 𝒟_F(ℐ_F([ℰ_S(u), ℰ_F(f^n-1)]^T)). The second approach, denoted ℱ̂_H, consists of constructing a new unknown from the concatenation of the two variables in the high-dimensional space, with a new hybrid encoder ℰ_H(·): ℱ̂_H: ℝ^2 N→ℝ^N ; (u, f^n-1) →f = 𝒟_F(ℐ_F(ℰ_H(u, f^n-1))). The two ROMs' components are summarized in the illustrative Figures <ref> and <ref>. The fluid ROM ℱ̂ differs from 𝒮̂_Γ in that: * The decoder 𝒟_F(·) and encoder ℰ_S(·) are the equivalent of the inverses of their encoder and decoder counterparts ℰ_F(·) and 𝒟_S, Γ(·), respectively.
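The local (inner) iterations producing the predictor can then be sketched as follows; solid_rom and fluid_rom stand for 𝒮̂_Γ and ℱ̂, a fixed relaxation factor is used here for brevity where the actual algorithm uses the Aitken update, and the fallback to the previous converged force implements the deactivation rule described above (names and default values are illustrative).

```python
import numpy as np

def rom_predictor(f_conv, solid_rom, fluid_rom, tol=1e-3, max_iter=20, omega=0.5):
    """Inner ROM-ROM coupling producing the initial guess f^{0,n+1}.
    f_conv is the converged interface force of the previous time step; it is
    both the starting point and the fallback if the local iterations fail."""
    f = f_conv.copy()
    for _ in range(max_iter):
        u, _ = solid_rom(f)                    # reduced solid prediction
        f_hat = fluid_rom(u, f_conv)           # reduced fluid prediction
        r = f_hat - f
        if np.linalg.norm(r) <= tol * np.linalg.norm(f_hat):
            return f_hat                       # converged: use as predictor
        f = omega * f_hat + (1.0 - omega) * f  # fixed relaxation (Aitken in the actual algorithm)
    return f_conv                              # deactivate the predictor
```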
They are then defined as : 𝒟_F(f_r): ℝ^r_f→ℝ^N ; f = F + Φ_f f_r ℰ_S(u): ℝ^N→ℝ^r_u ; u_r = Φ_u^T ( u - U) . This means that in the case of ℱ, no additional training cost is spent for obtaining 𝒟_F(·) and ℰ_S(·), since the modes are already computed during the training of 𝒮_Γ. As for ℰ_H(·), this is also done using the POD but on the new variable (u, f^n-1) ℰ_H(u, f^n-1): ℝ^2N→ℝ^r_h ; h_r = Φ_H^T ([u, f^n-1]^T - [U, F]^T) . * A new regressor ℐ_F is used that takes into account the dynamical effect using the previous timestep force f^n-1 in the augmented regression input ℐ_F: ℝ^r_f+r_u→ℝ^r_f ; [u_r^k ,f_r^n-1]^T →f_r . While different regression forms can be used, we use here linear regression or the RBF regressor (<ref>), which in our view, represent a good trade-off option between fitting efficiency and training efficiency, which is crucial for our method since an online update strategy is adopted (more details will be given in Section <ref>). * Note that in order to obtain accurate evaluations of ℱ and thus reliable predictors, the direct output of the HF fluid solver ℱ should be "seen" during training. We recall that in the available HF data, we distinguish between the forces computed from the fluid solver f^k, and the QN-updated forces f^k, arranged in a new snapshot matrix that we call F. Although, for obvious reasons, the new regressor ℐ_F should necessarily use F (or rather, its reduced coordinates Φ_f^T F). Only a combination of F and F suffices to compute the POD modes Φ_f. In our experiments, we used the concatenation of all the snapshots [F, F], but a more efficient choice could be for example to only include the first iteration results in F since the other iterations' resultant forces –closer to FSI convergence– would be very close to the updated forces, and would only add little information for computing the POD subspace. In our numerical experiments, the two approaches ℱ and ℱ_H yield nearly identical results, we then proceed with detailing only the first approach ℱ since its associated offline step is computationally more efficient (dropping the need of learning ℰ_H). Remark: We should note here that, with our choices of the dimensionality reduction methods, the evaluation of 𝒟_S(·) and ℰ_S(·) can be dropped at the local iterations (Algorithm <ref>) during the inference of the two ROMs ℱ and 𝒮, since, in our case, using POD as our encoder-decoder implies that ℰ_S(𝒟_S(u_r)) = u_r. However, the full force field must be recovered at each iteration (of the reduced fixed-point problem) because the relaxation used in line <ref>, together with an initial guess from the fluid solver (line <ref>) means that f^k contains a part the falls outside the POD subspace Φ_f. Thus, the convergence should be verified at the high-dimensional space rather than at the latent space. Furthermore, as stated earlier, and as shown in <cit.>, the force field is not easily compressed using POD, and the ignored modes may be necessary as in the contribution to the force values used when checking the convergence. The offline strategy of the solid and fluid ROMs are outlined in Algorithm <ref>. §.§.§ Fluid ROM online update: At the end of the offline phase, the full snapshot matrices are no longer of use, since their reduced representation is now learned, and thus can be freed from memory. However, we store a chunk of size p from the reduced coordinates of the training input and output data of ℱ: U_r∈ℝ^r_u × p and F_r∈ℝ^r_f × p as: U_r = [ [ ; u_r^1 u_r^2 … u_r^p; ]] and F_r = [ [ ; f_r^1 f_r^2 … f_r^p; ]] . 
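As a minimal illustration of how the regressor ℐ_F can be fitted on stored reduced snapshots such as U_r and F_r (and refitted when they are updated, as described next), the sketch below uses SciPy's RBF interpolator with a cubic kernel; a plain least-squares fit could be substituted. Passing the previous-step reduced forces as a separate matrix, and all function names, are simplifying assumptions of ours rather than the paper's code.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_fluid_regressor(U_r, F_prev_r, F_r, kernel="cubic"):
    """Fit I_F : [u_r, f_r^(n-1)] -> f_r on reduced snapshots (columns = samples).

    U_r: (r_u, p) reduced displacements, F_prev_r: (r_f, p) previous-step
    reduced forces, F_r: (r_f, p) target reduced forces.
    """
    X = np.vstack([U_r, F_prev_r]).T        # (p, r_u + r_f) regression inputs
    Y = F_r.T                               # (p, r_f) regression targets
    return RBFInterpolator(X, Y, kernel=kernel)

def predict_force(model, u_r, f_prev_r):
    """Evaluate the fitted regressor for one reduced state."""
    x = np.concatenate([u_r, f_prev_r])[None, :]
    return model(x)[0]
```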
These reduced representations of data points will be updated in the online computations by replacing the data points carrying the least relevant information with new snapshot couples [u_r^k | f_r^k] available at each iteration: U_r ←[ u_r^2 u_r^3 … u_r^p u_r^k ] and F_r ←[ f_r^2 f_r^3 … f_r^p f_r^k ] . These new (reduced) snapshots represent the high fidelity information stemming from the HF fluid solver operations. They can thus be used to enrich the fluid ROM. Specifically, after a certain advancement along the transient simulation, defined for example by a number Z (chosen by the user) of global FSI iterations, these now updated matrices U_r and F_r can be used to retrain the fluid regressor (<ref>). By doing so, we ensure that the fluid ROM ℱ̂ maintains a high enough fidelity so that the provided initial guess does indeed help the FSI converge faster, especially since the newest information, from the latest time steps and iterations, will be used. Note that the matrices U_r and F_r are of small size since r_u << N and r_f << N. In addition, the "width" of these matrices can be kept at its maximum p, also defined by the user, keeping the memory cost small and constant. As for the CPU cost, retraining the RBF regressor (<ref>) or the linear regression, i.e., solving a linear system of size (p+r_u+r_f) × (p+r_u+r_f) or a p × (1+r_u+r_f) least-squares system, and only after a (large) number of iterations Z, represents only a small fraction of the FSI time step solution. It should be emphasized that this online update strategy is limited to the update of the regressor for simplicity. This approach could be extended by updating the encoder-decoder as well. For example, a recursive method like the one presented in <cit.> could be used to update the POD bases using data obtained on the fly. §.§ Summary To summarize, the proposed acceleration method can be easily implemented in a non-intrusive manner. Using HF data obtained from simulations with varying fluid parameters, two separate ROMs can be trained efficiently and then used in a new simulation with unseen values of the parameters. This added block in the FSI scheme provides an initial force field that will "jump-start" the coupling at each time step, resulting in faster convergence and faster overall computation. During the new unseen simulation, the fluid ROM can be adapted using the online results coming from the FOM fluid solver, constituting a physics-aware predictor that can be used effectively even in extrapolative regions. The global FSI iterations with the new prediction approach are outlined in Algorithm <ref>. We stress once again the non-intrusiveness of this approach, since calling external solvers as black-boxes as done in lines <ref> and <ref> is completely possible. Moreover, additional calculations can be done in line <ref> using the imposed (already computed) displacements to compute, for example, the stress and strain fields. A sketch of the global methodology is shown in Figure <ref> for illustration. § RESULTS In the following, we intend to evaluate the performance of the proposed ROM strategy in terms of both the gain in CPU time (speedup) and the number of fixed-point iterations. We demonstrate this on three transient FSI test cases with very strong coupling and high added mass, in 1D, 2D and 3D, respectively, with low Re (laminar) flows.
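Before detailing the test cases, the reduced fixed-point loop that produces the data-driven initial guess (the local iterations of Algorithm <ref>) can be sketched as follows. Here S_hat and F_hat stand for the surrogates 𝒮_Γ and ℱ̂; the Aitken relaxation and the fallback to the previous time step solution follow the description given above, while the function name and default values are illustrative assumptions.

```python
import numpy as np

def local_iterations(S_hat, F_hat, f0, f_prev, tol=0.02, max_iter=20, omega0=0.5):
    """Reduced fixed-point problem F_hat(S_hat(f), f_prev) = f with Aitken relaxation.

    Returns the force field used as the data-driven initial guess and a
    success flag; on failure the previous time step solution f0 is kept.
    """
    f, omega, res_old = f0.copy(), omega0, None
    for _ in range(max_iter):
        f_tilde = F_hat(S_hat(f), f_prev)       # surrogate evaluations only
        res = f_tilde - f
        if np.linalg.norm(res) <= tol * np.linalg.norm(f_tilde):
            return f_tilde, True
        if res_old is not None:                 # Aitken dynamic relaxation
            diff = res - res_old
            omega = -omega * np.dot(res_old, diff) / np.dot(diff, diff)
        f, res_old = f + omega * res, res
    return f0, False    # predictor deactivated, fall back to previous solution
```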
In each case, we apply the proposed ROM-based predictors on the problem in 2 different configurations: FOM-FOM and ROM-FOM, with the latter using a solid ROM in the actual solid computations. Since the computational time is almost always dominated by the solvers' internal iterations, and the addition of the predictors only add a fraction of that time, the comparisons will be mainly done using the number of coupling subiterations. We assess how the addition of the new data-driven predictors affects the number of needed iterations. For the sake of completeness, we also compare in terms of the total CPU time spent to ensure that speedups are realized, especially in the last 2 (more realistic) cases. We also check that, in the case of replacing the solid solver with a solid ROM, the errors of the computed fields are sufficiently low due to the predictive ability of the solid ROM. The coupling operations are done using the coupling component in the multiphysics simulation code <cit.>. In what follows, we will denote 𝒮_Γ and ℱ as SROM and FROM respectively. The standard predictors in (<ref>), (<ref>) and (<ref>) will be denoted by Constant Extrapolator, Linear Extrapolator and Quadratic Extrapolator respectively, while the new proposed predictor will be called Data-Driven Predictor. §.§ Example 1: 1D flexible tube model The model of flexible tube and related HF partitioned solvers proposed by <cit.> are used here. The flow is assumed to be incompressible with constant density ρ. Both fluid mass and momentum conservation equations (neglecting viscosity) read ∂_t a + ∂_x(a v) = 0, ∂_t(av)+ ∂_x(a v^2) + aρ ∂_x p = 0, t>0, x∈ [0,L] where v is the velocity, a is the time-dependent tube cross section and p is the pressure. From the fluid side, the unknowns are both velocity and pressure. For the thin flexible tube with a thickness h_s, a quasi-static model a = a(p) is used (retaining only the vessel stress in the circumferential direction). The following nonlinear elastic stress-strain law σ_φφ ( ϵ_φφ) is used <cit.>: σ_φφ = E ϵ_φφ if |ϵ_φφ|< ϵ_0 σ_φφ = E/5 ϵ_φφ + 20 if ϵ_φφ≥ϵ_0 σ_φφ = E/5 ϵ_φφ - 20 if ϵ_φφ≤ -ϵ_0 with ϵ_0=2 10^-3 and E=12500 Pa. Figure <ref> shows a schematic description of this problem. A non-reflective boundary condition is used on the x=L boundary as du/dt = 1/cdp/t with c the fluid wave speed c=√(a/da/dp). The prescribed inlet (x=0) velocity v_0 is computed using the solution of a nonlinear Duffing equation in order to evaluate the ROM performances in problems with complex dynamics, and the capacity to benefit from the HF fluid solver output to handle the dynamics: ü(t) = a u(t) + b u(t)^2 + c u(t)^3 + d + p cos(f t) + e u̇(t) ∀ t ∈ [0, 120] u(0) = 10 ; u̇ (0) = 0. v_0(t) = (g u(t) + h) r(t) r(t) = 1 ∀ t ∈ [0, 20] r(t) = 0.9+0.1 sin(t π / 40) ∀ t ∈ [20, 60] , #Negative ramp function r(t) = 0.8 ∀ t ∈ [60, 120] . We fix (a, b ,c ,d ,e ,g , p) = (-1,0 ,-0.002 ,-1 ,-0.02 ,1/60,360) and we parameterize this signal with the parameter vector μ =(f ,h )^T allowing the generation of different frequencies and amplitudes. For this study, the two values μ_1 = [2, 6]^T and μ_2 = [0.9, 4]^T have been selected, leading to the signals shown in Figure <ref>. The fluid flow equations (<ref>) are solved using a second order finite volume scheme with 100 cells and the solid section a(p) is computed at each iteration as the solution of the scalar minimization problem p√(a/π) = σ_φφ(√(a/π) - r_0/r_0) h_s . 
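The scalar problem above, which returns the tube section a for a given pressure, can be solved per cell with a standard root finder using the piecewise stress-strain law of the text. The helper names and the bracketing interval on the radius are heuristic assumptions; the nominal radius r_0 and thickness h_s are inputs of the model.

```python
import numpy as np
from scipy.optimize import brentq

E, eps0 = 12500.0, 2e-3

def sigma_phiphi(eps):
    """Piecewise nonlinear circumferential stress-strain law (Pa)."""
    if abs(eps) < eps0:
        return E * eps
    return E / 5.0 * eps + np.sign(eps) * 20.0

def section_from_pressure(pressure, r0, hs):
    """Solve p*sqrt(a/pi) = sigma_phiphi((sqrt(a/pi) - r0)/r0)*hs for the section a."""
    def residual(r):
        return pressure * r - sigma_phiphi((r - r0) / r0) * hs
    r = brentq(residual, 1e-6 * r0, 10.0 * r0)   # heuristic bracket on the radius
    return np.pi * r ** 2
```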
The solid subdomain in this case is the interface itself, and the nodes from the solid and fluid sides match each other. The nondimensional time step τ = u_0 Δ t/L and nondimensional stiffness κ = √(E h_s/2ρ r_0 v_0(0)^2) are chosen as τ = 0.05 and κ=21. Note that the authors in <cit.> showed that the coupling with values of τ and κ of this order is strong and standard Gauss-Seidel iterations quickly fail to converge. FSI subiterations tolerance used here is δ = 10^-4 and the reuse number of IQN-ILS used is q=2. In order to train the ROM model, a FOM-FOM computation is done on a single inlet velocity case corresponding to μ_1 until T = 35 s. The offline computations of the ROM models are then performed on the available results, giving data snapshots of size m = 1759 (see (<ref>)). The latent dimension of the displacement POD subspace is r_u=4 and r_f=10 for the pressure field. Regarding the regressor, a thin plate spline kernel RBF <cit.> is used with ϕ( x ) = x^2 log(x) for both ℐ_S(·) and ℐ_F(·). For the local iterations the convergence criterion is δ_r = 0.02, the iteration-frequency of the model update is Z = 200 and p = 1640 is chosen as the batch size. The performance of the ROMs is evaluated on the future time prediction of the simulation with μ = μ_1, i.e for t ∈ [35 s , 120 s]. In addition, we also test the ROMs and predictors on the unseen parameter value μ_2. First, we check the accuracy of the solid ROM prediction in the ROM-FOM coupling: we show in Figure <ref> the outlet section evolution in time comparing the FOM-FOM and the ROM-FOM solutions. We also demonstrate the solid nonlinearity well predicted by the ROM in Figure <ref>, which is expected since the region of deformations reached during the prediction was well present in the training data. We show in figure <ref> the total number of iterations performed using the different predictors for μ_1, and figure <ref> for μ_2. The results clearly show that the new data-driven predictor provides the best efficiency in terms of number of iterations. We can see that using the ROM-FOM with constant predictors (right figures) leads to additional iterations due to the inaccuracy of the solid ROM. This effect is no longer observed when using the data-driven predictor. Moreover, while the use of linear and quadratic predictions result in fewer iterations than the constant predictor case, they are outperformed by the use of data-driven-based initial guesses. In Figure <ref> and for two time steps, we show how the data-driven initial prompts a faster rate of convergence, as we can see that the data-driven predictor results in a first iteration with a much lower residual than with a quadratic extrapolation. §.§ Example 2: Hyperelastic flaps in a channel behind a cylinder wake In this section, we consider the problem first introduced in <cit.>, and illustrated in figure <ref>, where an incompressible flow in a 2D channel faces a massless elastic body with two mounted flaps behind a rigid cylinder. For the fluid, the Navier-Stokes equations read ρ_f ∂v∂ t_|𝒜 + ρ_f [(v - w).∇]v + ∇ p - 2 div(μ_f D(v)) = 0 in Ω_f(t) ∇·v= 0 in Ω_f(t) with p the fluid pressure, ρ_f the fluid density, μ_f the fluid dynamic viscosity and D(v) is the fluid strain rate tensor. The fluid equations are described on a moving domain (using the ALE moving frame) Ω_f(t). The notation 𝒜 represents the ALE mapping from the reference domain (the t=0 configuration) to the computational domain and w is the ALE velocity. 
In this case, we have ρ_f = 1000 kg/m^3, μ_f = 0.001 m^2/s and a fully developed Poiseuille inlet flow is applied, with a maximum velocity of v_max = 2.5 m/s starting from v = 0 m/s at t = 0 s and increased by a sinusoidal ramp until reaching v_max at t = 1 s. This corresponds to a Reynolds number of Re = 250, based on v_max and the cylinder diameter. A no-slip condition is imposed at the top and bottom walls, and a zero pressure on the right boundary. For the solid subproblem, the equilibrium and constitutive equations for a static hyperelastic solid are: ∇_X·P = 0 in Ω_s , P = ∂ W/∂F_s , u = 0 on Γ_D, s . The equations are written in the Lagrangian frame with ∇_X the gradient operator in the original configuration, P the first Piola-Kirchhoff stress tensor (PK1), and Γ_D, s the Dirichlet boundary. The material model is described by the stored energy density function W, here using the hyperelastic Neo-Hookean model: W(F_s) = λ_s/2 (ln(J))^2 - μ_s ln(J) + μ_s/2(trace(C_s) - 3) where J = det(F_s) is the determinant of the deformation gradient tensor F_s, C_s = F_s^T F_s is the right Cauchy-Green deformation tensor, μ_s = E/(2 (1 + ν_s)) and λ_s = E ν_s/((1 + ν_s)(1 - 2 ν_s)). In this example, E = 10 × 10^6 Pa and ν_s = 0.3. The coupling conditions (<ref>) and (<ref>) are imposed on the FSI interface. The fluid problem is discretized using 5440 variational multiscale (VMS) finite elements <cit.> and 1640 quadrilateral plane strain finite elements are used for the structural problem, with 8 (X and Y) displacement degrees of freedom at element nodes, making a total of N_S = 3610 solid dofs. <cit.> was used as the finite elements code for both problems, using two of its modules as separate solvers in a partitioned coupling. The fluid time step is Δ t = 8×10^-3 s and the second-order "Bossak" time integration scheme is used <cit.>. The interface grid has matching nodes from the solid and fluid sides and consists of 265 nodes at the interface, giving N = 530. The relative convergence tolerance used is δ = 0.005 and IQN-ILS with reuse q = 3 is used as the QN acceleration scheme. Figure <ref> shows the ROM-FOM solution at t=7.224 s. We define the parameter as the Reynolds number μ = Re and we use the results from simulations at three points: μ∈{180, 205, 250}, solved for t ∈ [0, 3.6 s], generating a total of m = 7410 snapshots. The evaluation of the new ROM predictor will then be done on an unseen parameter μ = 192 and on a larger simulation time t ∈ [0, 8 s]. For the dimensionality reduction, r_u=9 displacement modes and r_f=45 force modes are used. For the SROM regressor ℐ_S(·), a 2^nd order polynomial regression is used with an L_1 regularization term as described in (<ref>). The FROM regressor ℐ_F(·) is chosen here as an RBF function with a cubic kernel ϕ(x) = x^3. For the convergence of the local iterations, the tolerance is δ_r = 2%, the FROM is retrained every Z=200 iterations, and the reduced data batch size p is 6900. The solid ROM gives an accurate model prediction when coupled with the fluid FOM, as shown in Figure <ref> where the left tip x-displacement is plotted and compared using the ROM-FOM and FOM-FOM coupling. The displacement values used in the offline training at different Re numbers are also shown in the same figure. The accuracy of the displacement field is reported as the relative error e(t) = ||d(t) - d(t)_FOM||_2 / ⟨||d(t)_FOM||_2⟩ with ⟨·⟩ representing the time-average. We can see in the figure that it remains under 7% even for such a long simulation time.
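For reference, the Neo-Hookean law and the Lamé relations quoted above can be evaluated as in the sketch below, which is also what is needed when reconstructing stresses from the ROM displacements a posteriori (as discussed next). The closed-form expression of P = ∂W/∂F_s is the standard one for this energy; the function names are ours.

```python
import numpy as np

def lame_parameters(E, nu):
    """Lame parameters from Young's modulus and Poisson's ratio."""
    mu = E / (2.0 * (1.0 + nu))
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    return mu, lam

def neo_hookean_energy_and_PK1(F, E=10e6, nu=0.3):
    """Stored energy W(F) and first Piola-Kirchhoff stress P = dW/dF
    for the compressible Neo-Hookean model quoted above."""
    mu, lam = lame_parameters(E, nu)
    J = np.linalg.det(F)
    C = F.T @ F
    W = 0.5 * lam * np.log(J)**2 - mu * np.log(J) + 0.5 * mu * (np.trace(C) - 3.0)
    Finv_T = np.linalg.inv(F).T
    P = mu * (F - Finv_T) + lam * np.log(J) * Finv_T   # analytic dW/dF
    return W, P
```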
In figure <ref>, we show the strain prediction through the Green-Lagrange strain tensor E = 1/2 (C_s - I) and the overall nonlinear material behavior using the response of one element as an example, showing the significant accuracy of the SROM prediction. This reaffirms that the SROM is indeed able to capture the nonlinearity. The accumulated number of fixed-point iterations over the simulation time is reported in Figure <ref> for the different predictors. Once again, the novel data-driven predictor produces the least number of FSI iterations. More details about the average number of iterations are given in Table <ref>. In addition, in the second column of Table <ref>, we also demonstrate the efficiency in terms of the CPU time spent when using the different predictors. As the evaluation of the FROM and the local SROM are inexpensive at the start of each time step, and keeping in mind the negligible cost of the FROM update, the data-driven predictor does indeed result in a smaller overall CPU time. Using the solid ROM to replace the FOM allows even greater speedup, especially since, in this problem, the solid FOM solver takes nearly twice as CPU time as the fluid FOM solver. We note that, taking advantage of the fluid FOM, the presented approach performs very well, in terms of the SROM accuracy, and the faster fixed-point convergence in spite of the complex dynamics of this FSI problem. The on-the fly update of the FROM also prevents inaccurate predictions when these unseen complex dynamics take place. In order to highlight the importance of using the accelerated forces f and not the fluid output f as the SROM input (See the section <ref> above), we show in Figure <ref> the phase space composed of the first 2 components of their latent representations f_r and f_r. The values shown correspond to the forces obtained during the training simulations. We can see that the variance is much bigger with f_r and this poses difficulties on the training of SROM. Bigger data variance also means that a bigger range of the solid nonlinearity is included in the data, since bigger force amplitudes are reached with f_r than with f_r. As a last indication, the effect of the FROM update is first assessed by looking at the difference of total number of fixed-point iterations achieved with and without the update. For the unseen parameters Re=192, the benefits of the update is clearly shown in Figure <ref> left where the number of fixed-point iterations is always diminished. More importantly, when considering a parameter Re = 139 outside of the Reynolds number training interval [180, 250]. Figure <ref> center shows that hundreds less convergence iterations are needed overall, i.e for all the time steps. Second, we wish to demonstrate the evolution of the model error along the update iterations. Offline data are stored from the last 1000 iterations' results of the FOM-FOM simulation on the unseen parameter Re=139. The FROM is then evaluated on this test data and incrementally updated on each Z-sized batches of offline results from the beginning of the simulations onward. The error is computed after each FROM update as the mean relative maximum error and reported on Figure <ref> right. We can clearly see the decrease of the FROM error along the update increments, highlighting the ability of the model to leverage its updates for extrapolation. §.§ Example 3 : 3D hyperelastic incompressible flaps: To show the performance of our methodology in larger scale problems, we consider a 3D extension of the previous problem. 
The figure <ref> gives a brief description. The boundary faces corresponding to the top, bottom, back and front walls of the geometry in the left of Figure <ref> will be called henceforward y+, y-, z- and z+ respectively. To simplify the configuration, no cylinder is present in this problem but a fully developed pulsatile inlet flow is applied with a signal composed of two frequencies (f_1, f_2) as illustrated in figure <ref> and expressed as: v_|x=0(y, t) = 1/2(1-cos(π t)) g(y) ∀ t ∈ [0, 1] v_|x=0(y, t) = 1/16(16+(1-cos(2π(t-1)))) g(y) ∀ t ∈ [1, 1.5] v_|x=0(y, t) = 1/16(16+(cos(f_1π(t-1.5)))+(cos(f_2π(t-1.5)))) ∀ t ∈ [1.5, 6.6] g(y) = 11.8 y (0.492-y) with μ_1 = (f_1, f_2) = (4 Hz, 5 Hz) the parameter values used for training and μ_2 = (f_1, f_2) = 2 Hz, 3 Hz) for testing. Slip conditions are used on the z- and z+ faces, no-slip conditions are imposed on the y- and y+ faces and a zero pressure is imposed on the outlet. The discretization uses 437039 elements with 84988 nodes. This flow setting corresponds to a Reynolds number Re = 225 based on the maximum inlet velocity and the length of the solid flap. For the solid material, a nearly-incompressible Neo-Hookean material is used with ν_s = 0.485, and u-θ mixed tetrahedral elements are used where θ represents the Jacobian determinant θ = J ≈ 1. The mixed formulation is stabilized based on the VMS approach (See <cit.> for more details on the element used here). As a consequence, two solid ROMs are constructed: an SROM in a similar manner to the previous cases, and a second θ-ROM for the θ field. In fact, in order to compute the strain and stress a posteriori, an accurately computed θ field must be available, and in our ROM-FOM approach this is done only after convergence of each time step, since only the displacement at the interface is needed at the other fixed-point iterations. The solid mesh consists of 65400 tetrahedral elements with 15338 nodes. In order to ensure a valid discretization, a mesh independence study was done on the fluid and the solid domains separately. Details on this study are reported on the Appendix <ref>. The second-order Bossak time integration is used by the fluid solver with a time step of Δ t = 0.01 s. Similarly to the previous test case, matching interface grids are used, consisting of 7575 nodes at the interface, giving N = 22725, and IQN-ILS with reuse q = 3 is used with a fixed-point tolerance of δ = 5×10^-4. In Figure <ref>, we show the solution of the problem at t = 3.37 s. The solid regressor ℐ_S(·) used here is a thin plate spline RBF interpolator, and ℐ_F(·) is a ridge regression with a regularization parameter λ = 1×10^-5. For dimensionality reduction, r_u=12 displacement modes and r_f=280 forces modes are kept. For the data-driven predictor, the model update is computed after each Z=240 fixed-point iterations and the batch of the reduced snapshots is of size p = 6900. The tolerance used here is δ_r = 1×10^-4. The FOM-FOM simulation using μ_1 and for all the time steps until t = 6.6 s generates m=2962 snapshots for the training of SROM and FROM. The SROM shows a significant accuracy on the displacement prediction, as seen in Figure <ref>, where the displacement evolution of the left tip and right tip of the z+ face are shown while comparing the FOM-FOM and ROM-FOM solutions. The prediction of the stress field, namely the PK2 stress is also obtained with high accuracy as seen in Figures <ref> and <ref> where the SROM and θROM accurate output clearly lead to accurate stress predictions as well. 
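For completeness, the two-frequency pulsatile inlet of this test case (equations above) can be evaluated as in the sketch below. We apply the parabolic profile g(y) to all three time intervals, which makes the signal continuous at t = 1.5 s; the text's third branch does not show g(y) explicitly, so this is an assumption on our part, as are the function and variable names.

```python
import numpy as np

def inlet_velocity(y, t, f1, f2):
    """Pulsatile inlet profile v(y, t) of test case 3; f1, f2 in Hz as in mu."""
    g = 11.8 * y * (0.492 - y)                       # parabolic profile in y
    if t <= 1.0:
        ramp = 0.5 * (1.0 - np.cos(np.pi * t))
    elif t <= 1.5:
        ramp = (16.0 + (1.0 - np.cos(2.0 * np.pi * (t - 1.0)))) / 16.0
    else:
        ramp = (16.0 + np.cos(f1 * np.pi * (t - 1.5))
                     + np.cos(f2 * np.pi * (t - 1.5))) / 16.0
    return ramp * g

# training signal mu_1: inlet_velocity(y, t, f1=4.0, f2=5.0)
# testing signal  mu_2: inlet_velocity(y, t, f1=2.0, f2=3.0)
```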
The addition of the FROM and the local iterations for the data-driven initial guess results in much less total fixed-point iterations. This is clearly seen in Figure <ref>, where the data-driven predictor outperforms the classical approaches. The average number of fixed-point iterations and the total CPU time are also reported in Table <ref>. Remarkably, the linear and quadratic extrapolators result in slower convergence than the constant extrapolation, while the data-driven predictor ensures a faster convergence of the fixed-point FSI problem, showing the improved robustness of the data-driven approach for predicting the initial guess. § CONCLUSIONS In this work, a novel data-driven predictor for the acceleration of convergence of unsteady partitioned fluid-structure interaction coupling has been proposed. This predictor provides an enhanced initial guess for the FSI fixed-point problem at each time step. It is obtained by resolving a reduced fixed-point problem that can be solved at the beginning of the time step for a small fraction of the computational time of the regular FSI problem. This is achieved using two reduced order models for the solid and the fluid problems by approximating the force-to-displacement and the displacement-to-force relationships respectively. The two models are then strongly coupled to predict the initial guess when this reduced fixed-point converges. Each reduced order model is constructed from three components: An encoder, a regressor and a decoder. The encoder-decoder uses the POD and quadratic manifolds for the dimensionality reduction and the regressors use either RBF functions or polynomial regression. The data-driven nature of this predictor makes it more robust and efficient than the classical approach, since it uses the information from the results of the high-fidelity solver, instead of using finite-differences from the last few time steps. Moreover, the regression model of the fluid ROM is updated online using the high-fidelity forces from the fluid FOMs, enriching further the ROM from the latest available HF data. Overall, the proposed methodology leverages physics-based insights from the high fidelity fluid solver, thus establishing a physics-aware machine learning predictor. This enables the use of the predictor in extrapolating regions of the time-parameter space. This predictor can also be combined with the solid ROM presented in <cit.> for replacing the solid solver as a whole to predict the solution at an even cheaper computational cost. In the paper, and through three examples with strong FSI coupling and neglected solid inertia, we have demonstrated the performance of this novel predictor in achieving faster convergence of the fixed-point problem compared to classical extrapolators. In particular, we showed the significant computational gain that can be achieved with this predictor, even when applying it for unseen time and parameters, even in an extrapolation setting, and even for fairly complex dynamics of the FSI problem, all while keeping a very high accuracy of the ROM when replacing the solid FOM. We showed that designing an FSI predictor with such a data-driven strategy makes it more robust for easing convergence than the classical extrapolators, since the data-driven ROM benefits from recent HF data more judiciously. 
The ROM update strategy presented in this work could be further enhanced in order to obtain more accurate and better adapted ROMs: for example, instead of a straightforward retraining of the regressor component of the ROM only (as done in this work), an update of the dimensionality reduction part (namely the POD bases) could also be performed using the online HF data. We believe that such a strategy will eventually lead to faster fixed-point convergence, since new force values at prediction time can lie outside the available POD subspace obtained from offline data. This adaptive encoder/decoder approach will be pursued in future research. § ACKNOWLEDGEMENTS This work has been funded by the ANR (Agence Nationale de la Recherche, France), Altair Engineering and Michelin under the project AHEAD. § IQN-ILS ALGORITHM § MESH CONVERGENCE STUDY To conduct the mesh convergence study efficiently, the solid and fluid problems are treated independently, i.e., in a decoupled way, as the goal is merely to check the validity of the chosen mesh sizes even if the actual FSI problem is different, assuming the other conditions are close enough. A static surface load is applied on the left solid face, with a load amplitude comparable to that occurring during the FSI problem. The fluid problem for this study differs from the coupled problem in that only a constant inlet velocity is applied instead of a pulsatile inlet. §.§ Solid mesh: A constant pressure of p = 1800 Pa is applied on the left face and three different meshes with three different (average) mesh sizes on the boundaries are used: h_1 = 20 mm, h_2 = 11.75 mm and h_3 = 6.25 mm. The three meshes are illustrated in Figure <ref>. Note that the mesh size changes locally as it is reduced near the corners. The most interesting quantity in the problem - the displacement - is reported at the right tip and left tip of the z+ face for the three grids and plotted in Figure <ref>, from which we see that the h_2 displacement falls within 1% of the reference value, taken to be the one associated with the fine h_3 mesh. From there, we conclude that the h_2 grid is sufficient for our FSI problem. §.§ Fluid mesh: Similar to the FSI problem, a Poiseuille inlet flow velocity is applied until t=6.6 s for this study, the difference here being that the inlet flow is constant in time, whereas a pulsatile flow is applied in the FSI example. The value of the inlet velocity corresponds to the maximum attained at the FSI problem inlet. Three grids with varying average mesh sizes on the boundaries are used: h_1=40 mm, h_2=18 mm and h_3 = 12 mm. Note again that, locally, the mesh size changes as it is refined near the FSI interface and in the region between the flaps. The three grids are shown in Figure <ref>. The axial velocity profile 120 mm to the right of the FSI interface is shown in Figure <ref> for the three different grids, using the mean of the time steps corresponding to the last 4.6 s. The streamlines of the velocity field at t=6.6 s are also shown on the mid section of the channel in Figure <ref>. In addition, we used the Grid Convergence Index (GCI) method from the ASME Journal of Fluids Engineering policy for mesh convergence <cit.>. The grid refinement factor between each pair of meshes is indeed greater than 1.3 and, using the maximum velocity of the profiles reported in Figure <ref> as the main quantity, the GCI obtained is 4.9%, which we considered acceptable. From all these results, we concluded that the grid with mesh size h_2 is adequate for our FSI problem.
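The GCI evaluation mentioned above can be reproduced with the standard ASME procedure sketched below (safety factor F_s = 1.25, observed order obtained by fixed-point iteration). This is a generic implementation rather than the code used in the study, and it assumes monotonic convergence of the monitored quantity on the three grids.

```python
import numpy as np

def gci_fine(phi1, phi2, phi3, h1, h2, h3, Fs=1.25, n_iter=50):
    """Grid Convergence Index of the fine grid (h1 < h2 < h3; phi1 on h1, ...)."""
    r21, r32 = h2 / h1, h3 / h2
    eps21, eps32 = phi2 - phi1, phi3 - phi2
    s = np.sign(eps32 / eps21)
    p = 2.0                                      # initial guess for the order
    for _ in range(n_iter):                      # fixed-point iteration on p
        q = np.log((r21**p - s) / (r32**p - s))
        p = abs(np.log(abs(eps32 / eps21)) + q) / np.log(r21)
    e21 = abs((phi1 - phi2) / phi1)              # relative fine-medium difference
    return Fs * e21 / (r21**p - 1.0)
```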
http://arxiv.org/abs/2405.09793v1
20240516034743
Assessing carrier mobility, dopability, and defect tolerance in the chalcogenide perovskite BaZrS$_3$
[ "Zhenkun Yuan", "Diana Dahliah", "Romain Claes", "Andrew Pike", "David P. Fenning", "Gian-Marco Rignanese", "Geoffroy Hautier" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
Direct ab initio calculation of the ^4He nuclear electric dipole polarizability James P. Vary =============================================================================== § ABSTRACT The chalcogenide perovskite BaZrS_3 has attracted much attention as a promising solar absorber for thin-film photovoltaics. Here, we use first-principles calculations to evaluate its carrier transport and defect properties. We find that BaZrS_3 has a phonon-limited electron mobility of 37 cm^2/Vs comparable to that in halide perovskites but lower hole mobility of 11 cm^2/Vs. The defect computations indicate that BaZrS_3 is intrinsically n-type due to shallow sulfur vacancies, but that strong compensation by sulfur vacancies will prevent attempts to make it p-type. We also establish that BaZrS_3 is a defect-tolerant absorber with few low formation energy, deep intrinsic defects. Among the deep defects, sulfur interstitials are the strongest nonradiative recombination centers which in sulfur-rich conditions would limit the carrier lifetime to 10 ns. Our work highlights the material's intrinsic limitations in carrier mobility and suggests suppressing the formation of sulfur interstitials to reach long carrier lifetime. § INTRODUCTION Lead halide perovskites have revolutionized the field of photovoltaics (PV) by opening a promising path to earth-abundant, easily processable, and high-efficiency thin-film technologies.<cit.> The exceptional PV performance of halide perovskites is however shadowed by their poor stability.<cit.> Structural analogy has motivated the search for alternative solar absorbers forming in the perovskite structure but in chemistries that could be more stable.<cit.> The chalcogenide perovskites ABX_3 (A=Ca, Sr, Ba, B=Ti, Zr, and X=S, Se) have emerged in this context with their first suggestion as solar absorbers coming from first-principles studies<cit.> followed by experimental synthesis and characterization especially of BaZrS_3.<cit.> BaZrS_3 shows excellent stability in ambient conditions and exhibits a ∼1.8 eV direct band gap which can be tuned to 1.5 eV by alloying with BaTiS_3 or BaZrSe_3.<cit.> Significant efforts have been dedicated to growing high-quality thin films of BaZrS_3 and its alloys, using a range of techniques such as pulsed laser deposition,<cit.> sputtering,<cit.> molecular beam epitaxy,<cit.> and solution-based synthesis.<cit.> Very recently, a proof-of-concept BaZrS_3 solar cell has been reported, demonstrating an efficiency of 0.11%.<cit.> Interestingly, BaZrS_3 also stands out as a top candidate in a few high-throughput computational screening of thin-film solar absorbers.<cit.> In this letter, we use first-principles calculations to clarify the carrier transport and defect properties in BaZrS_3. We compute the phonon-limited carrier mobility showing that BaZrS_3 has intrinsically low hole mobility. We also perform state-of-the-art hybrid-functional defect calculations. We show that BaZrS_3 is intrinsically n-type, and that p-type doping of BaZrS_3 will be very difficult due to strong compensation by intrinsic donor defects. While BaZrS_3 shows high defect tolerance with few low formation energy, deep defects, the sulfur interstitial (S_i) is identified as the most worrisome nonradiative recombination center. Our results suggest pathways regarding growth condition optimization and device design towards high-performance BaZrS_3 absorbers. 
§ RESULTS §.§ Electronic band structure and carrier transport BaZrS_3 forms in an orthorhombic Pnma perovskite structure (see Figure 1a). Our calculated band structure using the Heyd-Scuseria-Ernzerhof hybrid functional (HSE06)<cit.> shows a direct band gap of 1.81 eV at the Γ point (Figure 1b), in agreement with previous calculations and experiment.<cit.>. Combining computed electronic and phonon properties, we find a phonon-limited carrier mobility of 11 cm^2/Vs for holes and 37 cm^2/Vs for electrons, with the carrier scattering mechanism dominated by polar optical phonons (see the supporting information for details). These values are upper bounds as realistic polycrystalline films will have additional scatterings from grain boundaries, impurities, and others. They are much lower than those calculated for conventional thin-film inorganic solar absorbers (such as CdTe<cit.> and Cu_2ZnSnS_4<cit.>). The calculated hole mobility of BaZrS_3 is also lower than for the halide perovskite CH_3NH_3PbI_3 (11 vs 47 cm^2/Vs), yet these two materials have comparable calculated electron mobility.<cit.> Our results are consistent with experiments on BaZrS_3 thin films which indicate low carrier mobilities (∼2 cm^2/Vs for holes and ∼10–20 cm^2/Vs for electrons).<cit.>The measured low carrier mobility has been often attributed to small grain size or impurity scattering.<cit.> While these could be limiting factors in the experiments, our results highlight that BaZrS_3 has intrinsically low phonon-limited carrier mobility, and that experimentally it is very unlikely to achieve mobilities higher than our computed values. We note that Ye et al. reported a very high sum mobility (>100 cm^2/Vs) in BaZrS_3 films based on time-resolved photoluminescence (TRPL) measurements but the data suffer from reported very large uncertainty.<cit.> The large difference between hole and electron mobilities directly comes from the electronic band structure (see Figure 1b). The lower conduction bands are much more dispersive than the upper valence bands. As a result, the effective mass is found to be small for electrons (0.3 m_0) and relatively large for holes (0.9 m_0). The fundamental difference in hole effective mass and mobility between BaZrS_3 and CH_3NH_3PbI_3 comes from the different electronic character in the valence band. While the halide perovskite mixes anion and cation orbitals leading to delocalized valence band,<cit.> the sulfide shows a more ionic behavior with the valence band being mainly of anion character (see Figure 1b). §.§ Intrinsic point defects and doping We have calculated all the intrinsic point defects in BaZrS_3 including the vacancies (V_Ba, V_Zr, V_S), interstitials (Ba_i, Zr_i, S_i), and antisites (Ba_Zr, Zr_Ba, Ba_S, S_Ba, Zr_S, S_Zr). Our first-principles calculations are all performed using the HSE06 hybrid functional and large 3×3×2 supercell (360 atoms) with proper charge correction and spin-polarization, which is different from previous first-principles calculations.<cit.> We provide in the supporting information the details of our methodology and a comparison to previous calculations. We note that some of the defects (e.g., S_i) involve several configurations that are close in energy. In the following, we report only results for the lowest energy configurations, while those for metastable configurations can be found in Figure S2 of the supporting information. Figures 2 shows the formation energies of the intrinsic defects in BaZrS_3 for both S-poor and S-rich conditions. 
The defect charge-state transition levels are plotted in Figure 3. We find that a series of shallow donor defects can form in BaZrS_3. The V_S is the dominant donor, giving rise to two donor levels (+/0) and (2+/+) that are almost in resonance with the conduction band. Under S-poor conditions (Figure 2a), the V_S has fairly low formation energy and thus exists in high concentration. On the other hand, the acceptor defects, mainly V_Ba and Ba_Zr, have high formation energies. These indicate that S-poor BaZrS_3 will be heavily n-type doped by the V_S donors. Under S-rich conditions (Figure 2b), the formation energy of V_S is increased, while the formation energies of the acceptor defects are reduced. Under those conditions, the equilibrium Fermi level would be pinned close to the intersection of the formation energies of V_S and V_Zr (about 0.5 eV below the conduction band), indicative of a very weak n-type, almost intrinsic BaZrS_3. Our results explain the experimental observation that as-grown BaZrS_3 is intrinsically n-type with the electron concentration as high as 10^19–10^20 cm^-3,<cit.> and we attribute this doping to sulfur vacancies. Figure 2 also indicates that it will be very difficult to achieve p-type BaZrS_3. While V_Ba, V_Zr, and Ba_Zr are shallow acceptors, they are strongly compensated by the V_S donors. Even under the most favorable S-rich conditions, Fermi-level pinning energy for p-type doping<cit.> is 0.6 eV above the VBM, caused by the V_S whose formation energy drops first to zero when the Fermi level is approaching the VBM (Figure 2b). In view of the high p-type pinning limit, any extrinsic shallow acceptors will be strongly compensated, thus preventing p-type doping. In the literature, p-type BaZrS_3 has only been reported once with hole concentration of ∼10^18 cm^-3.<cit.> We note that this was achieved in a sample which is extremely Ba-deficient (Ba/Zr ratio as low as ∼0.6), raising questions about possible secondary phases. Next to doping, from Figures 2 and 3 we identify a few defects with deep transition levels, including V_Zr (3-/4-), Ba_Zr (-/2-), S_Ba (+/3-), S_Zr (0/2-), and S_i (0/2+). Only S_i has sufficiently low formation energy to exist in significant concentration (when in S-rich conditions; see Table S4 for the calculated defect concentrations). Since defect-assisted nonradiative recombination is one of the key processes that limit carrier lifetime and ultimately solar cell performance,<cit.> it is necessary to assess whether the S_i is an efficient nonradiative recombination center. §.§ Nonradiative carrier capture by sulfur interstitials We now compute the nonradiative capture coefficients of the S_i. For nonradiative capture by the S_i, the relevant charge-state transitions are (2+/+) and (+/0), which are located at 0.35 and 1.37 eV above the VBM, respectively, as shown in Figure 4a. There are two capture processes associated with the (+/0) level: C_p^0 for hole (p) capture and C_n^+ for electron (n) capture, where the superscript denotes the initial charge state.<cit.> Similarly, nonradiative recombination via the (2+/+) level involves two capture processes: C_p^+ and C_n^2+. To illustrate the capture processes, Figure 4b shows the local atomic structures of the S_i in the three charge states: 0, +, and 2+. The local structures of S^0_i and S^+_i are similar but differ from that of S^2+_i. In the 0 and +1 charge states, the interstitial S forms distorted tetrahedral bonds with two Ba, one Zr, and one S nearest neighbors. 
With the S^0_i → S^+_i transition (i.e., capturing a hole, C_p^0), another lattice S atom moves towards the interstitial S. This lattice S atom and the interstitial S move even closer with the S^+_i → S^2+_i transition (i.e., capturing another hole, C_p^+). As a result, the S^2+_i forms an S trimer. The configuration coordinate diagrams (CCDs) for the S_i (+/0) and (2+/+) transitions are shown in Figures 4c and 4d. Such diagrams map the potential energy surfaces (PESs) of a defect in two adjacent charge states for a given transition as a function of a generalized configuration coordinate (Q).<cit.> We find that the Q displacement is larger for the (2+/+) transition than for the (+/0) transition, reflecting the structural differences discussed above. The CCDs indicate anharmonic atomic vibrations in the S_i capture processes, which are pronounced for those associated with the (2+/+) transition; see a comparison between the anharmonic and harmonic PESs in Figures 4c, 4d and S3. Anharmonicity in the CCDs was widely found for nonradiative carrier capture in halide perovskites and other low-symmetry semiconductors.<cit.> The anharmonicity reduces the electron capture barrier of S^2+_i but increases the electron capture barrier of S^+_i, compared to capture barriers in the harmonic approximation. Both S^0_i and S^+_i have a negligibly small hole capture barrier. From the capture barriers, the S_i is expected to be an efficient nonradiative recombination center. Figure 4e shows the four calculated capture coefficients versus temperature. The results suggest fast electron capture by S^2+_i with C_n^2+ of 6.6×10^-6 cm^3/s at room temperature and slow electron capture by S^+_i with C_n^+ of 1.95×10^-10 cm^3/s. The hole capture by S^0_i is fast with a room-temperature C_p^0 of 1.97×10^-7 cm^3/s, while it is slow for S^+_i with C_p^+ of 1.05×10^-9 cm^3/s. The latter is due to relatively small vibrational overlap between the PESs for the S^+_i → S^2+_i transition. In low-doped or intrinsic BaZrS_3, by balancing electron and hole capture under steady-state conditions, the total capture coefficient (C_tot) is given by<cit.> C_tot = (C_n^+ + C_p^+) / (1 + C_n^+/C_p^0 + C_p^+/C_n^2+). At room temperature, the C_tot is calculated to be 1.25×10^-9 cm^3/s. This is a moderate value, limited by the relatively slow hole and electron capture by S^+_i. It is slightly smaller than the value (7×10^-9 cm^3/s) for the dominant recombination centers (iodine interstitials, I_i) in CH_3NH_3PbI_3 computed in a similar theoretical framework.<cit.> §.§ Discussion BaZrS_3 shows higher electron mobility than hole mobility, which would suggest using this material as a p-type absorber layer, as it is the diffusion length of minority carriers that mainly controls the solar cell efficiency.<cit.> Our analysis however shows that it is unlikely that p-type BaZrS_3 can be made. Using the intrinsically n-type doped BaZrS_3 as an absorber layer will lead to smaller minority-carrier diffusion lengths limited by the lower hole mobility and also cause issues at the device level as discussed for other n-type absorbers <cit.>.
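The steady-state combination of the four capture coefficients can be checked numerically: the short sketch below implements the C_tot expression above and reproduces the reported room-temperature value from the four coefficients quoted in the text. The function name is ours.

```python
def total_capture_coefficient(Cn_plus, Cp_0, Cp_plus, Cn_2plus):
    """Steady-state total capture coefficient of a three-charge-state center."""
    return (Cn_plus + Cp_plus) / (1.0 + Cn_plus / Cp_0 + Cp_plus / Cn_2plus)

# room-temperature values quoted in the text (cm^3/s)
C_tot = total_capture_coefficient(Cn_plus=1.95e-10, Cp_0=1.97e-7,
                                  Cp_plus=1.05e-9, Cn_2plus=6.6e-6)
print(f"C_tot = {C_tot:.2e} cm^3/s")   # ~1.25e-9, matching the reported value
```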
We thus suggest it could be more viable to devise a p-i-n (or n-i-p) cell using BaZrS_3 as the intrinsic layer (lightly n-type doped), as in halide perovskite solar cells.<cit.> In such a p-i-n device, the carrier diffusion length is controlled by the ambipolar mobility (μ_a) which is estimated to be 17 cm^2/Vs using our calculated intrinsic electron and hole mobilities.<cit.> Preparing intrinsic BaZrS_3 requires to reduce dramatically the concentration of V_S donors which can be achieved using S-rich growth conditions. However, S-rich conditions would enhance the formation of the nonradiative recombination centers, the S_i. It might then be beneficial to reduce electron concentration by introducing an extrinsic shallow acceptor while keeping a low sulfur chemical potential. We estimate now a realistic upper bound of the S_i density in high-temperature synthesized BaZrS_3 samples. Under S-rich conditions and assuming 1000 K growth of BaZrS_3 and rapid quenching to room temperature, the S_i density is estimated to be on the order of 10^17 cm^-3 (see Table S4). This high S_i density leads to a nonradiative recombination coefficient (A) on the order of 10^8 s^-1 at room temperature; here A is defined as A=N_dC_tot, where N_d is the defect density.<cit.> As a result, the nonradiative lifetime (τ=1/A) is on the order of 10 ns for S-rich conditions. Moving to less S-rich conditions will reduce the S_i density and increase the carrier lifetime. Our results are in reasonable quantitative agreement with the carrier lifetime on the order of 50 ns measured by TRPL on BaZrS_3 single-crystal samples.<cit.> In comparison, the I_i in CH_3NH_3PbI_3 has been found to lead to a much longer nonradiative lifetime, on the order of 100 ns,<cit.> based on the fact that the deep-level trap density in solution-processed CH_3NH_3PbI_3 samples is on the order of 10^15 cm^-3.<cit.> Ye et al. estimated the solar cell figure of merit (F_PV) of BaZrS_3 based on experimental data and using F_PV=α * L_D, where α is the optical absorption coefficient and L_D=√(μ_a k_BT/eτ) the carrier diffusion length.<cit.> They found a F_PV value of 2.1 using an absorption coefficient of 4940 cm^-1, nonradiative lifetime of 50 ns, and mobility of 146.2 cm^2/Vs.<cit.> Our computational results do not disagree with the lifetime but raise strong doubts on the mobility value. Since our calculated mobility is an order of magnitude lower, we estimate the F_PV to be 0.33. Our results however indicate that if the S_i concentration is lowered or the interstitials are passivated in some way, higher carrier lifetime could be reached which will boost the figure of merit. In addition to intrinsic defects, we briefly mention that BaZrS_3 is tolerant to oxygen impurities which could be present in high concentration in the samples prepared by sulfurization of BaZrO_3 precursor.<cit.> We find that the oxygen-related point defects, including oxygen interstitial (O_i) and O substitution on the S site (O_S), are electrically inactive, i.e., they are stable in the neutral charge state for almost the entire range of Fermi levels (see Figure S4 of the supporting information), in agreement with previous experimental and theoretical studies.<cit.> § CONCLUSIONS We evaluated carrier transport and defect properties in the chalcogenide perovskite solar absorber BaZrS_3. Our results show that BaZrS_3 has a lower hole mobility than electron mobility (11 vs 37 cm^2/Vs). 
The mobility in this sulfide perovskite is lower and more asymmetric (in terms of hole versus electron mobilities) than in lead halide perovskites. Our defect computations indicate an intrinsic tendency for n-type doping due to the shallow donor V_S and that p-type doping is very unlikely to be achievable. We confirm that overall BaZrS_3 is a defect-tolerant absorber with few deep defects that could act as nonradiative recombination centers. The S_i is identified to be the most problematic deep center. Under S-rich conditions, the carrier capture by the S_i will lead to a carrier lifetime on the order of 10 ns. Our work strongly suggests that suppressing the formation of S_i is critical for BaZrS_3 to be a high-performance absorber. § SUPPORTING INFORMATION AVAILABLE Full description of the computational methods. Supplementary Tables S1–S5, Figures S1–S4, and Refs. S1–S29. § NOTES The authors declare no competing financial interest. § ACKNOWLEDGMENTS This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under award number DE-SC0023509. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under contract no. DE-AC02-05CH11231 using NERSC award BES-ERCAP0023830. A.P. acknowledges support from a Department of Education GAANN fellowship.
http://arxiv.org/abs/2405.09581v1
20240515004158
Self-Supervised Learning of Dynamic Planar Manipulation of Free-End Cables
[ "Jonathan Wang", "Huang Huang", "Vincent Lim", "Harry Zhang", "Jeffrey Ichnowski", "Daniel Seita", "Yunliang Chen", "Ken Goldberg" ]
cs.RO
[ "cs.RO" ]
Feature-based Federated Transfer Learning: Communication Efficiency, Robustness and Privacy Feng Wang, M. Cenk Gursoy and Senem Velipasalar The authors are with the Department of Electrical Engineering and Computer Science, Syracuse University, Syracuse, NY, 13244. E-mail: fwang26@syr.edu, mcgursoy@syr.edu, svelipas@syr.edu. The material in this paper has been presented in part at the IEEE Global Communications Conference (Globecom), Dec. 2022. May 20, 2024 ============================================================================================================================================================================================================================================================================================================================================================================ empty empty Dynamic manipulation of free-end cables has applications for cable management in homes, warehouses and manufacturing plants. We present a supervised learning approach for dynamic manipulation of free-end cables, focusing on the problem of getting the cable endpoint to a designated target position, which may lie outside the reachable workspace of the robot end effector. We present a simulator, tune it to closely match experiments with physical cables, and then collect training data for learning dynamic cable manipulation. We evaluate with 3 cables and a physical UR5 robot. Results over 32×5 trials on 3 cables suggest that a physical UR5 robot can attain a median error distance ranging from 22% to 35% of the cable length among cables, outperforming an analytic baseline by 21% and a Gaussian Process baseline by 7% with lower interquartile range (IQR). Supplementary material is available at <https://tinyurl.com/dyncable>. § INTRODUCTION Dynamic free-end cable manipulation is useful in a wide variety of cable management settings in homes and warehouses. For example, in a decluttering setting with cables, humans may dynamically manipulate a cable to a distant location to efficiently move it out of the way, cast it towards a human partner, or hang it over a hook or beam. Dynamic cable manipulation may also be advantageous when a cable must be passed through an obstacle to be retrieved on another side, requiring precision in the location of the cable endpoint. Casting a free-end cable radially outward from a base and subsequently pulling the cable toward the base — which we refer to as “polar casting” and test in Section <ref> — may be limited in free-end reachability, motivating the use of high-speed, dynamic motions. We use the generic term cable to refer to a 1D deformable object with minimal stiffness, which includes ropes, wires, and threads. In this paper, we present a self-supervised learning procedure that, given the target location of a cable endpoint, generates a dynamic manipulation trajectory. Targets (r, θ) are defined in polar coordinates with radius r and angle θ with respect to the robot base. We focus on planar dynamic cable manipulation tasks in which the cable lies on a surface and (after a reset action) a robot must dynamically adjust the cable via planar motions. See Figure <ref> for an overview. In prior work <cit.>, we explored dynamic manipulation of fixed-end cables. In this work, we consider free-end cables, where the far end is unconstrained. We develop a simulation environment for learning dynamic manipulation of free-end cables in PyBullet <cit.>. 
We efficiently tune the simulator with differential tuning <cit.> and train policies in simulation to get cable endpoints to reach desired targets. We perform physical experiments evaluating dynamic cable manipulation policies on a UR5 robot. The contributions of this paper are: * Formulation of the dynamic planar manipulation of free-end cables problem. * A dynamic simulator in PyBullet that can be tuned to accurately model dynamic planar cable manipulation. * A dataset of 108,000 simulated trials and 1,120 physical trials with 3 cables and the UR5 robot. * Physical experiments with a UR5 robot and 3 cables suggesting the presented method can achieve median error distance less than 35% of the cable lengths. § RELATED WORK §.§ 1D Deformable Object Manipulation Manipulation of 1D deformable objects such as cables has a long history in robotics. Some representative applications include surgical suturing <cit.>, knot-tying <cit.>, untangling ropes <cit.>, deforming wires <cit.>, and inserting cables into obstacles <cit.>. We refer the reader to Sanchez et al. <cit.> for a survey. There has been a recent surge in using learning-based techniques for manipulating cables. A common setting is to employ pick-and-place actions with quasistatic dynamics so that the robot deforms the cable through a series of small adjustments, while allowing the cable to settle between actions. Under this paradigm, prior work has focused on cable manipulation using techniques such as learning from demonstrations <cit.> and self-supervised learning <cit.>. Prior work has also explored the use of quasistatic simulators <cit.> to train cable manipulation policies using reinforcement learning for eventual transfer to physical robots <cit.>. In this work, we develop a fully dynamic simulator for dynamic cable manipulation to accelerate data collection for training policies, with transfer to a physical UR5 robot. Other cable manipulation settings that do not necessarily assume quasistatic dynamics include sliding cables <cit.> or using tactile feedback <cit.> to inform policies. This paper focuses on dynamic planar actions to manipulate cables from one configuration to another. §.§ Dynamic Manipulation of Cables In dynamic manipulation settings, a robot takes actions rapidly to quickly move objects towards desired configurations, as exemplified by applications such as swinging items <cit.> or tossing objects to target bins <cit.>. As with robot manipulation, these approaches tend to focus on dynamic manipulation of rigid objects. A number of analytic physics models have been developed to describe the dynamics of moving cables, which can then be used for subsequent planning. For example, <cit.> present a two-dimensional dynamic casting model of fly fishing. They model the fly line as a long elastica and the fly rod as a flexible Euler-Bernoulli beam, and propose a system of differential equations to predict the movement of the fly line in space and time. In contrast to continuum models, <cit.> propose using a finite-element model to represent the fly line by a series of rigid cylinders that are connected by massless hinges. These works focus on developing mathematical models for cables, and do not test on physical robots. In the context of robotic dynamic-cable manipulation, Yamakawa et al. <cit.> demonstrate that a high-speed manipulator can effectively tie knots by dynamic snapping motions. 
They simplify the modelling of cable deformation by assuming each cable component follows the robot end-effector motion with constant time-delay. Kim and Shell <cit.> study a novel robot system with a flexible rope-like structure attached as a tail that can be used to strike objects of interest at high speed. This high-speed setting allows them to construct primitive motions that exploit the dynamics of the tail. They adopt the Rapidly-exploring Random Tree (RRT) <cit.> and a particle-based representation to address the uncertainty of the object’s state transition. In contrast to these works, we aim to control cables for tasks in which we may be unable to rely on these simplifying assumptions. Two closely related prior works are Zhang et al. <cit.> and Zimmermann et al. <cit.>. Zhang et al. <cit.> propose a self-supervised learning technique for dynamic manipulation of fixed-end cables for vaulting, knocking, and weaving tasks. They parameterize actions by computing a motion using a quadratic program and learning the apex point of the trajectory. In contrast to Zhang et al., we use free-end cables. Zimmermann et al. <cit.> also study the dynamic manipulation of deformable objects in the free-end setting. They model elastic objects using the finite-element method <cit.> and use optimal control techniques for trajectory optimization. They study the task of whipping, which aims to find a trajectory such that the free end of the cable hits a predefined target with maximum speed, and laying a cloth on the table. Their simulated motions perform well for simple dynamical systems such as a pendulum, while performance decreases for more complex systems like soft bodies due to the mismatch between simulation and the real world. We develop a learned, data-driven approach for robotic manipulation of free-end cables, focus on planar manipulation, and are interested in finding the best action such that the free end of the cable achieves a target position on the plane. § PROBLEM STATEMENT We consider a 6-DOF robot arm which controls a free-end cable with a small weight at the free end. The robot continually grips one end of the cable throughout each dynamic motion. We assume the cable endpoint lies on the plane and the robot end-effector takes actions at a fixed height slightly above the plane. The full configuration of the cable and the environment is infinite dimensional and complex to model, and we are interested in the cable endpoint. Thus, we concisely represent the state as =(r, θ), indicating the endpoint position on the plane in polar coordinates, with the robot base located at (0, 0). The objective is to produce a policy π that, when given the desired target ^(d)=(r_d, θ_d) for the endpoint, outputs a dynamic arm action =π(^(d)) that robustly achieves ^(d). For notational convenience, we define g_ p2c : ℝ^2 →ℝ^2 to be a function which converts a 2D point represented in polar coordinates to its 2D Cartesian coordinates. We also restrict targets (r_d, θ_d) such that they are reachable, that is, r_d ≤ r_ max + r_c, where r_ max is the maximal distance from the base to the robot wrist, and r_c is cable length. § LEARNING FRAMEWORK Variations in cable properties such as mass, stiffness, and friction, mean that a robot may require different trajectories to get endpoints to reach the same target. Directly estimating cable configurations after a motion is challenging due to complex dynamics and the difficulty of state estimation. 
In this work, we use the following pipeline for manipulating cable endpoints towards desired targets, summarized in Figure <ref>. To reduce uncertainty in state estimation, we start each motion from a consistent initial state. We do this with a reset procedure with high repeatabilty (Section <ref>). Next, for a given cable, we collect video observations of cable motions that start from the reset state (Section <ref>). We use this dataset to automate the tuning of a cable simulator to match the behavior of a physical cable (Section <ref>). From simulated data, we then train a neural network model that maps trajectories to predicted endpoints that the policy π plans through to output a trajectory that achieves the desired target (Section <ref>). §.§ Manual Reset and Repeat to Measure Repeatability A prerequisite for this problem is that actions corresponding to state transitions must be repeatable. We design and parameterize trajectories with the goal of maximizing repeatability, which we measure by executing the same multiple times, and recording the variance in the final endpoint location. To get the cable into a reset state, the robot performs a sequence of trajectories that reliably places the cable into a consistent state. We find an open-loop sequence that includes lifting the cable off a surface and letting it hang until it comes to rest. This may be part of a reliable reset method, as it undoes torsion consistently with sufficient time. For further details, see Section <ref> and Figure <ref>. §.§ Dataset Generation To facilitate training, we take inspiration from Hindsight Experience Replay (HER) <cit.> by executing an action , recording the resulting endpoint location (r_d, θ_d), and assuming that it was the intended target. We store each transition (, ) in the dataset. We create three datasets for the cable: 𝒟_ tune, real trajectories for tuning the simulator (Section <ref>), 𝒟_ sim, trajectories executed in the tuned simulator to train a forward dynamics model (Section <ref>), and 𝒟_ real, real trajectories used for fine-tuning the model trained on simulation examples. For notational convenience, we define a model f trained on 𝒟_ sim + 𝒟_ real as a model pre-trained on 𝒟_ sim and later fine-tuned with 𝒟_ real. As discussed in Section <ref>, 𝒟_ tune and 𝒟_ real are both from real trajectories, but may have been generated from different system parameters. §.§ Cable Simulator We create a dynamic simulation environment to efficiently collect data for policy training. Using the PyBullet <cit.> physics engine, we model cables as a string of capsule-shaped rigid bodies held together by 6-DOF spring constraints, as the adjustable angular stiffness values of these constraints can model twist and bend stiffness of cables. The environment also includes a flat plane as the worksurface, and a robot with the same IK as the physical robot. We tune 10 cable and worksurface parameters to best capture cable physics: twist stiffness, bend stiffness, mass, lateral friction, spinning friction, rolling friction, endpoint mass, linear damping, angular damping, and worksurface friction, and tune the parameters (Section <ref>). §.§ Training the Trajectory Generation Model We generate trajectories via a learned forward dynamics model. The policy π first samples multiple input actions to form a set 𝒜 of candidate actions. It then passes each action through the forward dynamics model f_ forw, which outputs the predicted endpoint location (x̂, ŷ) in Cartesian space. 
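Before continuing with the trajectory generation model, a minimal sketch of how such a chain-of-capsules cable could be assembled in PyBullet is given below. It is illustrative only: the simulator described above couples segments with 6-DOF spring constraints and tuned twist/bend stiffness, whereas this sketch links the capsules with simple point-to-point constraints, and all geometry and mass values are placeholder assumptions rather than the tuned parameters.

import pybullet as p

def build_cable(num_segments=40, seg_len=0.02, seg_radius=0.004,
                seg_mass=0.002, tip_mass=0.02, base_pos=(0.0, 0.0, 0.9)):
    # Approximate a cable as a chain of capsule rigid bodies hanging from base_pos.
    # The real simulator uses 6-DOF spring constraints; the point-to-point
    # constraints below are a simplification for illustration.
    col = p.createCollisionShape(p.GEOM_CAPSULE, radius=seg_radius, height=seg_len)
    bodies = []
    for i in range(num_segments):
        mass = tip_mass if i == num_segments - 1 else seg_mass  # weighted free end
        pos = (base_pos[0], base_pos[1], base_pos[2] - i * seg_len)
        bodies.append(p.createMultiBody(baseMass=mass,
                                        baseCollisionShapeIndex=col,
                                        basePosition=pos))
    # Pin the gripped end to the world as a stand-in for the robot gripper.
    p.createConstraint(bodies[0], -1, -1, -1, p.JOINT_FIXED,
                       [0, 0, 0], [0, 0, 0], list(base_pos))
    for a, b in zip(bodies[:-1], bodies[1:]):
        # Join the bottom of each capsule to the top of the next one.
        p.createConstraint(a, -1, b, -1, p.JOINT_POINT2POINT,
                           [0, 0, 0], [0, 0, -seg_len / 2], [0, 0, seg_len / 2])
    return bodies

if __name__ == "__main__":
    p.connect(p.DIRECT)
    p.setGravity(0, 0, -9.81)
    cable = build_cable()
    for _ in range(240):
        p.stepSimulation()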
Given a target endpoint (in polar coordinates), the action = π() selected is the one minimizing the Euclidean distance between the predicted and target endpoints: π() = _∈𝒜 f_ forw() - g_ p2c() _2 where both f_ forw() and g_ p2c() are expressed in Cartesian (not polar) coordinates. We generate a dataset 𝒟_ sim for the cable via grid-sampled actions using the cable simulator with tuned parameters to match real, and then use it to train f_ forw. § EVALUATION We instantiate and evaluate the presented method from Section <ref> on a physical UR5 robot (Figure <ref>) with three cables of different properties (Figure <ref>). We use a table that is 2.75 wide and extends 1.50 from the base of the robot. Each cable has a weighted attachment, either a hook or a magnet to resemble a home or industrial setting in which the passing of the cable sets up a downstream task such as plugging into a mating part. The table is covered with foam boards for protection and to create a consistent friction coefficient. To facilitate data collection, we attach a green markers to the cables. We record observations from an overhead Logitech Brio 4K webcam recording 1920×1080 video at 60 frames per second. §.§ Reset We define the following reset procedure: * Lift the cable vertically up with the free-end touching the worksurface to prevent the cable from dangling. * Continue to lift the cable up such that the free-end loses contact with the worksurface. * Let the cable hang for 3 seconds to stabilize. * Perform polar casting to swing the free-end forward to land at (r_d, θ_d=0), then slowly pull the cable towards the robot until the reset position is reached. Figure <ref> shows a frame-by-frame overview of the reset procedure, which the UR5 robot executes before each dynamic planar cable manipulation action (see Section <ref>) to ensure the same initial cable configuration. §.§ Planar Action Variables For dynamic manipulation, the robot must take an action that induces the cable to swing to reach the target. We desire several action properties: (1) it should be a quick non-stop and smooth motion, (2) it should require a low-dimensional input that facilitates data generation and training, (3) it has a subset of parameters that can be fixed yet still allow the varying parameters to generate motions that will carry out the task, and (4) it should be interpretable. We hand craft a function to have these properties based on observing a human attempting the same task, and observe that most motions arc one direction, then arc back, before coming to a stop. To design the trajectory function, we define a coordinate frame and set of coordinates that the trajectory function uses to define an action (Figure <ref>). An offset r_0 from the reset position defines the origin of a polar coordinate frame. In this coordinate frame, we define 3 polar coordinates that define the motion. The motion starts at (r_0, θ_0) then interpolates through (r_1, θ_1) and comes to a rest at (r_2, θ_2). We use an inverse kinematic (IK) solver to compute the robot configuration through the motion. We observe that when polar coordinate origin aligns with the base of the robot, the angular coefficient corresponds 1:1 with the base rotation angle ϕ_1, and the IK solution for that joint is trivial. To increase the sweeping range of the robot, we also include a wrist joint rotation ϕ_3 in the trajectory function. The handcrafted trajectory function parameterizes each action by a fixed set of system parameters and a variable set of action variables. 
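Returning to the action-selection rule defined in the equation above, a minimal sketch of the candidate-sampling policy is shown next. It assumes a trained forward model f_forw that maps an action vector to a predicted Cartesian endpoint; the candidate bounds and grid resolution are illustrative assumptions, not the values used in the experiments.

import numpy as np

def g_p2c(r, theta):
    # Polar (r, theta) -> Cartesian (x, y).
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def grid_candidates(n=10):
    # Illustrative grid over (theta_1, theta_2, r_2, psi); bounds are assumptions.
    th1 = np.linspace(-1.5, 0.0, n)
    th2 = np.linspace(0.0, 1.5, n)
    r2 = np.linspace(0.3, 0.9, n)
    psi = np.linspace(0.0, 1.5, n)
    grid = np.stack(np.meshgrid(th1, th2, r2, psi, indexing="ij"), axis=-1)
    return grid.reshape(-1, 4)

def select_action(f_forw, target_polar, candidates):
    # Pick the candidate whose predicted endpoint is closest to the target.
    target_xy = g_p2c(*target_polar)
    preds = np.stack([f_forw(a) for a in candidates])   # (N, 2) predicted (x, y)
    dists = np.linalg.norm(preds - target_xy, axis=1)
    return candidates[np.argmin(dists)]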
The system parameters are: the origin offset r_0 and the maximum linear velocity . Increasing r_0 creates a wider circle, thus flattening the arc. In Figure <ref>, the smaller value of r_0 results in the blue arc, while the larger value of r_0 results in the flatter green arc. Once we determine the system parameters (Section <ref>), they remain fixed, and only the action variables vary during training. We consider two sets of action variables, A_1 and A_2. The set A_1 consists of three values: = (θ_1, θ_2, r_2), while the set A_2 consists of four: = (θ_1, θ_2, r_2, ψ). The first three parameters correspond to coefficients of polar coordinates that define the motion (Fig. <ref>). The fourth is the wrist joint's rotation about the z-axis. We convert these coordinates to a trajectory in polar coordinates by using a cubic spline to smoothly interpolate the radial coefficient from r_0 to r_2, and by using a maximum-velocity spline to interpolate the angular coefficient from θ_0 to θ_1 and from θ_1 to θ_2, with maximum velocity . We implement the maximum-velocity spline with a jerk-limited bang-bang control, having observed that the UR5 has difficulty following trajectories with high jerk <cit.>. We assume a direction change between the two angular splines, and thus the angular velocity and acceleration at θ_1 is 0. Once we have the polar coordinate trajectory of the end effector, we convert it to a trajectory in joint space using an IK solver by discretizing the trajectory at the time interval of the robot's control cycle. We observe the coverage space for A_1 is limited in practice (Figure <ref>), and thus consider the action set A_2. When using A_2, the wrist rotates ψ - θ_2 radians with a maximum velocity spline rotation. In an ablation study in Section <ref>, we examine the impact of the wrist angle parameter. This parameterization allows us to take advantage of the symmetrical aspect of the problem. For all datasets, we sample actions such that θ_1 < 0, θ_2 > 0, and ψ≥θ_2, to obtain targets on the right of the workspace axis of symmetry. If (θ_1, θ_2, r_2, ψ) produces target (r_d, θ_d), (-θ_1, -θ_2, r_2, -ψ) will produce (r_d, -θ_d). Therefore, during evaluation, for targets (r_d, θ_d) on the left of the axis, we take π((r_d, -θ_d)) = (θ̂_1, θ̂_2, r̂_2, ψ̂) and output (-θ̂_1, -θ̂_2, r̂_2, -ψ̂). §.§ Self-Supervised Physical Data Collection To create 𝒟_ tune, we generate and execute physical trajectories (θ_1, θ_2, r_2, ψ), uniformly sampling each parameter such that the trajectory does not violate joint limits. The execution for each trajectory is about 20, including 15 for the reset motion and 5 for the planar motion, thus motivating the need for more efficient self-supervised data generation in simulation. Videos for each planar motion are recorded. We collect physical trajectories for tuning the simulator and fine tuning the model. §.§ Simulator Tuning Instead of using grid search or random search, we search for parameters using Differential Evolution (DE) <cit.>, optimizing for a similarity metric. DE does not require a differentiable optimization metric, and scales reasonably with a large number of parameters compared to grid search and Bayesian optimization <cit.>. We denote the collective set of PyBullet simulation parameters as θ_ sim. For each trajectory m_j in the training set 𝒟_ tune of M total physical trajectories, we record the 2D location of the cable endpoint, p_t = g_ p2c(_ real) = (x_t, y_t), at each time step t spaced 100 apart (see Figure <ref>(d)). 
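Before turning to the tuning objective, the two pieces of the action parameterization described above that are simplest to make concrete are the radial spline and the left/right mirroring. The sketch below covers only these; the maximum-velocity angular spline, jerk limiting and the IK step are omitted, and the duration and control rate are assumed values.

import numpy as np
from scipy.interpolate import CubicSpline

def radial_profile(r0, r2, duration=1.0, hz=125):
    # Cubic-spline interpolation of the radial coefficient from r0 to r2
    # with zero velocity at both ends (duration and rate are assumptions).
    cs = CubicSpline([0.0, duration], [r0, r2], bc_type="clamped")
    ts = np.linspace(0.0, duration, int(duration * hz))
    return ts, cs(ts)

def mirror_action(action):
    # Exploit workspace symmetry: if (th1, th2, r2, psi) reaches (r_d, theta_d),
    # the mirrored action reaches (r_d, -theta_d).
    th1, th2, r2, psi = action
    return (-th1, -th2, r2, -psi)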
The tuning objective is to find parameters θ_ sim that minimize the average L^2 distance between the cable endpoint in simulation, p̂_t = g_ p2c(_ sim) = (x̂_t, ŷ_t), to the real waypoint at each t: _θ_ sim ϵ_ trajs = 1/M∑_j^M1/N_j∑_i^N_j‖ p_i - p̂_i ‖_2. The number of waypoints compared for each trajectory is N_j = ⌊T_j/100⌋, where T_j is the duration of m_j in milliseconds. For each cable, we collect 60 physical trajectories and evaluate on an additional 20 trajectories where (θ_1, θ_2, r_2, ψ), for each trajectory are uniformly sampled. Increasing the number of trajectories to 80 leads to an increase in ϵ_ trajs by 2% - 12%, suggesting that additional trajectories would not improve tuning. Table <ref> compares similarity between cable behavior in simulation and real. For each cable, DE was able to reduce the discrepancy in the endpoint location throughout and after the trajectory, compared to PyBullet's default parameter settings. While 50% of trajectories have final L^2-distance less than 15%, evaluation results suggest a reality gap still remains for certain trajectories, motivating the use of fine-tuning as discussed in Section <ref>. §.§ System Parameter Selection To find the optimal (r_0, ) combination that maximizes endpoint coverage and repeatability, we search in simulation and real. For all grid samples, we filter actions that exceed joint or velocity limits. We use r_0 ∈{0.6, 1.0, 2.0 } and ∈{1.2,1.5,1.8} for 3×3=9 combinations. For each r_0 and pair, we generate actions =(θ_1,θ_2,r_2) via grid sampling. We generate 15×15×15 actions with 15 values for each parameter and execute them in simulation. We evaluate the endpoint coverage for each (r_0, ) combination by aggregating all resulting endpoints from these actions. Fig. <ref> compares endpoint coverage among different (r_0, ) combinations, overlaid on top of a semicircle with radius r_ max + r_c, representing the maximum possible coverage. Higher velocity actions tend to give higher coverage, but also higher uncertainty, resulting in less repeatability. To evaluate the repeatability, we execute each sampled action 5 times in the physical environment and calculate the standard deviation of the distance between the cable endpoint position to the mean position. For trials where the cable endpoint leaves the table (i.e., “off-table”), we remove these trials from the standard deviation calculation. To evaluate repeatability for a given r_0 and , we average the standard deviations across actions generated via grid sampling. We seek an r_0 and combination that maintains repeatability with high coverage. With the r_0 that maximizes coverage, we evaluate the effect of the additional wrist rotation parameter ψ on coverage and repeatability. For ∈{1.2,1.5,1.8}, we generate 15×15×15 actions for a=(θ_1,θ_2,r_2) and 10×10×10×5 actions for a=(θ_1,θ_2,r_2, ψ) and execute them in simulation to evaluate coverage. In real, we generate and execute 5×5×5 actions for a=(θ_1,θ_2,r_2) with ∈{1.2,1.5} and 3×3×3×3 actions for a=(θ_1,θ_2,r_2, ψ) with ∈{1.5,1.8} to evaluate repeatability. §.§ Training Data Collection and Model Training After selecting the most ideal system parameters for coverage and repeatability, we use actions generated with these fixed values to learn a forward dynamics model as in Section <ref>. To generate training data 𝒟_ sim, we grid sample θ_1, θ_2, r_2 and ψ, and generate 36,000 (, ) simulated transitions to train f_ forw to predict given . 
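The tuning objective above can be optimized with scipy's differential_evolution, as sketched below. The rollout_sim helper (returning the simulated endpoint track for one action under candidate parameters) and the parameter bounds are assumptions standing in for the PyBullet rollout and the ten tuned simulator parameters.

import numpy as np
from scipy.optimize import differential_evolution

def traj_error(theta_sim, actions, real_tracks, rollout_sim):
    # Average L2 distance between simulated and real endpoint waypoints,
    # i.e. the epsilon_trajs objective above.
    errs = []
    for a, real in zip(actions, real_tracks):
        sim = rollout_sim(theta_sim, a)          # (N_j, 2) simulated waypoints
        n = min(len(sim), len(real))
        errs.append(np.linalg.norm(np.asarray(sim[:n]) - np.asarray(real[:n]), axis=1).mean())
    return float(np.mean(errs))

# Placeholder bounds for the ten simulator parameters (stiffness, friction,
# masses, damping, ...); the tuned values are found by the optimizer.
BOUNDS = [(0.0, 1.0)] * 10

# Example call (requires actions, real_tracks and rollout_sim to be defined):
# result = differential_evolution(traj_error, BOUNDS,
#                                 args=(actions, real_tracks, rollout_sim),
#                                 maxiter=50, popsize=15, seed=0)
# theta_star = result.x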
We additionally execute 200 grid-sampled actions and record the transitions in the physical setup to generate 𝒟_ real, which is used for fine-tuning the dynamics model with real data. We use a feedforward neural network for f_ forw, with 4 fully connected layers with 256 hidden units each. The model is trained to minimize the L^2 error on data from 𝒟_ sim: ∑_(, ) ∈𝒟_ sim‖ f_ forw() - g_ p2c() ‖_2. During evaluation, we sample 50,000 candidate actions using a grid sample on the joint-limit-abiding action variables, and select the action that minimizes the L^2 distance between the predicted and desired endpoints (see Equation <ref>). § RESULTS We extract the endpoint position of the cable from an overhead image after each executed action. For off-table cases and occluded case where the endpoint is occluded by the robot arm, we exclude them from the error analysis. We'll address the avoidance of these cases in the future work. §.§ Methods We benchmark performance of the forward dynamics model trained on (1) 𝒟_ real only, (2), 𝒟_ sim only, and (3) 𝒟_ sim first, then fine-tuned on 𝒟_ real. See Section <ref> for dataset details. Along with these three forward models, we test with two baseline methods, one analytic and one learned: Polar Casting. Given a target cable endpoint location (r_d, θ_d), first perform a casting motion that causes the free endpoint to land at (r_ max + r_c, θ). To perform this motion, the robot rotates θ_d radians, and executes a predefined casting motion. After the casting motion, the robot slowly pulls the cable toward the base for a distance of r_ max + r_c - r_d. Targets (r_d, θ_d) achievable by this method are limited to the circular segment seen in Figure <ref>. In Cartesian coordinates, the arm is limited to a minimum y coordinate of 0.466+r_c to prevent the end effector from hitting the robot's supporting table, where 0.466 is the distance from the center of the robot base to the edge of the table. Forward Dynamics Model learned with Gaussian Process. As a learning baseline, we use Gaussian Process regression, which is commonly used in motion planning and machine learning <cit.>. Using 𝒟_ sim, we train a Gaussian Process regressor to predict the cable endpoint location (x̂, ŷ) given input action . §.§ System Ablation Comparison We compare the performance of models across (r_0, ) values and perform an ablation for including ψ (Equation <ref>). §.§.§ Coverage From simulation, we find that r_0 = 0.6 and = 1.8 provide the maximal coverage of 66.3% of the reachable workspace. Comparing with other r_0 values with the same , r_0=0.6 tends to result in the maximum coverage, which is visualized in the top of Figure <ref>. We thus choose r_0=0.6. The bottom of Figure <ref> compares the coverage with and without ψ using r_0 = 0.6. The graph shows adding ψ notably increases coverage, confirming the hypothesis in Section <ref>. §.§.§ Repeatability The physical repeatability evaluation results are shown in Figure <ref>. We observe that = 1.2 produces the most repeatable actions with an average standard deviation of 1.27% (of cable length) across actions compared to 3.70% for = 1.8, but offers minimal coverage 21.3% of the reachable area in simulation. While =1.8 with ψ offers the most coverage 79.7%, it has the least repeatability with an average standard deviation of 4.72%. To balance coverage and repeatability, we select =1.5 with ψ, which has a coverage of 76.9% and an average standard deviation of 1.93%. 
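Stepping back to the learned forward dynamics model described in the training subsection above, a minimal PyTorch sketch is given below. The stated architecture (four fully connected layers with 256 hidden units) and the L2 endpoint loss follow the text; the optimizer, learning rate, batch size and epoch count are assumptions. Fine-tuning on the physical transitions amounts to continuing the same loop on the real dataset, possibly with a reduced learning rate.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class ForwardDynamics(nn.Module):
    # Maps an action (theta_1, theta_2, r_2, psi) to a predicted endpoint (x, y).
    def __init__(self, action_dim=4, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))

    def forward(self, a):
        return self.net(a)

def train(model, actions, endpoints, epochs=100, lr=1e-3, batch=256):
    # actions: (N, 4) float tensor; endpoints: (N, 2) Cartesian targets from D_sim (+ D_real).
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loader = DataLoader(TensorDataset(actions, endpoints), batch_size=batch, shuffle=True)
    for _ in range(epochs):
        for a, y in loader:
            loss = torch.norm(model(a) - y, dim=1).mean()   # L2 endpoint error
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model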
Compared to trials using =1.8 with ψ, trials using = 1.5 with ψ also have 17.1% fewer off-table cases. §.§ Physical Evaluation Results We evaluate 3 dynamics models on Cable 1, which are trained on only 𝒟_ real, only 𝒟_ sim, and 𝒟_ sim with fine-tuning on 𝒟_ real (i.e., 𝒟_ sim + 𝒟_ real). We find that training on 𝒟_ sim and fine-tuning on 𝒟_ real yields the best performance. Table <ref> summarizes physical results. The best model produces actions that have a median error distance of 22% from the target. Compared to the baselines, the policy π (Equation <ref>) using the dynamics model has a lower median error distance to the target and lower IQR. Although polar casting is accurate for targets within its reachability, the limited coverage greatly increases its median distance. Training f_ forw on only 𝒟_ real has a much higher median endpoint deviation distance, suggesting that it overfits to 𝒟_ real due to its small size, motivating the need to pre-train f_ forw with 𝒟_ sim. The performance improvement after fine-tuning on 𝒟_ real suggests real examples are useful for closing the sim-to-real gap. We evaluate the performance for all 32 target positions for all cables using f_ forw trained with 𝒟_ sim + 𝒟_ real, and repeat each output action 5 times per target. The 95% confidence ellipses in Figure <ref> show the underlying dynamic noise of the system and the distance between the target and the center of the ellipse shown by the dashed line represents the model error. Results are summarized in Table <ref>. We attribute the error to two sources: simulator error from examples in 𝒟_ sim that do not represent real cable behavior, and learning error from the forward dynamic model. Table <ref> shows the simulator tuning error, but while Cable 3 exhibits the smallest sim-to-real gap, its physical performance is the worst among all 3. We conjecture that the circular magnet at the end of Cable 3 increases stochasticity. In experiments, the magnet frequently rotates after the rest of the cable stabilizes, moving the endpoint further from the target. In future work, we will investigate whether using a different optimization strategy for tuning will mitigate this problem. While we model the extra mass at the endpoint, we may consider modeling irregular shapes to reduce the sim-to-real gap. § CONCLUSION AND FUTURE WORK In this paper, we present a self-supervised learning procedure for learning dynamic planar manipulation of 3 free-end cables. Experiments suggest that tuning a simulator to match the physics of real cables, then training models from data generated in simulation, results in promising physical manipulation to get cable endpoints towards desired targets. The self-supervised fine-tuning process relies on executing hundreds of trajectories in the physical setup, which may be infeasible in other dynamic manipulation domains. Hence, we will explore dynamics randomization <cit.>. The presented method also relies on learning a separate set of simulation parameters for each cable, motivating meta-learning <cit.> approaches to rapidly adapt to other cables. § ACKNOWLEDGMENTS This research was performed at the AUTOLAB at UC Berkeley in affiliation with the Berkeley AI Research (BAIR) Lab, the CITRIS “People and Robots” (CPAR) Initiative, and the Real-Time Intelligent Secure Execution (RISE) Lab. The authors were supported in part by donations from Toyota Research Institute and by equipment grants from PhotoNeo, NVidia, and Intuitive Surgical. 
Daniel Seita is supported by a Graduate Fellowship for STEM Diversity (GFSD). We thank our colleagues Ashwin Balakrishna, Ellen Novoseller, and Brijen Thananjeyan.
http://arxiv.org/abs/2405.09912v1
20240516090209
Driver at 10 MJ and 1 shot per 30 minutes for inertial confinement fusion at high gain: efficient, compact, low laser-plasma instabilities, multi-color, low-cost, applicable to multiple fusion schemes
[ "Zhan Sui", "Ke Lan" ]
physics.plasm-ph
[ "physics.plasm-ph", "hep-ex" ]
GBgbsn lqling@vip.163.com Shanghai Institute of Laser Plasma, China Academy of Engineering Physics, Shanghai 201800, China Institute of Applied Physics and Computational Mathematics, Beijing 100094, China The ignition at the National Ignition Facility (NIF) set off a global wave of research on the inertial fusion energy (IFE). However, IFE requires a necessary target gain G of 30-100, while it is hard to achieve the fusions at such high gain with the energy, configuration, and technical route of the NIF. We will present a conceptual design for the next generation laser driver of 10 MJ, 2 ∼ 3 PW at 3ω (or 2ω, then the energy and power can be higher), and 1 shot/30min, which is efficient, compact, low-cost, low laser-plasma instabilities, applicable to multiple laser fusion schemes, and aiming for G > 30. Driver at 10 MJ and 1 shot per 30 minutes for inertial confinement fusion at high gain: efficient, compact, low laser-plasma instabilities, multi-color, low-cost, applicable to multiple laser fusion schemes Ke Lan () May 20, 2024 ============================================================================================================================================================================================================== § INTRODUCTION Laser driven inertial fusion is a highly promising approach toward fusion energy, which has been a quest of human beings for more than a half century. The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory achieved target gain G > 1 in the end of 2022 <cit.> and set a new fusion yield record of 5.2 MJ in February, 2024 <cit.>, which successfully demonstrated the feasibility of laboratory laser fusion. However, it still far below the target gain G of 30 to 100 required by the inertial fusion energy (IFE) <cit.>. In fact, limited by its configuration <cit.>, energy <cit.>, and the technical route identified in the 1980s <cit.>, it is hard for the NIF to reach such high gain. Hence, a driver with novel laser technologies is mandatory in the path toward IFE. In this paper, we will give a conceptual design for an efficient and low-cost laser driver of 1 shot/30min, aiming for G > 30. A schematic is shown in Fig. <ref>, and the main technical specifications are shown in Table 1. In the following, we will address the novel technologies to improve the energy storage efficiency, extraction efficiency, flux, volume, etc. for this driver. § KEY TECHNOLOGIES 1. Multi-front-end technology Laser-plasma instabilities (LPIs) is one of the main obstacles of the NIF to achieve high gain. In our novel design, each laser beam has an independent front-end, i.e. one distributed feedback (DFB) oscillator per beam line. Then, the regulation freedom can be improved because each laser beam has no fixed phase relationship, and therefore, the incoherent superposition can be realized at the target point. This is conducive to the beam smoothing and can effectively suppress LPIs. Moreover, a super light spring of incoherence in all dimensions of time, space, and angle can be used to further suppress LPIs <cit.>. Thus, it is possible for the driver to work also at a frequency lower than 3ω, for a higher energy conversion efficiency and a higher damage threshold for the optic components in the final optic assembly. 2. Near-field spatial separation amplification of pre- and main pulse In the ignition target designs, the laser pulse consists of pre-pulse and main pulse, and the power of pre-pulse is usually much lower than that of the main pulse. 
This makes the conversion efficiency of the pre-pulse part very low during the frequency conversion process, leading to an overall frequency conversion efficiency of the ignition pulse about 30% lower than that of the square-wave pulse <cit.>. In our novel design, the pre- pulse and main pulse are spatially separated in the amplifier by adopting the spatiotemporal shaping. Furthermore, the pre-pulse corresponds to a smaller aperture, while the main pulse corresponds to the rest. This can be used to adjust the power densities of the pre-pulse and the main pulse to be equivalent, so as to improve the overall conversion efficiency. The superposition effect of pre-pulses and main pulses at the target point can be the same as that of the traditional amplification method. 3. High fluence amplification employed laser material with low emission cross section, long fluorescence lifetime, and high energy storage Most of existing laser drivers use neodymium glass as the gain medium, which has a large radiative transition probability and a high gain due to its short fluorescence lifetime and large emission cross section, however, it has a relatively small energy storage. In our novel design, the driver uses the new sensitizing doped laser material, which has a fluorescence lifetime ( ∼650 μs), low emission cross section (∼ 2 × 10 ^-20 cm^2) and high stored energy (0.5 J/cm^3) as the main amplifier gain material. In addition, a high-flux and multi-pass amplification is used to effectively extract the stored energy. This type of amplifier is expected to achieve a high flux output of 30 J/cm^2. 4. Water-cooled xenon lamp with annular section and fluorescent conversion separator material As known, the operational efficiency of high-power laser devices is relatively low, only about 3 - 4 shots per day. In the novel design, we use a xenon lamp with an annular section as the pump, which fluorescence can be converted into an absorption band and absorbed by the laser medium by using a fluorescence conversion baffle material. Hence, the pumping light-laser conversion efficiency can be remarkably improved. Meanwhile, the xenon lamp is subjected to water cooling, so that the heat can be taken away in time and the laser medium can be prevented from heating up continuously. As a result, the emission interval can be effectively shortened, and a shooting capability of 30 minutes per emission can be expected. This technology can improve the radiation efficiency of the large diameter xenon lamp and the absorption efficiency of gain medium, leading to a much higher energy storage efficiency of disk amplifier and a much shorter emission interval. 5. Near-field multi-pass split-plate amplifier based on angle-sensitive film The energy extraction efficiency is crucial important for a driver. In order to improve the extraction efficiency, we design a near-field multi-pass amplifier. The amplifiers have a certain splitting angle, and the film layers of surfaces are designed to be angle-sensitive. By this way, the pulse has a high reflectivity on the surfaces in the first two paths of transmission, and the angle corresponding to the third path has a high transmissivity <cit.>. Thus, the actual transmission distance of the pulse passing through the amplification medium is three times the thickness of the medium. In other words, it is a three-pass amplification, in which the pulse has nearly equal fluence in one amplifier medium while the integral fluence is three times enhanced. 
This amplification mode has a simple system structure but a high extraction efficiency. 6. Two-pass amplification configuration combined with near-field three-pass amplification Combined with a near-field three-pass amplification, the main amplifier is equipped with a two-pass amplification configuration, and the amplification of 48 equivalent pieces can be realized by adopting 8 pieces of gain media. This system has a compact structure (with a length about 20 m), a large integral flux, a balanced flux of each piece of gain medium, and a high extraction efficiency, which can meet the image transmission of the system when combined with the traditional spatial filter. 7. Spatial filtering technology based on angular spectrum sensitive nonlinear crystal In current laser technology, it takes the low-gain high-fluency amplification. In this case, the B integral of the system is severe, and a spatial filtering is required to suppress the increase of B integral <cit.>. However, the traditional spatial filters generally have a long size and require a vacuum assembly, which cost is expensive. In our design, we use the nonlinear near-field filtering method in the designed single or cascaded nonlinear crystals, then the spatial high-frequency components of pulses can be converted to the second-harmonic wave and filtered in the following amplification processes. In addition, the low-frequency components of pulses have a low conversion efficiency, so the spatial filtering can be realized. Especially, the nonlinear spatial filtering can be conveniently inserted into the amplifier, resulting in a more compact structure of the laser driver with significant space and cost savings. 8. Beam combination system based on the non-collinear nonlinear frequency conversion As known, the beam number of laser driver is restricted because the solid angle of the target chamber is fixed. In our design, the two fundamental- frequency beams of light are combined into one beam at third-harmonic frequency, through non-collinear second-harmonic and sum-frequency conversion. Furthermore, the beam number can be doubled by using the non-collinear beam combination scheme, and the output pulse can be ensured to have sufficient brightness with a small F number under the condition that the load capacity of the device is constant. 9. New Measurement and Control Technology Compared with the current advanced control technologies, the existing control system used in the laser drivers lags far behind. In our design, it will employ the centralized control system based on global wireless Internet of Things (IOT), together with a high-resolution and high-stability time synchronization system based on high- speed communication technology, and a full-system control based on artificial intelligence (AI) and miniaturized measurement package. 10. General layout: semi-underground centered on the target chamber According to our above technical schemes, the space size of the driver can be greatly reduced, and also the underground layout becomes possible. The general layout is semi- underground centered on the target chamber, which has six laser shooting ends <cit.>, with 16 beamlets per end and 4 beams per beamlet, guided into the target chamber, as shown in Fig. <ref>. It can reduce the cost of construction and environmental control. 
§ SUMMARY We have presented a conceptual design for the next generation laser driver of 10 MJ, 2 ∼ 3 PW and 1 shot/30min, with the ideal laser arrangement of the octahedral spherical hohlraum approach, which is applicable to multiple schemes <cit.> for IFE route choice and feasibility demonstration. Under a 10 MJ drive, it can be expected to achieve G ∼ 30 by assuming a burn depletion <cit.> Φ ∼ 45% of a 2 mg DT fuel by indirect-drive, and G ∼ 100 by assuming Φ ∼ 30% of a 10 mg DT fuel by direct-drive. Here, we consider a lower Φ for direct-drive due to its worse asymmetry aroused by the laser imprinting and beam crossing. Detail designs of such targets are to be presented in our forthcoming publications. As in Table I, the novel driver has multi-wavelength (color), which means that the laser beams can simultaneously and independently work at either 2ω, 3ω or 4ω for various physics purposes <cit.>. This is convenient to be achieved via nonlinear frequency conversion process by removing or changing the crystals. Compared with the prior art, our novel laser driver design can remarkably improve the energy storage efficiency, extraction efficiency, flux, repetition, and meanwhile, greatly reduce its volume, metal components and cost. ACKNOWLEDGMENTS The authors appreciate Dr. Xiaohui Zhao of Shanghai Institute of Laser Plasma, Dr. Yongsheng Li, Dr. Hui Cao, Dr. Yaohua Chen, and Dr. Xiumei Qiao of Institute of Applied Physics and Computational Mathematics in Beijing for their beneficial helps. This work is supported by the National Natural Science Foundation of China (Grant No. 12035002). 00 Abu-Shawareb2024PRL H. Abu-Shawareb et al., “Achievement of Target Gain Larger than Unity in an Inertial Fusion Experiment”, Physical Review Letters 132, 065102 (2024). 5dot2MJyield https://www.energy.gov/cfo/articles/fy-2025-budget-justification MTV S. Atzeni and J. Meyer-ter-Vehn, The Physics of Inertial Fusion: Beam Plasma Interaction, Hydrodynamics, Dense Plasma Physics (Clarendon Press, Oxford, 2004); “Report of the Fusion Energy Sciences Workshop on Inertial Fusion Energy”, U. S. Department of Energy, (2023). Lan2022MRE K. Lan, “Dream fusion in octahedral spherical hohlraum”, Matter Radiat. Extremes 7, 055701 (2022). NNSA2015 National Nuclear Security Administration, “2015 Review of the Inertial Confinement Fusion and High Energy Density Science Portfolio: Volume I", DOE/NA-0040; Lawrence Livermore National Laboratory, “Lasers indirect drive input to NNSA 2020 report", LLNL-TR-810573. Haynam2007 C. A. Haynam, P J Wegner, J M Auerbach, et al. “National ignition facility laser performance status”, Applied Optics 46.16, 3276-3233 (2007). Manes2016 K.R.Manes et al., “Damage mechanisms avoided or managed for NIF large optics”, Fusion Science and Technology 69.1, 146-249 (2016). incoherent2023MRE Y. Guo et al., “Suppression of stimulated Raman scattering by angularly incoherent light, towards a laser system of incoherence in all dimensions of time, space, and angle”, Matter Radiat. Extremes 8, 035902 (2023). Spaeth2016 M. L. Spaeth et al., “National ignition facility laser system performance”, Fusion Science and Technology 69.1, 366-394 (2016). Lan2014POPK. Lan et al., “High flux symmetry of the spherical hohlraum with octahedral 6LEHs at the hohlraum to capsule radius ratio of 5.14”, Phys. Plasmas 21, 010704 (2014); “Octahedral spherical hohlraum and its laser arrangement for inertial fusion”, Phys. 
Plasmas 21, 052704 (2014); “Novel spherical hohlraum with cylindrical laser entrance holes and shields”, Phys. Plasmas 21, 090704 (2014). Craxton2020DPP S. Craxton, W. Y. Wang, and E. M. Campbell, “A new beam configuration to support both spherical hohlraums and symmetric direct drive”. The 62nd Annual Meeting of the American Physical Society Division of Plasma Physics, November 9-13, 2020 in U.S.A. Marangola2021 M. Marangola, “Optimization of Direct Drive Designs for a Proposed Dual Direct/Indirect Drive Laser”, In: LLE Summer High School Research Program (2021). URL: https://www.lle.rochester.edu/media/publications/ high_school_reports/documents/hs_reports/2021/ Marangola_Meghan.pdf. Lan2017POP K. Lan and P. Song, “Foam Au driven by 4ω-2ω ignition laser pulse for inertial confinement fusion”, Phys. Plasmas 24, 052707 (2017). CYH2018POP Y. Chen, K. Lan, W. Zheng, and E. M. Campbell, “High coupling efficiency of foam spherical hohlraum driven by 2ω laser light”, Phys. Plasmas 25, 022702 (2018).
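As a rough numerical check of the target-gain figures quoted in the Summary section of the entry above, the short calculation below assumes a specific DT fusion yield of about 337 MJ per mg of fuel (17.6 MeV per D-T reaction, roughly 1.2e20 reactions per mg). It is an order-of-magnitude illustration, not part of the original design study.

DT_YIELD_MJ_PER_MG = 337.0   # assumed round number for equimolar DT fuel

def target_gain(fuel_mg, burn_fraction, driver_MJ=10.0):
    # Fusion yield divided by driver energy for the quoted burn-up assumptions.
    return fuel_mg * burn_fraction * DT_YIELD_MJ_PER_MG / driver_MJ

print(target_gain(2.0, 0.45))    # indirect drive, 2 mg at 45% burn-up: gain ~ 30
print(target_gain(10.0, 0.30))   # direct drive, 10 mg at 30% burn-up: gain ~ 100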
http://arxiv.org/abs/2405.10082v1
20240516132536
An Integrated Framework for Multi-Granular Explanation of Video Summarization
[ "Konstantinos Tsigos", "Evlampios Apostolidis", "Vasileios Mezaris" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Continuous Transfer Learning for UAV Communication-aware Trajectory Design Chenrui Sun1, Gianluca Fontanesi2, Swarna Bindu Chetty1, Xuanyu Liang1, Berk Canberk3 and Hamed Ahmadi1 1School of Physics Engineering and Technology, University of York, United Kingdom 2Interdisciplinary Centre for Security, Reliability, and Trust (SnT), Luxembourg 3, Edinbrough Napier University, Edinbrough, United Kingdom Received April 30, 2024; accepted Month Date, Year ================================================================================================================================================================================================================================================================================================================================================== In this paper, we propose an integrated framework for multi-granular explanation of video summarization. This framework integrates methods for producing explanations both at the fragment level (indicating which video fragments influenced the most the decisions of the summarizer) and the more fine-grained visual object level (highlighting which visual objects were the most influential for the summarizer). To build this framework, we extend our previous work on this field, by investigating the use of a model-agnostic, perturbation-based approach for fragment-level explanation of the video summarization results, and introducing a new method that combines the results of video panoptic segmentation with an adaptation of a perturbation-based explanation approach to produce object-level explanations. The performance of the developed framework is evaluated using a state-of-the-art summarization method and two datasets for benchmarking video summarization. The findings of the conducted quantitative and qualitative evaluations demonstrate the ability of our framework to spot the most and least influential fragments and visual objects of the video for the summarizer, and to provide a comprehensive set of visual-based explanations about the output of the summarization process. § INTRODUCTION The current practice in the Media industry for producing a video summary requires a professional video editor to watch the entire content and decide about the parts of it that should be included in the summary. This is a laborious task and can be very time-consuming in the case of long videos or when different summaries of the same video should be prepared for distribution via multiple video sharing platforms (e.g., YouTube, Vimeo, TikTok) and social networks (e.g., Facebook, Twitter, Instagram) with different specifications about the optimal or maximum video duration <cit.>. Video summarization technologies aim to generate a short summary by selecting the most informative and important frames (key-frames) or fragments (key-fragments) of the full-length video, and presenting them in temporally-ordered fashion. The use of such technologies by Media organizations can drastically reduce the needed resources for video summarization in terms of both time and human effort, and facilitate indexing, browsing, retrieval and promotion of their media assets <cit.>. Despite the recent advances in the field of video summarization, which are tightly associated with the emergence of modern deep learning network architectures <cit.>, the outcome of a video summarization method still needs to be curated by a video editor, to make sure that all the necessary parts of the video were included in the summary. 
This content production step could be further facilitated if the video editor is provided with explanations about the suggestions made by the used video summarization technology. The provision of such explanations would allow a level of understanding about the functionality of this technology, thus increasing the editor's trust in it and facilitating content curation. Over the last years there is an increasing interest in explaining the outcomes of deep networks processing video data. Nevertheless, most works are related with network architectures for video classification <cit.>, action classification and reasoning <cit.>, activity recognition <cit.> and anomaly detection <cit.>. With respect to explainable video summarization, a first attempt to formulate the task and evaluate various attention-based explanation signals was initially reported in <cit.> and extended in <cit.>. Another approach that relies on the use of causality graphs between input data, output scores, summarization criteria and data perturbations, was presented in <cit.>. However, the produced graphs require interpretation by a human expert, while the performance of these explanations was not evaluated through quantitative or qualitative analysis. In this paper, we build on our previous efforts on explainable video summarization <cit.> and extend them, in order to: i) investigate the use of a model-agnostic approach (adaptation of the LIME method <cit.>) for fragment-level explanation of the video summarization results, ii) develop a new method for producing more fine-grained explanations at the visual object level that provide more insights about the focus of the summarizer, and iii) build an integrated framework for multi-granular (and thus more informative) explanation of the video summarization results. Our contributions are the following: * We adapt the model-agnostic LIME explanation method <cit.> to operate on sequences of video frames (rather than on a single frame/image, which is the typical approach) and produce a fragment-level explanation of the video summarization results, which indicates the temporal fragments of the video that influenced the most the decisions of the summarizer. * We combine the state-of-the-art Video K-Net method for video panoptic segmentation <cit.> with another adaptation of the LIME method <cit.> that also operates on frame sequences, to build a method that performs object-oriented perturbations over a sequence of frames and produces explanations at the level of visual objects. * We integrate the methods for fragment- and object-level explanation into a framework for multi-granular explanation of video summarization, and assess their performance based on quantitative and qualitative evaluations using a state-of-the-art method (CA-SUM <cit.>) and two datasets for video summarization (SumMe <cit.> and TVSum <cit.>). § RELATED WORK Over the last years there is a rapidly growing interest of researchers on building methods that provide explanations about the working mechanism or the decisions/predictions of neural networks. Nevertheless, in contrast to the notable progress in the fields of pattern recognition <cit.>, image classification <cit.>, and NLP <cit.>, currently there are only a few works on producing explanations for networks that process video data (listed in Table <ref>). Working with network architectures for video classification, Bargal et al. 
(2018) <cit.> visualized the spatio-temporal cues contributing to the network’s classification/captioning output using internal representations and employed these cues to localize video fragments corresponding to a specific action or phrase from the caption. Mänttäri et al. (2020) <cit.> utilized the concept of meaningful perturbation to spot the video fragment with the greatest impact on the video classification results. Li et al. (2021) <cit.> extended a generic perturbation-based explanation method for video classification networks by introducing a loss function that constraints the smoothness of explanations in both spatial and temporal dimensions. Focusing on methods for action classification and reasoning, Stergiou et al. (2019) <cit.> proposed the use of cylindrical heat-maps to visualize the focus of attention at a frame basis and form explanations of deep networks for action classification and recognition. Zhuo et al. (2019) <cit.> defined a spatio-temporal graph of semantic-level video states (representing associated objects, attributes and relationships) and applied state transition analysis for video action reasoning. Han et al. (2022) <cit.> presented a one-shot target-aware tracking strategy to estimate the relevance between objects across the temporal dimension and form a scene graph for each frame, and used the generated video graph (after applying a smoothing mechanism) for explainable action reasoning. Dealing with networks for video activity recognition, Aakur et al. (2018) <cit.> formulated connected structures of the detected visual concepts in the video (e.g., objects and actions) and utilized these structures to produce semantically coherent and explainable representations for video activity interpretation, while Roy et al. (2019) <cit.> fed the output of a model for activity recognition to a tractable interpretable probabilistic graphical model and performed joint learning over the two. In the field of video anomaly detection, Wu et al. (2022) <cit.> extracted high-level concept and context features for training a denoising autoencoder that was used for explaining the output of anomaly detection in surveillance videos. Guo et al. (2022) <cit.> constructed a sequence-to-sequence model (based on a variational autoencoder) to detect anomalies in videos and combined it with a visualization tool that facilitates comparisons between normal and abnormal sequences in a latent space. Szymanowicz et al. (2022) <cit.> designed an encoder-decoder architecture to detect anomalies, that is based on U-Net <cit.>, thereby generating saliency maps by computing per-pixel differences between actual and predicted frames; based on the per-pixel squared errors in the saliency maps, they introduced an explanation module that can provide spatial location and human-understandable representation of the identified anomalous event. Hinami et al. (2017) <cit.> employed a Fast R-CNN-based model to learn multiple concepts in videos and extract semantic features, and applied a context-sensitive anomaly detector to obtain semantic anomaly scores which can be seen as explanations for anomalies. Singh et al. (2023) <cit.> developed an explainable method for single-scene video anomaly localization, which uses learned representations of the depicted objects and their motions to provide justifications on why a part of the video was classified as normal or anomalous. Working with network architectures that tackle other video analysis and understanding tasks, Papoutsakis et al. 
(2019) <cit.> presented an unsupervised method that evaluates the similarity of two videos based on action graphs representing the detected objects and their behavior, and provides explanations about the outcome of this evaluation. Gkalelis et al. (2022) <cit.> used the weighted in-degrees of graph attention networks' adjacency matrices to provide explanations of video event recognition, in terms of salient objects and frames. Yu et al. (2021) <cit.> built an end-to-end trainable and interpretable framework for video text detection with online tracking that captures spatial and motion information and uses an appearance-geometry descriptor to generate robust representations of text instances. Finally, a few attempts were made towards explaining video summarization. Apostolidis et al. (2022, 2023) <cit.> formulated the task as the production of an explanation mask indicating the parts of the video that influenced the most the estimates of a video summarization network about the frames' importance. Then, they utilized a state-of-the-art network architecture (CA-SUM <cit.>) and two datasets for video summarization (SumMe <cit.> and TVSum <cit.>), and evaluated the performance of various attention-based explanation signals by investigating the network's input-output relationship (according to different input replacement functions), and using a set of tailored evaluation measures. Following a different approach, Huang et al. (2023) <cit.> described a method for explainable video summarization that leverages ideas from Bayesian probability and causation modeling. A form of explanation about the outputs of this method is provided through causality graphs that show relations between input data, output importance scores, summarization criteria (e.g., representativeness, interestingness) and applied perturbations. Differently to most of the above discussed works that deal with the explanation of network architectures trained for various video analysis tasks (e.g., classification, action and activity recognition, anomaly detection), in this work, we focus on networks for video summarization. Contrary to the work of <cit.>, our framework produces visual-based explanations indicating the parts of the video (temporal video fragments and visual objects) that influenced the most the decisions of the summarizer, rather than providing causality graphs that need interpretation by a human expert. Moreover, the performance of our framework is assessed through a set of quantitative and qualitative evaluations. As stated in the introduction, our work builds on our previous efforts for explaining video summarization <cit.> and extends them by: i) examining the use of a model-agnostic approach for producing fragment-level explanations (rather than requiring access to the internal layers and weights of the summarization network), ii) proposing a novel methodology for producing object-level explanations (thus providing more clues about the content of the video that is more important for the summarizer), and iii) combining the different explanation approaches under an integrated framework that offers a multi-granular, and thus more comprehensive explanation for the output of the video summarization process. § PROPOSED APPROACH A high-level overview of the developed framework for multi-granular explainable video summarization is given in Fig. <ref>. 
Given an input video, a summarizer and the produced video summary (formed by the three top-scoring video fragments by the summarizer), our framework produces three different types of explanations: i) a fragment-level explanation that indicates the temporal video fragments that influenced the most the decisions of the summarizer, ii) an object-level explanation that highlights the most influential visual objects within the aforementioned fragments, and iii) another object-level explanation that points out the visual objects within the fragments that have been selected for inclusion in the summary, that influenced the most this selection. In the core of this framework there is an XAI method that is responsible for producing the explanation. More details about the processing steps and the employed XAI method for producing each type of explanation, are provided in the following sections. §.§ Fragment-level Explanation The processing pipeline for producing fragment-level explanations is depicted in Fig. <ref>. As can be seen, the input video needs to be temporally fragmented into consecutive and non-overlapping fragments. To perform this process, we employ a pre-trained model of the TransNetV2 method for shot segmentation from <cit.>. This method relies on a 3D-CNN network architecture with two prediction heads; one predicting the middle frame of a shot transition and another one predicting all transition frames and used during training to improve the network's understanding of what constitutes a shot transition and how long it is. The used model has been trained using synthetically-created data from the TRECVID IACC.3 dataset <cit.> and the ground-truth data of the ClipShots dataset <cit.>. If the number of video fragments is equal to one (thus, the input video is a single-shot user-generated video) or less than ten (thus, the selection of three fragments for building the summary would not lead to a significantly condensed synopsis of the video), we further fragment the input video using the method for sub-shot segmentation from <cit.>. This method segments a video into visually coherent parts that correspond to individual video capturing activities (e.g., camera pan and tilt, change in focal length and camera displacement) by extracting and evaluating the region-level spatio-temporal distribution of the optical flow over sequences of neighbouring video frames. The defined video fragments along with the input video, the summarizer and the produced video summary, are then given as input to the XAI method. This method can be either model-agnostic (i.e., it does not require any knowledge about the summarization model) or model-specific (i.e., it utilizes information from the internal layers of the model). In this work, we considered the LIME explanation method from <cit.> and the best-performing configuration of the attention-based explanation method from <cit.>, respectively. LIME <cit.> is a perturbation-based method that approximates the behavior of a model locally by generating a simpler, interpretable model. This method was designed for producing image-level explanations by masking out regions of the image; thus, we had to adapt it to operate over sequences of frames and produce fragment-level explanations. In particular, instead of masking out regions of a video frame during a perturbation, we mask out entire video fragments by replacing their frames with black frames. 
The perturbed version of the input video is fed to the summarizer, which then produces a new output (i.e., a new sequence of frame-level importance scores). This process is repeated M times and the binary masks of each perturbation (indicating the fragments of the video that were masked out) are fitted to the corresponding importance scores (computed as the average of the sequence of frame-level importance scores) using a linear regressor. Finally, the fragment-level explanation is produced by focusing on the top-3 scoring fragments (indicated by the weights assigned to the indices of the binary masks) by this simpler model. The attention-based method of <cit.> can be applied to network architectures for video summarization that estimate the frames' importance with the help of an attention mechanism, such as the ones from <cit.>. This method uses the computed attention weights in the main diagonal of the attention matrix for a given input video, and forms an explanation signal by averaging them at the fragment level. The values of this explanation signal indicate the influence of the video's fragments on the output of the summarizer, and the fragments related to the top-3 scoring ones are selected to create the fragment-level explanation. §.§ Object-level Explanation The processing pipeline for creating object-level explanations is shown in Fig. <ref>. The selected video fragments for creating such explanations can be either the most influential ones according to the fragment-level explanation, or the top-scoring ones by the summarizer that were selected for inclusion in the video summary. The utilized XAI method in this case is LIME <cit.>, and the goal is to apply perturbations at the visual object level in order to identify the objects within the selected fragments that influence the most the output of the summarizer. Once again, we use an adaptation of the LIME method that takes into account spatial perturbations applied to the visual content of a sequence of video frames (and not to a single frame). To make sure that a perturbation is applied to the same visual object(s) across the frames of a video fragment, we spatially segment these frames using a model of the Video K-Net method for video panoptic segmentation <cit.>, trained on the VIP-Seg dataset <cit.>. This method builds upon the foundation of K-Net <cit.>, which unifies image segmentation through a collection of adaptable kernels. Video K-Net capitalizes on the kernels' ability to encode object appearances and contextual information, combining segmentation and tracking of both semantically meaningful categories and individual instances of countable objects across a sequence of video frames. The top-scoring frame (by the summarizer) within a selected video fragment (by the fragment-level explanation or the summarizer) is picked as the keyframe. Once all the frames of this fragment have been spatially segmented by Video K-Net, the visual objects appearing in the selected keyframe are masked out across the entire video fragment through a series of perturbations that replace the associated pixels of the video frames (specified by the object IDs assigned by the Video K-Net method) with black pixels. The perturbed version of the input video after masking out a visual object in one of the selected video fragments is forwarded to the summarizer, which outputs a new sequence of frame-level importance scores.
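In both the fragment-level and the object-level pipeline, the collected perturbation masks and the corresponding summarizer scores are then used to fit a simple, interpretable surrogate model. The following is a minimal sketch of this step, assuming scikit-learn's LinearRegression and omitting the proximity-based sample weighting of the original LIME formulation; the array and function names are hypothetical.

import numpy as np
from sklearn.linear_model import LinearRegression

def fit_surrogate(binary_masks, scores, k=3):
    # binary_masks: (num_perturbations, num_elements) 0/1 matrix, where an element
    #               is a video fragment or a visual object of the keyframe
    # scores:       (num_perturbations,) average importance score returned by the
    #               summarizer for each perturbed video
    surrogate = LinearRegression().fit(np.asarray(binary_masks), np.asarray(scores))
    weights = surrogate.coef_
    order = np.argsort(weights)
    top_k = order[::-1][:k]     # indices of the most influential elements
    bottom_k = order[:k]        # indices of the least influential elements
    return top_k, bottom_k

The weights of this surrogate are what is referred to below as the scores assigned to the indices of the binary masks.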
This process is repeated N times for a given video fragment and the binary masks of each perturbation are fitted to the corresponding importance scores (computed as the average of the importance scores of the frames within the selected fragment) using a linear regressor. Finally, the object-level explanation is formed by taking the top- and bottom-scoring visual objects (indicated by the assigned weights to the indices of the binary masks) by this simpler model, and highlighting the corresponding visual objects (using green and red coloured overlaying masks, respectively) in the keyframes of the selected video fragments. § EXPERIMENTS In this section we describe the utilized datasets and evaluation protocol for assessing the performance of the produced explanations. Following, we provide some implementation details and report the findings of the conducted quantitative and qualitative evaluations. §.§ Datasets and Evaluation Protocol In our experiments we employ the SumMe <cit.> and TVSum <cit.> datasets, which are the most widely used ones in the literature for video summarization <cit.>. SumMe is composed of 25 videos with diverse video contents (e.g., covering holidays, events and sports), captured from both first-person and third-person view. TVSum contains 50 videos from 10 categories of the TRECVid MED task. For evaluation we utilize the Discoverability+/- and Sanity Violation measures from <cit.>. The Rank Correlation measure was not taken into account, as we are interested in the capacity of explanations to spot the most and least influential fragments, rather than ranking the entire set of video fragments based on their influence to the summarization method. For completeness, in the following we describe each measure and the way it was computed in our evaluations. To measure the influence of a selected video fragment or visual object by an explanation method, we mask it out (using black frames or pixels, respectively) and compute the difference in the summarization model's output, as Δ E(X,X̂^k) = τ(y, y^k). In this formula, X is the set of original frame representations, X̂^k is the set of updated features of the frames belonging to the selected k^th video fragment (after the applied mask out process), y and y^k are the outputs of the summarization model for X and X̂^k, respectively, and τ is the Kendall's τ correlation coefficient <cit.>. Based on Δ E, we assess the performance of each explanation using the following evaluation measures: * Discoverability+ (Disc+) evaluates if the top-3 scoring fragments/objects by an explanation method have a significant influence to the model's output. For a given video, it is calculated by computing Δ E after perturbing (masking out) the top-1, top-2 and top-3 scoring fragments/objects in a one-by-one and sequential (batch) manner. The higher this measure is, the greater the ability of the explanation to spot the video fragments or visual objects with the highest influence to the summarization model. * Discoverability- (Disc-) evaluates if the bottom-3 scoring fragments/objects by an explanation method have small influence to the model's output. For a given video, it is calculated by computing Δ E after perturbing (masking out) the bottom-1, bottom-2 and bottom-3 scoring fragments/objects in a one-by-one and sequential (batch) manner. The lower this measure is, the greater the effectiveness of the explanation to spot the video fragments or visual objects with the lowest influence to the summarization model. 
* Sanity Violation (SV) quantifies the ability of explanations to correctly discriminate the most from the least influential video fragments or visual objects. It is calculated by counting the number of cases where the condition (Disc+ > Disc-) is violated, after perturbing (masking out) parts of the input corresponding to fragments/objects with the three highest and lowest explanation scores in a one-by-one and sequential (batch) manner, and then expressing the computed value as a fraction of the total number of perturbations. This measure ranges in [0, 1]; the closer its value is to zero, the greater the reliability of the explanation signal. §.§ Implementation Details Videos are downsampled to 2 fps and deep feature representations of the frames are obtained by taking the output of the pool5 layer of GoogleNet <cit.>, trained on ImageNet <cit.>. The number of applied perturbations M for producing fragment-level explanations was set equal to 20,000, in order to have robust and reliable results. The number of applied perturbations N for producing object-level explanations was set equal to 2,000, as there were only a few visual objects within the selected keyframes and thus the number of possible perturbations was also small. As stated previously, the number of video fragments for producing explanations (both at the fragment and the object level) was set equal to three. For video summarization, we use pre-trained models of the CA-SUM method <cit.> on the SumMe and TVSum datasets. All experiments were carried out on an NVIDIA RTX 4090 GPU card. The utilized models of CA-SUM and the code for reproducing the reported results will be made publicly available upon acceptance at: <https://github.com/IDT-ITI/XAI-Video-Summaries> §.§ Quantitative Results The results about the performance of the examined fragment-level explanation methods on the videos of the SumMe and TVSum datasets are presented in Tables <ref> and <ref>, respectively. In each case, the top part of the Table shows the computed Disc+/- and SV scores after taking into account videos that have at least one top- and one bottom-scoring fragment by the explanation method, while the bottom part shows the computed scores after taking into account videos that have at least three top- and three bottom-scoring fragments by the explanation method. As stated in Section <ref>, the top-scoring fragments are used for computing Disc+ and Disc+ Seq, the bottom-scoring fragments are employed for computing Disc- and Disc- Seq, while both top- and bottom-scoring fragments are utilized for computing SV and SV Seq. For the sake of space, in Tables <ref>-<ref> we show the top- and bottom-k scoring fragment (with k equal to 1, 2 or 3) in the same cell. The results in Tables <ref> and <ref> show that the attention-based method performs clearly better than LIME in most evaluation settings. The fragment-level explanations produced by this method are more capable of spotting the most influential video fragment, while its performance is comparable with that of the LIME-based explanation at spotting the second and third most influential ones; though the attention-based explanations are better at detecting the fragments with the lowest influence (see columns “Disc+” and “Disc-”). The competitiveness of the attention-based method is more pronounced when more than one video fragment is taken into account, as it performs consistently better than LIME on both datasets (see columns “Disc+ Seq” and “Disc- Seq”).
Finally, the produced fragment-level explanations by the aforementioned method are clearly more effective in discriminating the most from the least influential fragments of the video, as indicated by the significantly lower sanity violation scores in all settings (see columns “SV” and “SV Seq”). To assess the competency of the examined fragment-level explanation methods to correctly rank the most and least influential video fragments on the summarization model's output, in Fig. <ref> and <ref> we illustrate the computed Disc+ and Disc- scores for the videos of the SumMe and TVSum dataset, after masking out the three top- and bottom-scoring fragments, in a one-by-one and sequential manner. The presented scores show, that both methods are able to correctly rank the most influential fragments, as in most cases they lead to Disc+ scores that are gradually increasing when moving from the top-1 to the top-3 scoring fragment (as expected). More specifically, the attention-based method seems to be more appropriate at spotting the fragment with the highest influence to the summarization model (as indicated by the significantly lower Disc+ value for the top-1), while its performance is comparable with the one of LIME when finding the second and third most influential fragment. Moreover, the effectiveness of both methods to rank the most influential fragments is also illustrated by the observed values when masking out these fragments in a sequential manner. The inclusion of additional fragments in the explanation leads to lower Disc+ values (as expected), while the impact of the second and third top-scoring fragments is quantifiable but clearly smaller than the one of the top-1 scoring fragment. Overall, the attention-based explanation method performs better on both datasets, as it leads to significantly lower Disc+ scores compared to LIME (especially on the SumMe dataset). With respect to video fragments that influence the least the output of the summarization model, both methods seem to be less effective at spotting them in the used videos, as the obtained Disc- scores show that, in most cases, the bottom-scoring fragment has a higher impact on the summarization model, compared to the impact of the second and third bottom-scoring fragment (contrary to the expected behavior). Nevertheless, this weakness is less observed for the attention-based method, as the produced explanations lead to similar Disc- scores for the bottom-1, bottom-2 and bottom-3 fragment on both datasets, contrary to the LIME method, which indicates a fragment with clearly higher impact than the other two, as the least influential one (especially on the SumMe dataset). The competitiveness of the attention-based method is also highlighted by the generally higher Disc- scores compared to the ones of the LIME method, after masking out more than one of the least influential video fragments (i.e., sequentially) on the videos of both datasets. “Disc- Seq” scores around 0.9 even after masking out three fragments of the video, point out the competency of this method to spot fragments with minor influence on the output of the summarization model. The performance of the developed method for object-level explanation was initially evaluated using video fragments that were found as the most influential ones by the considered fragment-level explanation methods. The results of our experimental evaluations on the videos of the SumMe and TVSum datasets are presented in Tables <ref> and <ref>, respectively. 
These results show that the object-level explanations for selected video fragments by the two different explanation methods exhibit comparable performance. In general, the LIME-based fragments allow the object-level explanation method to be a bit more effective when spotting the most influential visual objects, while the attention-based fragments lead to better performance when spotting the visual objects with the least influence on the model's output. The comparable capacity of the fragment-level explanation methods is also shown by the mostly similar sanity violation scores. A difference is observed when the applied perturbations affect more than one visual object, where the produced object-level explanations using the attention-based fragments are associated with clearly lower sanity violation scores. Therefore, a choice between the fragment-level explanation methods could be made based on the level of detail in the obtained object-level explanation. If highlighting a single visual object is sufficient, then using the LIME-based fragments could be a good option; however, if the explanation needs to include more visual objects, then the attention-based fragments would be more appropriate for use. In any case, the LIME-based fragment-level explanation method is the only option when there are no details about the video summarization model and thus the explanation of the model's output must be done through a fully model-agnostic processing pipeline. The performance of the developed object-level explanation method on the videos of the SumMe and TVSum datasets when using the three top-scoring fragments by the summarization method is reported in Tables <ref> and <ref>, respectively. As a note, in this case, the Disc+/- evaluation measures are computed by taking into account only the importance scores of the frames within the selected fragments. A pair-wise comparison of the Disc+ and Disc- scores shows that our method distinguishes the most from the least influential object in most cases, a fact that is also documented by the obtained sanity violation scores. Moreover, it is able to spot objects that have indeed a very small impact on the output of the summarization process, as demonstrated by the significantly high Disc- scores. Finally, a cross-dataset comparison shows that our method is more effective on the videos of the TVSum dataset, as it exhibits consistently lower sanity violation scores for both evaluation settings (one-by-one and sequential). §.§ Qualitative Results Our qualitative analysis is based on the produced explanations for two videos of the SumMe and TVSum datasets. The top part of Figs. <ref> and <ref> provides a keyframe-based representation of the visual content of the original and the summarized version of the video, while the bottom part shows the produced explanations by the proposed framework. The green- and red-coloured regions in the frames of the object-level explanations indicate the most and least influential visual objects, respectively. To avoid confusion, these regions are also demarcated in segmentation masks, right below. In the example video of Fig. <ref>, which is titled “Kids playing in leaves”, the generated video summary contains parts of the video showing the kids playing with the leaves near a truck.
The produced fragment-level explanation from the utilized attention-based method shows that the summarization model paid attention to instances of the kids playing with the leaves (second and third fragment), and the part of the scene where the event is mainly taking place (second fragment). The obtained object-level explanation using the selected fragments by the attention-based explanation method demonstrates that the summarizer concentrates on the truck (second fragment) and the kids (third fragment), while it pays less attention to the house (first fragment) and the yard (second and third fragment), thus further explaining why these parts of the video were selected for inclusion in the summary and why other parts of the video (showing the yard right in front of the house, the black car in the parking lot and the lady) were not. Finally, the produced object-level explanation using the selected fragments by the summarizer seems to partially overlap with the aforementioned one, as it shows that the truck and the house were again the most and least important visual objects for the summarizer (second and third fragment); though, it indicates that the summarizer paid attention to the yard where the kids are playing. In the example video of Fig. <ref>, which is titled “Smage Bros. Motorcycle Stunt Show”, the created video summary shows the riders of the motorcycles and one of them being interviewed. The obtained fragment-level explanation from the employed method indicates that the summarizer concentrates on the riders (second and third fragment) and the interview (first fragment). Further insights are given by the object-level explanation of the aforementioned fragments, which demonstrates that the motorcycles (second and third fragment) and the participants in the interview (first fragment) were the most influential visual objects. Similar remarks can be made by observing the produced object-level explanation using the selected fragments from the summarizer (see the first and second fragment). These findings explain why the summarizer selected these parts of the video for inclusion in the summary and why other parts (showing the logo of the TV-show, distant views of the scene and close-ups of the riders) were found to be less appropriate. These examples show that the produced multi-granular explanations by the proposed framework could allow the user to get insights about the focus of the summarization model, and thus assist the explanation of the summarization outcome. § CONCLUSIONS AND FUTURE STEPS In this paper, we presented a framework for explaining video summarization results through visual-based explanations that are associated with different levels of data granularity. In particular, our framework can provide fragment-level explanations that show the temporal fragments of the video that influenced the most the decisions of the summarizer, using either a model-specific (attention-based) or model-agnostic (LIME-based) explanation method. Moreover, it can produce object-level explanations that highlight the visual objects with the highest influence on the summarizer, taking into account the video fragments that were selected either by the fragment-level explanation method or the summarizer. The performance of the produced explanations was evaluated using a state-of-the-art method (CA-SUM) and two datasets (SumMe and TVSum) for video summarization.
The conducted quantitative evaluations showed the effectiveness of our explanation framework to spot the parts of the video (fragments or visual objects) with the highest and lowest influence on the summarizer, while our qualitative analysis demonstrated its capacity to produce a set of multi-granular and informative explanations for the results of the video summarization process. In terms of future steps, we plan to test the performance of our framework using additional state-of-the-art methods for video summarization. Moreover, we aim to leverage advanced vision-language models (e.g., CLIP <cit.> and BLIP-2 <cit.>) and extend our framework to provide a textual description of the produced visual-based explanations, thus making it more user-friendly for media professionals. § ACKNOWLEDGMENTS This work was supported by the EU Horizon 2020 programme under grant agreement H2020-951911 AI4Media. 10 10.1007/978-3-031-53302-0_21 Evlampios Apostolidis, Konstantinos Apostolidis, and Vasileios Mezaris. Facilitating the production of well-tailored video summaries for sharing on social media. In Stevan Rudinac, Alan Hanjalic, Cynthia Liem, Marcel Worring, Bjorn Dor Jonsson, Bei Liu, and Yoko Yamakata, editors, MultiMedia Modeling, pages 271–278, Cham, 2024. Springer Nature Switzerland. apostolidis_chapter Evlampios Apostolidis, Georgios Balaouras, Ioannis Patras, and Vasileios Mezaris. Explainable video summarization for advancing media content production. In D.B.A. Mehdi Khosrow-Pour, editor, Encyclopedia of Information Science and Technology, Sixth Edition., page 1–24. IGI Global, Hershey, PA, 2025. 9594911 Evlampios Apostolidis, Eleni Adamantidou, Alexandros I. Metsai, Vasileios Mezaris, and Ioannis Patras. Video summarization using deep neural networks: A survey. Proceedings of the IEEE, 109(11):1838–1863, 2021. Bargal_2018_CVPR Sarah Adel Bargal, Andrea Zunino, Donghyun Kim, Jianming Zhang, Vittorio Murino, and Stan Sclaroff. Excitation backprop for rnns. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2018. 10.1007/978-3-030-69541-5_25 Joonatan Mänttäri, Sofia Broomé, John Folkesson, and Hedvig Kjellström. Interpreting video features: A comparison of 3d convolutional networks and convolutional lstm networks. In Hiroshi Ishikawa, Cheng-Lin Liu, Tomas Pajdla, and Jianbo Shi, editors, Asian Conference on Computer Vision (ACCV) 2020, pages 411–426, Cham, 2020. Springer International Publishing. Li2021TowardsVE Zhenqiang Li, Weimin Wang, Zuoyue Li, Yifei Huang, and Yoichi Sato. Towards visually explaining video understanding networks with perturbation. 2021 IEEE Winter Conf. on Applications of Computer Vision (WACV), pages 1119–1128, 2021. 8803153 Alexandros Stergiou, Georgios Kapidis, Grigorios Kalliatakis, Christos Chrysoulas, Remco Veltkamp, and Ronald Poppe. Saliency tubes: Visual explanations for spatio-temporal convolutions. In 2019 IEEE Int. Conf. on Image Processing (ICIP), pages 1830–1834, 2019. 10.1145/3343031.3351040 Tao Zhuo, Zhiyong Cheng, Peng Zhang, Yongkang Wong, and Mohan Kankanhalli. Explainable video action reasoning via prior knowledge and state transitions. In Proc. of the 27th ACM Int. Conf. on Multimedia, MM '19, page 521–529, New York, NY, USA, 2019. Association for Computing Machinery. HAN2022212 Yamin Han, Tao Zhuo, Peng Zhang, Wei Huang, Yufei Zha, Yanning Zhang, and Mohan Kankanhalli. One-shot video graph generation for explainable action reasoning. Neurocomputing, 488:212–225, 2022. Aakur2018AnIE Sathyanarayanan N. Aakur, Fillipe D. M. 
de Souza, and Sudeep Sarkar. An inherently explainable model for video activity interpretation. In The Workshops of the 32nd AAAI Conf. on Artificial Intelligence, 2018. Roy2019ExplainableAR Chiradeep Roy, Mahesh Shanbhag, Mahsan Nourani, Tahrima Rahman, Samia Kabir, Vibhav Gogate, Nicholas Ruozzi, and Eric D. Ragan. Explainable activity recognition in videos. In ACM Intelligent User Interfaces (IUI) Workshops, 2019. 278c9656de614a479c93c6dead189ff4 Chongke Wu, Sicong Shao, Pratik Satam, and Salim Hariri. An explainable and efficient deep learning framework for video anomaly detection. Cluster Computing, 25(4):2715–2737, August 2022. 10205367 A. Singh, M. J. Jones, and E. G. Learned-Miller. Eval: Explainable video anomaly localization. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18717–18726, Los Alamitos, CA, USA, jun 2023. IEEE Computer Society. 9468958 Shunan Guo, Zhuochen Jin, Qing Chen, David Gotz, Hongyuan Zha, and Nan Cao. Interpretable anomaly detection in event sequences via sequence matching and visual comparison. IEEE Transactions on Visualization and Computer Graphics, 28(12):4531–4545, 2022. 9706981 S. Szymanowicz, J. Charles, and R. Cipolla. Discrete neural representations for explainable anomaly detection. In 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 1506–1514, Los Alamitos, CA, USA, jan 2022. IEEE Computer Society. 8237653 R. Hinami, T. Mei, and S. Satoh. Joint detection and recounting of abnormal events by learning deep generic knowledge. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 3639–3647, Los Alamitos, CA, USA, oct 2017. IEEE Computer Society. 10019643 Evlampios Apostolidis, Georgios Balaouras, Vasileios Mezaris, and Ioannis Patras. Explaining video summarization based on the focus of attention. In 2022 IEEE Int. Symposium on Multimedia (ISM), pages 146–150, 2022. 10.1145/3607540.3617138 Evlampios Apostolidis, Vasileios Mezaris, and Ioannis Patras. A study on the use of attention for explaining video summarization. In Proc. of the 2nd Workshop on User-Centric Narrative Summarization of Long Videos, NarSUM '23, page 41–49, New York, NY, USA, 2023. Association for Computing Machinery. 10208771 J. Huang, C. Yang, P. Chen, M. Chen, and M. Worring. Causalainer: Causal explainer for automatic video summarization. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 2630–2636, Los Alamitos, CA, USA, jun 2023. IEEE Computer Society. ribeiro2016should Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. " why should i trust you?" explaining the predictions of any classifier. In Proc. of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144, 2016. li2022videoknet Xiangtai Li, Wenwei Zhang, Jiangmiao Pang, Kai Chen, Guangliang Cheng, Yunhai Tong, and Chen Change Loy. Video k-net: A simple, strong, and unified baseline for video segmentation. In CVPR, 2022. 10.1145/3512527.3531404 Evlampios Apostolidis, Georgios Balaouras, Vasileios Mezaris, and Ioannis Patras. Summarizing videos using concentrated attention and considering the uniqueness and diversity of the video frames. In Proc. of the 2022 Int. Conf. on Multimedia Retrieval, ICMR '22, page 407–415, New York, NY, USA, 2022. Association for Computing Machinery. 10.1007/978-3-319-10584-0_33 Michael Gygli, Helmut Grabner, Hayko Riemenschneider, and Luc Van Gool. Creating Summaries from User Videos. 
In David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars, editors, Europ. Conf. on Computer Vision (ECCV) 2014, pages 505–520, Cham, 2014. Springer International Publishing. 7299154 Yale Song, Jordi Vallmitjana, Amanda Stent, and Alejandro Jaimes. TVSum: Summarizing web videos using titles. In 2015 IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), pages 5179–5187, June 2015. Papoutsakis2019UnsupervisedAE Konstantinos E. Papoutsakis and Antonis Argyros. Unsupervised and explainable assessment of video similarity. In British Machine Vision Conference, 2019. 9915576 Nikolaos Gkalelis, Dimitrios Daskalakis, and Vasileios Mezaris. Vigat: Bottom-up event recognition and explanation in video using factorized graph attention network. IEEE Access, 10:108797–108816, 2022. YU2021107791 Hongyuan Yu, Yan Huang, Lihong Pi, Chengquan Zhang, Xuan Li, and Liang Wang. End-to-end video text detection with online tracking. Pattern Recognition, 113:107791, 2021. BAI2021108102 Xiao Bai, Xiang Wang, Xianglong Liu, Qiang Liu, Jingkuan Song, Nicu Sebe, and Been Kim. Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments. Pattern Recognition, 120:108102, 2021. 10.1007/978-3-031-25085-9_23 Ioanna Gkartzonika, Nikolaos Gkalelis, and Vasileios Mezaris. Learning visual explanations for dcnn-based image classifiers using an attention mechanism. In Leonid Karlinsky, Tomer Michaeli, and Ko Nishino, editors, Computer Vision – ECCV 2022 Workshops, pages 396–411, Cham, 2023. Springer Nature Switzerland. ntrougkas2024ttame Mariano V. Ntrougkas, Nikolaos Gkalelis, and Vasileios Mezaris. T-tame: Trainable attention mechanism for explaining convolutional networks and vision transformers. 2024. 10.1145/3529755 Julia El Zini and Mariette Awad. On the explainability of natural language processing deep models. ACM Comput. Surv., 55(5), dec 2022. 10.1007/978-3-319-24574-4_28 Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Nassir Navab, Joachim Hornegger, William M. Wells, and Alejandro F. Frangi, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pages 234–241, Cham, 2015. Springer International Publishing. soucek2020transnetv2 Tomáš Souček and Jakub Lokoč. Transnet v2: An effective deep network architecture for fast shot transition detection. arXiv preprint arXiv:2008.04838, 2020. DBLP:conf/trecvid/AwadBFJDMSGJKQE17 George Awad, Asad A. Butt, Jonathan G. Fiscus, David Joy, Andrew Delgado, Martial Michel, Alan F. Smeaton, Yvette Graham, Gareth J. F. Jones, Wessel Kraaij, Georges Quénot, Maria Eskevich, Roeland Ordelman, and Benoit Huet. TRECVID 2017: Evaluating ad-hoc and instance video search, events detection, video captioning and hyperlinking. In 2017 TREC Video Retrieval Evaluation, TRECVID 2017, Gaithersburg, MD, USA, Nov. 13-15, 2017. National Institute of Standards and Technology (NIST), 2017. 10.1007/978-3-030-20887-5_36 Shitao Tang, Litong Feng, Zhanghui Kuang, Yimin Chen, and Wei Zhang. Fast video shot transition localization with deep structured models. In C. V. Jawahar, Hongdong Li, Greg Mori, and Konrad Schindler, editors, Asian Conf. on Computer Vision (ACCV) 2018, pages 577–592, Cham, 2019. Springer International Publishing. apostolidis2018motion Konstantinos Apostolidis, Evlampios Apostolidis, and Vasileios Mezaris. A motion-driven approach for fine-grained temporal segmentation of user-generated videos. In 24th Int. Conf. 
on MultiMedia Modeling, MMM 2018, Bangkok, Thailand, February 5-7, 2018, Proceedings, Part I 24, pages 29–41. Springer, 2018. 10.1007/978-3-030-21074-8_4 Jiri Fajtl, Hajar Sadeghi Sokeh, Vasileios Argyriou, Dorothy Monekosso, and Paolo Remagnino. Summarizing Videos with Attention. In Gustavo Carneiro and Shaodi You, editors, Asian Conf. on Computer Vision (ACCV) 2018 Workshops, pages 39–54, Cham, 2019. Springer International Publishing. LI2021107677 Ping Li, Qinghao Ye, Luming Zhang, Li Yuan, Xianghua Xu, and Ling Shao. Exploring global diverse attention via pairwise temporal relation for video summarization. Pattern Recognition, 111:107677, 2021. miao2022large Jiaxu Miao, Xiaohan Wang, Yu Wu, Wei Li, Xu Zhang, Yunchao Wei, and Yi Yang. Large-scale video panoptic segmentation in the wild: A benchmark. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2022. zhang2021k Wenwei Zhang, Jiangmiao Pang, Kai Chen, and Chen Change Loy. K-net: Towards unified image segmentation. NeurIPS, 2021. kendall1945treatment Maurice G Kendall. The treatment of ties in ranking problems. Biometrika, 33(3):239–251, 1945. 7298594 Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In 2015 IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), pages 1–9, June 2015. 5206848 Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages 248–255, 2009. Radford2021LearningTV Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 2021. 10.5555/3618408.3619222 Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: bootstrapping language-image pre-training with frozen image encoders and large language models. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org, 2023.
http://arxiv.org/abs/2405.10214v1
20240516160420
A Design Trajectory Map of Human-AI Collaborative Reinforcement Learning Systems: Survey and Taxonomy
[ "Zhaoxing Li" ]
cs.HC
[ "cs.HC" ]
A Design Trajectory Map of Human-AI Collaborative Reinforcement Learning Systems: Survey and Taxonomy Zhaoxing Li ================== Driven by the algorithmic advancements in reinforcement learning and the increasing number of implementations of human-AI collaboration, Collaborative Reinforcement Learning (CRL) has been receiving growing attention. Despite this recent upsurge, this area is still rarely systematically studied. In this paper, we provide an extensive survey, investigating CRL methods based on both interactive reinforcement learning algorithms and human-AI collaborative frameworks that were proposed in the past decade. We elucidate and discuss via synergistic analysis methods both the growth of the field and the state-of-the-art; we conceptualise the existing frameworks from the perspectives of design patterns, collaborative levels, parties and capabilities, and review interactive methods and algorithmic models. Specifically, we create a new Human-AI CRL Design Trajectory Map, as a systematic modelling tool for the selection of existing CRL frameworks, as well as a method of designing new CRL systems, and finally of improving future CRL designs. Furthermore, we elaborate on generic Human-AI CRL challenges, providing the research community with a guide towards novel research directions. The aim of this paper is to empower researchers with a systematic framework for the design of efficient and 'natural' human-AI collaborative methods, making it possible to work towards maximising the realisation of humans' and AI's potentials. § INTRODUCTION With the rapid development of Artificial Intelligence (AI) in recent years, the mainstream media holds two opposing views: AI will 'save the world' <cit.> or 'destroy' it <cit.>. AI is described as the 'saviour', to free humans from labour, while it is also described as the 'devil' who takes away workers’ jobs <cit.>. Regardless of one's point of view, AI is playing an increasingly significant part in the future world. Weak AI, strong AI, and super AI are three stages of AI development, as proposed by John Searle <cit.>. Due to the limitations of current technology, Searle believes that we have been, and will remain, in the 'weak AI' stage for a long time. That is, at this current stage, AI often performs much worse than humans in highly complex decision-making tasks that require considerations of morality and risk, but much better in tasks with well-specified feedback and large scale data. Therefore, the two extreme situations described by the media are still far from the current stage that we've achieved <cit.>. Thus, exploring the way for humans and AI to better cooperate, with the goal of complementing each other's shortcomings, may provide the best way forward for the immediate future. A common classification of Artificial Intelligence algorithms is supervised learning, unsupervised learning, and reinforcement learning <cit.>. Problems involving decision-making generally lie in the field of reinforcement learning <cit.>, and how humans and AI agents can cooperate and complement each other's shortcomings is particularly important. While the interaction between humans and AI agents is an emerging research direction, the research on the interaction between humans and computers has a long history. The community has proposed several patterns of human-computer interaction. For example, in 1983, Hollnagel and Woods proposed the Cognitive Systems Engineering (CSE) model <cit.>. In 1991, Schmidt et al.
created a conceptual paradigm classifying human-computer collaboration into three levels: augmentative, integrative, and debative <cit.>. In 2009, Johnson et al. proposed a co-active design pattern in human-AI joint activity <cit.>. Over the last few years, collaborative or interactive reinforcement learning has become a new field within the machine learning regime. Some excellent recent survey studies on Collaborative Reinforcement Learning (Collaborative RL, or CRL) have emerged, demonstrating this new field's importance. They cover a wide range of issues including CRL in general such as <cit.>, and CRL applied in specific domains, such as safe RL <cit.>, inverse RL <cit.>, and explainable RL <cit.>. Other studies concentrate on specific design methodologies such as user feedback and testbeds of the environment <cit.>. When performing a brief search on Google Scholar with the keywords 'interactive AI', 'collaboration', 'reinforcement learning', and 'HCI' (Human-Computer Interaction) for the period from 2011 to 2022, we found that many surveys or literature reviews have been published. For example, between January 2020 and October 2022 alone, five surveys or literature reviews were published. Najar et al. reviewed reinforcement learning based on human advice <cit.>. The work of Gomez et al. focused on human-centered reinforcement learning <cit.>. Puiutta et al. presented a review of explainable reinforcement learning <cit.>. Arzate Cruz et al. presented a survey on the design principles and open challenges of interactive reinforcement learning <cit.>. Suran et al. concentrated on collective intelligence <cit.>. It can be observed that this research direction is gaining increasing attention from the community. However, surveying collaboration between humans and AI agents is being overlooked, let alone identifying the probable future direction of the growth in this field. To bridge this gap, we thus aim to address the following research question: How may designers approach the construction process of human-AI collaborative reinforcement learning systems in a structured manner? To answer this research question, we summarise existing collaboration approaches and offer our own perspectives and proposals. We look at classic human-machine interaction strategies that have had a significant effect on the evolution of the Human-Computer Interaction (HCI) field. We intend to provide scholars and industry practitioners with a design toolkit that combines archetypes and specific tools in a micro-view <cit.> (see Figure <ref>). Furthermore, our study introduces the Human-AI Collaborative Reinforcement Learning Design Trajectory Map (see Figure <ref>), a new categorisation approach and systematic modelling tool that seeks to suggest research objectives for the next generation of Human-AI Collaborative Design. Similar to how builders require the blueprint design as well as instructions on how to plan different functional parts and choose various types of materials for the house, the Design Trajectory Map provides researchers with a comprehensive review regarding the design patterns for Human-AI Collaborative Reinforcement Learning systems (see Section <ref>) and guidance on how to customise the characteristics of different components to meet their specific requirements (see Sections <ref> and <ref>), as well as how to customise the algorithmic models (see Section <ref>) and the interactive methods (see Section <ref>).
This study builds on our previous work <cit.> published at DIS '21: Designing Interactive Systems Conference 2021. In that work, we proposed a Human-AI Design Model that describes a CRL system from three different perspectives: Human, AI agent, and Design pattern, which is a straightforward and effective method (see Figure <ref>). In subsequent research, we found that in order to build a CRL system from macro to micro, we lacked the considerations of collaborative levels, parties and capabilities, which are crucial for designing the functions and details of each functional party. Therefore, in this work, we add a new Section <ref>. In addition, based on the original study, we improve the depth and the range of topics covered: we create a more comprehensive framework and a more complete taxonomy, covering design patterns through to algorithms, and we extend the review with 55 publications, which account for 41 percent of the surveyed literature. As a result, the primary contributions of this survey are as follows: * First, we summarise the most significant Human-AI Collaborative Design Patterns, which might help academics and practitioners in the HCI field. * Second, we present the Collaborative Reinforcement Learning (CRL) Design Trajectory Map, a novel CRL Classification and Taxonomy as a systematic modelling tool, to assist researchers in selecting and improving new CRL designs. * Third, we take stock and summarise the most recent Collaborative Reinforcement Learning algorithms, analysing the state-of-the-art at the start of this new decade. * Fourth, as a roadmap to good Human-AI Collaboration, we identify several general CRL problems for future study in this field. § BACKGROUND Reinforcement Learning is derived from theories of animal learning and parameter disturbance adaptive control <cit.>. The intuition is: if an agent's actions result in positive rewards (reinforcing signals), this type of behavior will be reinforced, increasing the agent's inclination to repeat the behavior in future acts. The goal of Reinforcement Learning is to train the agent to find the optimal strategy for each discrete state while maximizing the expected discounted rewards <cit.>. Mathematically, a reinforcement learning process can be described as a Markov Decision Process (MDP), defined by the tuple ℳ = ⟨ 𝒮, 𝒜, 𝒯, ℛ, γ⟩, which is a cyclical process, where an agent takes an action from 𝒜 to change its state in 𝒮 and obtain a reward from ℛ through the process of interacting with the environment; γ is the discount factor; 𝒯: S × A ↦ Pr[S] is the transition function; the expected long-term reward follows policy π, represented as the Q-function Q^π(s, a), which is computed as: Q^π(s, a) = 𝔼_π[ ∑_t=0^∞γ^t R_t | s_0 = s, a_0 = a ] Q^*(s, a) = max_π Q^π(s, a) is the optimal value function. Any optimal policy π^* that maximises the expected reward for each state is the solution to the MDP. Reinforcement Learning differs from the other two types of Machine Learning: Supervised Learning and Unsupervised Learning. In Supervised Learning, a model learns the mapping relationship between input X and label y through a set of paired and labelled data, to solve Regression and Classification problems. In Unsupervised Learning, a model learns unlabelled data without any guidance, to solve Association and Clustering problems, for discovering underlying patterns of the data. In Reinforcement Learning, a model learns the mapping relationship between states and actions (non-predefined data), to solve Exploitation or Exploration problems.
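For completeness, we note that the optimal value function defined above satisfies the standard Bellman optimality condition, and that it can be approximated with the classical temporal-difference (Q-learning) update; both are textbook formulations and not contributions of the surveyed works: Q^*(s, a) = 𝔼[ R(s, a) + γ max_a' Q^*(s', a') ], and Q(s, a) ← Q(s, a) + α [ r + γ max_a' Q(s', a') - Q(s, a) ], where α is the learning rate, r is the observed reward, and s' is the successor state.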
The mapping directs the model, or the agent, to make optimal decisions based on these states towards maximising cumulative rewards <cit.>. The learning process emphasises the interactions between the agent and the environment that gives `reward signals' during the agent's continual or exhaustive attempts of all the possible strategies to be adopted in a certain 'state', rather than directing the agent how to create the 'correct' action <cit.>. A 'reward signal' is usually a scalar signal and an assessment of the quality of the generated action by the agent. This way, the agent learns knowledge from the environment through discrete feedback of actions, which it uses to optimise the parameters that might lead to an optimal result. With minimal information given by the external environment, the agent must learn by its own interactions with the environment, frequently from the ground up. If the 'reward signal' r and the 'action' A were known, these corresponding representation-label data might be utilised to train a model using supervised learning, however, it is often impracticable to exhaust all conceivable actions in an environment and create the corresponding 'reward signals'. This is where Reinforcement Learning may help. Reinforcement learning often beats Supervised Learning in scenarios where the discrete action space is small,such as the game of Go or Atari <cit.>. Bellman proposed the mathematical theory of dynamic programming in 1955 <cit.>. The Bellman condition was considered to be the crucial theoretical foundation for reinforcement learning. Then, in 1957, Bellman proposed Markov Decision Processes (MDPs) <cit.>, which are now used in most reinforcement learning algorithms. After the 1960s, the concepts of reinforcement and reinforcement learning gradually appeared in the literature. In 1963, a system called STeLLA was developed, which allows trial and error learning through interactions with the environment <cit.>. Michie proposed an early reinforcement learning system called MENACE. In 1975, Holland proposed an adaptive system based on the selection principle in his book 'Adaptation in Natural and Artificial Systems' <cit.>. This is regarded as one of the most significant events in the evolution of reinforcement learning. The book also included genetic algorithms, which aided in the development of optimization algorithms. Rummery and Niranjan suggested SARSA, or state-action-reward-state-action, in 1994 on the basis of Q-learning. In terms of decision-making, SARSA is similar to Q-learning, however it differs in terms of the updating method. SARSA employs an on-policy method, whereas Q-learning employs an off-policy one. <cit.>. Thrun et al. introduced the Monte Carlo positioning method in 1999, which uses probability to solve the robot positioning problem <cit.>. Compared to traditional grid methods, it is more efficient and saves memory <cit.>. With the advancement of computer power and the advancement of deep learning, numerous approaches combining deep learning with reinforcement learning have lately been presented. In 2013, Mnih et al., from the Deep Mind team, proposed Deep Q-Learning (DQL) <cit.>. This approach employs Q-learning to discover the appropriate control rules after transferring data from high-dimensional sensory input to a convolutional neural network to extract features. This team's AlphaGo defeated the world Go champion with a score of 3:0 in 2017. 
Later in the same year, the more advanced AlphaGo Zero surpassed AlphaGo Master through self-play, without the assistance of human knowledge, in only 21 days, and exceeded all previous versions after 40 days <cit.>. Since then, Reinforcement Learning has made great progress <cit.>. In 2021, Chen et al. proposed a method, called the Decision Transformer, that casts reinforcement learning as a sequence modelling problem <cit.>. Reinforcement learning has gradually been applied in fields such as games <cit.>, robotics <cit.>, computer vision <cit.>, natural language processing (NLP) <cit.>, and recommender systems <cit.>. For example, the OpenAI[OpenAI website: www.openai.com] team created an interactive reinforcement learning method that used human feedback to learn summarisation <cit.>. Another recent project proposed by this team, GPT-3, has also made revolutionary achievements in the field of NLP <cit.>. Due to its strong potential and firm theoretical foundation, Reinforcement Learning has recently been one of the research areas in AI technologies that attract the most attention <cit.>. However, it faces many challenges. Currently, on the one hand, Reinforcement Learning only works well when the environment is definite, i.e., the state of the environment is fully observable. In particular, there are defined rules in games like Go, and the action space is discrete and constrained. In other words, the agent needs a great degree of prior knowledge to understand its state in a complex environment <cit.>. On the other hand, even if the agent has been given well-specified feedback, the inexplicability and incomprehensibility of the agent's opaque decision process still make it difficult for the agent to decide on the precise next action <cit.>. Furthermore, most applications of Reinforcement Learning to date have only been for playing games, such as chess and Atari. § METHODOLOGY AND SCOPE §.§ Literature Collection This review focuses on hitherto under-explored areas of research on collaboration between humans and AI agents. We further refine our data pool, narrowing the selection to the following target areas: HCI, Human-AI Collaboration, Reinforcement Learning, and Explainable AI, considering works published in the recent decade (2011 to 2022) and retrieved from Google Scholar. In total, our search yielded 237 articles using keywords including collaborative reinforcement learning, interactive reinforcement learning, human-computer interaction, and design patterns. They were published in journals and conferences, including top venues such as TOCHI[The website of TOCHI: https://dl.acm.org/journal/tochi], IJHCS[The website of IJHCS: http://dblp.uni-trier.de/db/journals/ijmms/], AAAI[The website of AAAI: www.aaai.org/], CHI[The website of CHI: chi2021.acm.org/], UbiComp[The website of UbiComp: www.ubicomp.org/], UIST[The website of UIST: uist.acm.org/], and IEEE[The website of IEEE: www.ieee.org/]. Following a manual review of the title and abstract of each article, we eliminated 103 articles as irrelevant, leaving 134 articles as the source of this survey. §.§ Human-AI Collaborative Reinforcement Learning Classification We use an inductive method to organise the literature we collected, and propose our new classification method, inspired by traditional human-machine interaction research. Previous work by Najar et al. mainly targeted physical interaction between humans and machines from a human perspective <cit.>. In the early stages of computing and engineering, the concept of AI was not yet involved.
Hence, at that time there was only research on human-machine interactions. Cruz et al. proposed that human-AI interaction is a kind of human-machine interaction in general <cit.>. In this paper, we have collected paradigms of human-machine interaction from the early stage, which could also apply to human-AI interaction. Therefore, in the following section, we will use the concept of human-AI interaction in a unified manner. In the work of Arzate Cruz et al., more attention was paid to the algorithmic model of the AI agent <cit.>. After reviewing the literature of human-machine interaction in traditional engineering, we found that the interaction between humans and AI corresponds to these design patterns. In particular, Schmidt's model <cit.> not only combines the interactive methods and algorithmic models, but also provides different design ideas, according to different human-AI collaborative levels. Based on the common characteristics of the classifications in this literature, we derive the new Human-AI Collaborative Reinforcement Learning (CRL) Classification (see Table <ref>). §.§ Human-AI Collaborative Reinforcement Learning Taxonomy We incorporate past work in a novel way to create a new taxonomy. We draw on Schmidt's Machine Interaction Pattern <cit.>, Dafoe's Collaboration Parties Classification Model <cit.>, and Arzate's Algorithmic Classification Model Collaboration Method <cit.>, to generate a novel taxonomy method from coarse to fine granularity. Based on this approach, and populating it with representative works from the literature for a structured overview (see Table <ref>), we define five axes: Design Patterns, Collaborative Levels and Parties, Collaborative Capabilities, Interactive Methods, and Algorithmic Models. These five axes are then used to create a taxonomy, as shown in Figure <ref>, which might be used as a systematic modelling tool for HCI researchers and practitioners to select and improve their new CRL designs. § HUMAN-AI COLLABORATIVE DESIGN PATTERNS Human-AI collaborative design patterns may be used to provide an efficient and repeatable approach for building human-AI collaborative systems <cit.>. Reliable design patterns might increase these systems' quality, reusability, and maintainability. In this article, we collect the most recognised design patterns in the literature of human-machine collaboration for academics and practitioners to populate the CRL Taxonomy as shown in Figure <ref>. In comparison to human-AI collaboration, human-machine collaboration has long been a source of concern for researchers. In the early stages of the development of human-machine interactions, domain experts believed that the collaboration between humans and machines was a physical, lower-level type of collaboration <cit.>. Specifically, machines were used by humans exclusively through physical contact, in the absence of feedback between machines and humans, which might be viewed as a kind of unidirectional interaction. After decades of development, several recognised human-machine collaboration design patterns have been created, which we summarise below. §.§ CSE Pattern Cognitive Systems Engineering (CSE), coined by Hollnagel and Woods, acts at the level of cognitive functions <cit.>. CSE is the first framework proposed to analyse the human-machine information exchange interaction. CSE is a framework for human-machine collaboration, where machines 'plan and explore' using the knowledge or information provided by humans.
This engineering method suggests that human-machine collaboration occurs at a conscious level of communication. It is a perceptual mode in which the machine is employed as a sensory extension to assist with human activities. At this level, the major challenge is to identify appropriate interactive methods to optimise human information processing. However, CSE has been constrained in that it has only explored basic and low-level communications, leaving out more complicated problems and environments. §.§ Bosch and Bronkhorst's Pattern Bosch and Bronkhorst defined three levels of Human-AI collaboration: 1) unidirectional interaction, in which humans assist machines or machines explain themselves to humans; 2) bi-directional interaction, and 3) collaboration between humans and machines <cit.>. The vast majority of the currently existing methods have only addressed the first level. This framework's contribution is to provide a viewpoint on the directions of collaboration between humans and machines. Furthermore, it constructs the direction based on the roles that humans and machines play in a task, with one being the subject of a task and the other aiding the opposing side. It is believed to help in the development of more efficient communication methods. For example, if it is a human-centred framework, more considerations should be given to how to transform a 'machine language' into interpretable information such that humans can better understand; whereas if it is a machine-centred framework, more considerations should be given to what human knowledge could improve machine efficiency. §.§ Coactive Design Pattern Johnson et al. proposed a Coactive design pattern in human-AI joint activities. They experimented with a collaboration system from the perspective of observability, predictability, and directability <cit.>. Observability concerns the ability of both robots (or AI agents) and humans to observe each other's pertinent aspects of status, as well as the knowledge of the team, tasks, and the environment. Predictability refers to the state that the actions of both robots (or AI agents) and humans can be predicted such that they may rely on each other's actions to perform their own actions. Directability refers to the ability of both robots (or AI agents) and humans to direct each other's behaviour in a complimentary manner. This framework is similar to the Bosch and Bronkhorst's framework in that it considers the direction of interactions and divides it into different levels. However, it is limited due to a lack of robustness and security considerations. §.§ Schmidt's Pattern Schmidt believes that collaboration should be tailored to diverse needs, fulfil different functions, and be carried out in a variety of ways depending on the circumstances. collaboration may be summarised as follows: 1) the augmentative level, in which one role in the partnership (human or AI agent) assists the other in performing tasks; 2) the integrative level, in which both sides of the team share information and assist each other in completing tasks together; and 3) the debative level, in which tasks are completed through debate and negotiation between humans and AI agents, especially when dealing with complex issues <cit.>. This framework considers not only the information exchange direction in the collaboration, but also different levels of collaboration, as well as the robustness, security, and potential ethical considerations. 
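To connect these patterns with the taxonomy axes defined earlier, a designer instantiating the Design Trajectory Map for a concrete system could record the corresponding choices as a simple data structure; the sketch below is purely illustrative, and all field names and example values are hypothetical rather than part of any cited framework.

from dataclasses import dataclass

@dataclass
class CRLDesignChoice:
    # One illustrative point on the Design Trajectory Map
    design_pattern: str            # e.g., "Schmidt's pattern"
    collaborative_level: str       # "augmentative", "integrative", or "debative"
    collaborative_parties: str     # e.g., "human -> AI" or "AI -> human"
    collaborative_capability: str  # e.g., "advice giving"
    interactive_method: str        # e.g., "explicit feedback"
    algorithmic_model: str         # e.g., "reward shaping"

# Hypothetical example: a human assisting an agent through explicit feedback
example = CRLDesignChoice(
    design_pattern="Schmidt's pattern",
    collaborative_level="augmentative",
    collaborative_parties="human -> AI",
    collaborative_capability="advice giving",
    interactive_method="explicit feedback",
    algorithmic_model="reward shaping",
)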
§ COLLABORATIVE LEVELS AND PARTIES The patterns described above frame the modes of human-AI collaboration from different perspectives. They are also broadly comparable in how they compartmentalise collaboration modes or methods, i.e., into single-direction assistance, bi-directional collaboration, and higher-stage fused collaboration. In this survey, we adopt a fused viewpoint that combines interactive methods and design patterns, based on Schmidt's collaboration pattern, to classify current collaborative reinforcement learning techniques. Schmidt's model is divided into three levels: augmentative, integrative, and debative. We drew a pyramid model based on Schmidt's model (see Figure <ref>). In the first three sub-sections that follow, we highlight significant research that has emerged at each level and the issues that should be examined. We also discuss the characteristics, advantages, and disadvantages of these different methods, as well as how to develop new methods in the future. Apart from the classification of collaborative levels, where humans and AI agents are each viewed as a whole, we also discuss collaboration from a micro perspective based on the framework proposed by Dafoe et al., where diverse constellations of humans and AI agents are discussed, which we refer to as “collaborative parties” in the final sub-section. §.§ Augmentative Level collaboration Collaboration at the Augmentative Level entails one partner compensating for the shortcomings of the other <cit.>. AI has shown considerable promise in large-scale data processing with well-defined rules, as well as in natural-perception fields such as digital image recognition <cit.> and natural language processing <cit.>, among others. Nevertheless, in complex, ambiguous environments, AI performance lags considerably behind that of humans. The Augmentative Level approaches offered by the community are mostly divided into two types. First, AI takes the lead in decision-making, while humans assist the AI in enhancing processing efficiency. In this case, humans use prior knowledge to help the agents specify the state space and efficiently obtain rewards from the complex environment. Second, humans play the primary role in decision-making, with AI assisting in the process. In this case, the AI agents explain the tactics used, to help humans make faster decisions in a simple environment. At the sub-level of humans helping AI agents improve efficiency, we categorise how they communicate based on the parts of the algorithm into which human help can be injected. At the sub-level of AI agents helping humans, we mainly focus on how AI agents may inform humans about why they make particular decisions. §.§.§ Human->AI The most essential aspect of the role of humans supporting AI agents in decision-making is how to efficiently deliver information to AI agents while keeping human fatigue to a minimum. Up to this point, many human-AI collaborative reinforcement learning algorithms have been proposed, which may be categorised into explicit, implicit, and multi-modal interaction methods based on the different forms of interaction (detailed in Section <ref>). Finding a better way for humans to directly interact with AI agents is still an essential research priority. §.§.§ AI->Human In the task of AI agents assisting humans in decision-making, the most challenging problem lies in interpretability.
Interpretability (or explainability) refers to the degree to which humans can understand the rationale underpinning machines' decision-making <cit.>. The interpretability of AI models refers to the clarification of the internal mechanism and the understanding of the results. The more interpretable the model, the easier it is for people to trust it <cit.>. Its significance may be seen in the following aspects: in the modelling phase, interpretability may assist developers in understanding the learning process, comparing alternative algorithms, optimising the procedure, and fine-tuning the models; in the operation phase, AI agents can explain the internal mechanism and interpret the model outcomes to the decision-maker (i.e., humans). Consider a decision-making recommendation model, before the model runs, multiple interpretable algorithms with their respective advantages can be provided to humans to choose from; and after the model is trained, the model must explain to humans why it recommended a specific solution given a specific context. Patterns underlying the above problems lie under the umbrella of eXplainable AI (XAI), which is commonly regarded as critical for the practical deployment of AI models. DARPA launched the XAI in 2016 <cit.>. The basic objective of XAI is to create machine learning models that, when combined with proper explanation techniques, will allow humans to better comprehend and eventually accept and trust the model's predictions. The literature generally proposes two types of explainability: 1) transparent models, which are embedded inside the operation of the AI algorithms, leading to explainability by design, applied to simpler AI algorithms with less accurate results; and 2) post-hoc models, which are performed after initial models have been trained. This type of methods is usually more efficient, but it is less reliable than transparent models <cit.>. At present, there are a few intrinsic interpretability Reinforcement Learning methods. Verma et al. introduced a Programmatically Interpretable Reinforcement Learning method (PIRL) <cit.>. This method is an upgrade of traditional Deep Reinforcement Learning (DRL). In DRL, due to the 'black-box' nature of neural networks, it is difficult to represent policies. To tackle the 'black-box' challenge, PIRL introduces an advanced human-readable programming language to define neural network policies. Shu et al. introduced a hierarchical and interpretable multi-task reinforcement learning framework, where a complex task is broken into several sub-tasks and then a hierarchical strategy is used to complete the learning with 'weak supervision' from humans. By breaking a task into sub-tasks and thus making a learned strategy traceable to them, and by explaining the relationship between different hierarchies of the sub-tasks, this method builds intrinsic interpretability. Compared with intrinsically intepretable Reinforcement Learning methods, post-hoc methods are simpler in algorithm structure and more efficient in the computing process. At present, many post-hoc methods have been proposed. For example, Liu et al. proposed an explainable DRL method based on linear model U-trees <cit.>. This is a stochastic gradient descent framework for explaining complex models by using linear model U-trees to fit Q-functions. There is also a Soft Decision Tree (SDT) method, which provides post-hoc explanations by extracting policies. Madumal et al. introduced an explainable method through a causal lens. 
In this framework, an AI agent learns to play StarCraft II, a large dynamic space strategy game <cit.>. To generate an explanation, they simplify the entire game states to four basic actions and nine basic states, and then use these basic causal factors to construct an explanation for why the AI agent chooses action A over action B. §.§ Integrative Level collaboration Integrative collaboration entails using the various advantages of both parties to complete a task. At this level, humans and AI agents are regarded as being interdependent. The main task is broken into several sub-tasks, and humans and AI agents can perform just those that they are skilled at <cit.>. At the integrative level, humans and AI agents play equal roles in the system. Information exchange at this level is generally referred to as 'communication' in the literature <cit.>. In the following sub-sections, first, we summarise the communication methods in this cooperative pattern. Then, we discuss how to make the communicating parties trust each other. On this basis, the system needs resilience to enhance its robustness in order to better deal with the complex conditions in the real world. §.§.§ Communication A grand challenge of collaborative reinforcement learning is how humans and AI agents communicate with each other. Only when communication is seamless can they make decisions on the next actions following each other’s feedback. Liang et al. proposed an implicit human-AI collaboration framework based on Gricean conversational theory <cit.> to play the game Hanabi. The AI agent must cooperate with the human to win the game. In this framework, the AI agent tries to understand the implied meaning of human's natural language suggestions in a dialogue box <cit.>. Cordona-Rivera and Young proposed an AI Planning-based Gameplay Discourse Generation framework to achieve communication between human players and the game <cit.>. Pablo and Markus proposed an approach of Human-AI collaboration by planning and recognition of the plan <cit.>. Johnson et al. proposed a testbed for joint activities. The unique feature of this testbed is that it can be applied not only in interactive experiments for multiple agents but also in interactive experiments between humans and agents <cit.>. A series of works were carried out on this testbed to study the collaboration of humans and agents in a team. For example, Matthew et al. introduced the relationship between the interdependence and autonomy in a human-AI collaboration system <cit.>. §.§.§ Trust Based on the established communications, how to make the partners trust each other to complete the task is also crucial. Although the community has not yet proposed a clear definition of trust between humans and AI agents, it is generally regarded as a psychological state <cit.>. Johnson et al. proposed a Coactive Design framework for human-AI joint activities. In their framework, the authors proposed a collaboration system following the perspective of observability, predictability, and directability <cit.>. These components are critical for humans and AI agents to collaborate in a trustworthy manner. §.§.§ Resilience Resilience is another essential feature in human-AI collaboration. On the premise of communication and mutual trust, in complex problems, with possible delays and information noise, how to establish a resilient mechanism to make the system more robust is crucial. 
An effective human-machine collaboration mechanism should be able to diagnose a problem quickly and provide remedial explanations after the problem occurs so that the system can get back on course <cit.>. Zieba et al. proposed a mechanism to measure the resilience of human-machine systems, that is, the ability to anticipate, avoid, and recover from accidents to a normal state <cit.>. This is instructive to design a cooperative system, as it is necessary to consider how the system responds to emergencies and thus recovers quickly. §.§ Debative Level collaboration Debative models come into play when humans and AI agents hold different opinions on decision-making in a task, and they debate to find the optimum solution based on their differing knowledge and understandings. Models are often required to meet the following requirements. First, humans and AI agents share a unified goal, and achieving that goal is the primary task. For both parties, a debate without a unified goal is meaningless. Second, both parties have strong justifications for their decisions and have insights into a problem based on their respective cognitive models. Third, both parties can effectively communicate and explain their decisions to each other. Communication and interpretability are the premises of the debate. Fourth, there are clear evaluation criteria to measure the outcome from a debate to ensure an optimal result. Fifth, both parties can learn and adjust their own knowledge after a debate to achieve better results in the future <cit.>. As knowledge-based decisions are fragile and controversial, it is necessary to debate the results <cit.>. In a complex and uncertain environment, a full debate will better demonstrate the advantages and disadvantages of different decisions. collaboration at the level of debate requires that both humans and AI agents have sufficiently high professionalism in a specific complex domain. Reinforcement learning algorithms based on this level are scarcely studied in the literature, but we expect that as the field progresses, this form of collaboration will attract more attention. Geoffrey et al. introduced a framework that enables two agents to debate with each other, with a human judge deciding who to trust in the end <cit.>. Although it has not yet been applied to the debate between humans and agents, this framework meets the requirements outlined above. In their experiment, the two agents attempted to persuade human judges to believe their judgments on the MNIST data <cit.>. First, the goal of the two agents were unified. Second, the two agents have different judgments based on their own algorithmic perceptions. Third, both agents are able to generate simple explanations to persuade human judges. Fourth, human judges have intuitive knowledge to make accurate judgments. This experiment is enlightening for future research, especially in human-agent and multi-agent debate collaboration. §.§ Collaborative Parties The different levels of human-AI collaboration take a macro view of human and AI, looking at them both as a whole. The collaboration of humans with AI, on the other hand, can be split down into different combinations of parties from a micro perspective or in considerations of practical scenarios. For example, future scenarios could include interactions between a human and several AI-agents, interactions between human groups and AI-agent groups, or more diversified fusion of the two. 
Therefore, in this section, we will discuss the types of interactions between humans and AI agents from the micro perspective (AI agent-AI agent, human-AI agent, human-human, and more complex constellations). Dafoe et al. categorise cooperative roles into three categories: AI agents, humans, and organisations, and distinguish six collaborative types based on the number and kinds of roles involved in the collaboration <cit.> (see Figure <ref>): * Human-Human collaboration: the classic human-to-human collaborative model; * Cooperative Tools: the AI agent is used to enhance collaboration, such as language translation; * Alignment and Safety: the AI agent acts as an assistant to help humans solve problems, such as the relationship between vehicles and humans in autonomous driving; * Human-AI-Human-AI collaboration: with the development of 5G and AI technology, large-scale human groups and AI groups may cooperate in the foreseeable future, as in level 5 autonomous driving <cit.>; * The Planner Perspective: this approach strengthens the collaboration and infrastructure of the entire society from the planner perspective of social construction, rather than the collaboration between an individual and a single AI, e.g., social media and network communications; * Organisations and Society: collaboration could have a more complicated structure, with multiple types of hierarchical collaboration or a complex internal structure. § COLLABORATIVE CAPABILITIES In the previous section, we discussed the different levels of human-AI collaboration from a macro perspective, where both the human and the AI agent are viewed as integral parts. However, in real scenarios, or from a micro perspective, there may be interactions between a human and several AI agents, or interactions between human groups and AI groups. Therefore, in this section, we will discuss the types of human-AI interactions from the micro perspective (i.e., AI agent-AI agent, human-AI agent, human-human, and more complex constellations), as well as the kinds of collaborative capabilities agents require for group interactions. Dafoe et al. divide the collaborative capabilities of agents into four types: Understanding, Communication, Commitments, and Institutions <cit.>. §.§ Understanding In human-AI collaboration, an AI agent's ability to understand the environment and predict the consequences of its actions is crucial for reaching mutually beneficial results. In game theory, there are many discussions about how important it is to understand multi-role collaboration. For example, in a Nash equilibrium, each strategy is required to be the best response given full understanding of the other players' strategies <cit.>. Moreover, under the constraint of partial information, Bayesian Nash Equilibrium and Perfect Bayesian Equilibrium provide a solution for how multi-role collaboration should enable participants to better understand each other's strategies <cit.>. In collaborative reinforcement learning, the most important type of understanding is learning the preferences of the other parties, namely their values, goals, and reward functions. Humans, in turn, need to understand how to better provide feedback or rewards to the AI agent in order to help it converge faster and function more efficiently. Some AI researchers have attempted to learn about the AI agent directly from its behaviour. For example, Albrecht's study <cit.> summarised how humans observe the AI agent's behaviour in order to understand the agent. Inverse Reinforcement Learning,
or IRL, is a type of research in which AI agents are oblivious or indifferent to humans <cit.>. This type of method requires humans to inject their prior knowledge <cit.> or to control the AI agent for a few first steps <cit.>. Besides, in a more complex decision-making environment, humans may consciously or unconsciously hide their opinions or ideas, causing significant challenges for AI agents. There have been some studies on the application of human recursive mind-reading methods in negotiations to overcome this challenge <cit.>, and some studies have applied this method to the game Hanabi to improve the collaboration between humans and AI agents <cit.>. §.§ Communication Understanding and collaboration can be difficult to achieve without effective communication. AI agents may often get a better understanding of others' behaviour, intentions, and preferences by communicating directly with them rather than just observing and interacting with them on a regular basis. Finding Pareto-optimal equilibria may be made easier as a result of such information exchange <cit.>. Common Ground A common ground is necessary for collaboration. The message sender and receiver should use the same communication protocol so that each may understand the meaning of the other's message. Many studies have been conducted on machine-to-machine communication problems, usually referred to as emergent communication <cit.>. However, there are few studies on how to establish common ground and effective communication between humans and AI agents. The establishment of a common ground is arguably the most difficult challenge <cit.>. Bandwidth and Latency The bandwidth of communication refers to the volume of data that may be transferred in a given duration of time <cit.>. Latency refers to the time it takes for a message to be transmitted and received <cit.>. How to enhance bandwidth and minimise latency in human-AI collaboration has long been studied, and some promising techniques have been proposed, including brain-computer interface technologies, which are designed to connect the human brain directly to hardware in order to maximise bandwidth with the shortest possible latency <cit.>. §.§ Commitments The aforementioned capabilities of Understanding and Communication strive to overcome the difficulties in collaboration caused by inaccurate or inadequate information. Collaboration, even with abundant information, may still fail. Social scientists have identified "commitment issues", or the inability to make credible threats or promises, as a primary cause of collaboration failure. Prominent research even claims that the problem of commitment is the most significant impediment to rational AI agent collaboration <cit.>. A substantial volume of research has explored the commitment issues that affect collaboration <cit.>. Many different studies have tried to build commitments between humans and AI. Some studies have attempted to develop a commitment contract method between humans and AI agents based on semantics <cit.>. §.§ Institutions Obtaining the requisite understanding, communication, and commitment for collaboration often necessitates the use of a social framework. In economics and politics, this social framework is typically referred to abstractly as institutions <cit.>. Decentralised Institutions In decentralised institutions, there is no single centre; individuals are connected to one another, and the structure is continuously built up through their mutual interactions.
Additionally, many methods for constructing multi-agent systems have been proposed to aid communication, planning, and decision-making in such systems <cit.>. Centralised Institutions Centralised institutions involve a centralised authority that can define the rules and constrain the other participants <cit.>. The multi-agent systems research community attempts to build collaborative mechanisms amongst agents using approaches based on centralised institutions <cit.>. Several studies investigated the use of centralised multi-agent systems in automatic auction systems <cit.>. Centralised multi-agent path-finding techniques could be utilised for obstacle detection in autonomous vehicles in the future <cit.>. § INTERACTIVE METHODS Traditional reinforcement learning methods require excessive training time in complex environments, and their applications are often confined to scenarios with clear rules. An effective way to mitigate these limitations is to use the different strengths of humans and AI to complement one another's inadequacies. This approach is known as Collaborative Reinforcement Learning (CRL). CRL employs human-in-the-loop training to improve the performance of algorithms or to help humans improve decision-making efficiency <cit.>. Recent CRL research has focused on developing AI that can communicate with humans in a more natural way <cit.>. There are two types of interactive methods: explicit and implicit. In an explicit method, humans explicitly provide the AI agent with clear numerical feedback. This is preferable for AI agents since it allows them to process the feedback more easily, but it is likely to cause human fatigue due to the ambiguity of numerical representations, resulting in inefficiency during long-term training. In an implicit method, humans give feedback to AI agents through natural interactions such as posture and gaze, as opposed to explicit methods, which provide clear numerical feedback. This places more demands on the AI agent, but it may improve the fatigue resistance of human trainers, allowing for long-term and stable collaboration <cit.>. Based on these unsolved problems, in this section, we present human-AI interactions from the perspective of interactive methods. §.§ Explicit Interactive Methods Currently, most AI agents learn from human feedback via explicit interactive methods. Humans deliver clear alphanumerical feedback directly to the AI agent via a keyboard, slider bar, or mouse <cit.>. For example, Thomaz and Breazeal proposed a method of sending feedback to the AI agent by using the mouse to click on a sliding bar <cit.>. Knox and Stone proposed the TAMER framework, which allows an AI agent to learn from the MDP and from human advice, by having a human trainer click the mouse to indicate the desired actions <cit.>. These methods are more efficient than traditional reinforcement learning, and can achieve specific goals in complex environments with the assistance of humans. However, the reaction time of human trainers may cause delayed feedback, leaving the AI agent unsure of which actions the human feedback was aimed at, especially for AI agents with frequent actions. A common solution is to introduce a delay model that spreads the feedback over past time steps. For example, Warnell et al. proposed a method to estimate the delay distributions of the human trainers to improve algorithm efficiency <cit.>. Knox and Stone provided another way of handling the delay: using a probability density function over the elapsed time to assign credit to recent actions <cit.>. A minimal sketch of this kind of delay-weighted credit assignment is given below.
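To make this delay-weighted credit assignment concrete, the following is a minimal, illustrative sketch rather than the implementation of any cited framework: a scalar human reward that arrives after some reaction-time delay is distributed over recently executed state-action pairs according to an assumed delay density. The uniform delay window, its bounds, and the buffer size are our own assumptions for illustration.

```python
import time
from collections import deque

class DelayedFeedbackBuffer:
    """Distribute delayed human feedback over recent (state, action) pairs.

    Assumes the human's reaction delay lies in [min_delay, max_delay] seconds
    and is uniformly distributed there; a Gamma or empirically estimated
    density could be substituted without changing the structure.
    """

    def __init__(self, min_delay=0.2, max_delay=2.0, horizon=200):
        self.min_delay, self.max_delay = min_delay, max_delay
        self.steps = deque(maxlen=horizon)  # entries: (timestamp, state, action)

    def record_step(self, state, action):
        self.steps.append((time.time(), state, action))

    def credit(self, feedback_value, feedback_time):
        """Return a list of (state, action, weighted_feedback) tuples."""
        credited = []
        for t, s, a in self.steps:
            delay = feedback_time - t
            if self.min_delay <= delay <= self.max_delay:
                # constant weight from the assumed uniform delay density
                w = 1.0 / (self.max_delay - self.min_delay)
                credited.append((s, a, w * feedback_value))
        return credited

# usage: call record_step at every agent step; when the trainer presses a key,
# pass the feedback value and its timestamp to credit() and use the weighted
# values to update whatever reward model or Q-function the learner maintains.
```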
Moreover, these methods may be unfavourable to non-professional human trainers, who need to spend a significant amount of time learning the user interface and the meaning of feedback represented by each operation. Simultaneously, this kind of interactions can easily make human trainers impatient. Human trainers can also provide explicit feedback to AI agents using hardware delivery methods <cit.>, where feedback is be generally converted into a numeric value directly via the hardware devices, such as keyboards. However, a more user-friendlier method is for the AI agent to learn implicit feedback from natural interactions with trainers. §.§ Implicit Interactive Methods Aside from receiving feedback directly from human trainers via explicit interactive methods, the AI agents can also learn via implicit interactive methods. Implicit interaction methods reduce the learning cost of human trainers, as they can directly participate in training the AI agents without specific learning. At the same time, a more natural way of interaction may reduce the fatigue of human trainers. Many implicit interactive methods have lately been proposed. For example, feedback can be based on natural language, facial expressions, emotions, gestures, and actions, as well as the incorporation of multiple natural interactive methods. In an ideal scenario, humans could train the AI agent in the same way that they interact with humans in the real world. Below, we summarise some of the most prominent implicit interactive methods. Gestural Feedback. Gestures are sometimes considered to be a form of unconscious human communication. It is also considered to be an effective way to complement other communication forms, and it is even more useful than other communication methods for users who are speech- or hearing-impaired. For example, Voyles and Khosla proposed a framework which can train robots by imitating human gesture <cit.>. Moon et al. introduced a method of using gestures to command the AI agent to learn to control a wheelchair <cit.>. These methods are very friendly to human trainers and do not require any particular training on their part. Facial Feedback. Li et al. trained a mapping model to map implicit emotions to various types of explicit feedback data. Facial expressions were marked with different types of feedback in advance, such as 1 for “happy” and 0 or -1 for “sadness” <cit.>. Based on this work, Gadanho introduced a facial feedback reinforcement learning method based on an emotion recognition system. The system can learn to decide when to change or reinforce its behaviour with Q-learning by identifying human emotions <cit.>. Arakawa et al. introduced the DQN-TAMER model, where an AI agent may obtain facial expressions via a camera, and then use the facial expression data to map different emotions as implicit rewards to improve learning efficiency <cit.>. Veeriah et al. proposed a method where the agent may analyse human facial features from camera images to gain additional rewards. As a result, the AI agent can quickly adapt to the user's facial changes in order to complete the task <cit.>. One of the limitations of this kind of methods is that human emotions cannot be identified merely based on facial expressions, and there may be a delay in converting machine recognition expressions into feedback. Natural Language Feedback. 
When compared to facial expression and gesture tracking feedback methods, natural language feedback makes it easier to convert the token vector of the sentence into quantitative feedback. Natural language feedback can be transformed and applied to several aspects of reinforcement learning, such as rewards, values, and policies. Goyal et al. introduced the LEARN (LanguagE-Action Reward Network) method, which is a reward shaping method <cit.>. In the state-action space of the task, if most of the reward signals are 0s, we call it the sparsity of rewards. Sparse rewards may cause the algorithm to converge slowly. AI agents need to interact with the environment several times and learn from a large number of samples to reach an optimal solution. One solution to this problem is to provide the AI agent with a bonus reward in addition to the reward function whenever the AI agent takes a right step toward the goal. This process is called reward shaping. Maclin and Shavlik proposed RATLE (Reinforcement and Advice, Consulting Learning Environment) <cit.>, where the AI agent can translate human natural language suggestions into feedback for the Q-value function to accelerate the learning process. Kuhlmann proposed a method based on transforming natural language suggestions into an algorithm-understandable formal language to optimise the learning policy <cit.>. In addition to the methods described above for transforming into different parts of the algorithm, natural language can also be used to directly guide the AI agent's learning policy. For example, Williams et al. proposed an object-oriented Markov Decision Process (MDP) framework which can map the natural language to rewards feedback <cit.>. §.§ Multi-modal Feedback The research above is focused on a single input interaction method. Multi-modal interactions, on the other hand, are more prevalent and efficient in day-to-day human-human interactions. Multi-mode communication has the following benefits. First, when a single-mode piece of information is disrupted by noise or occlusion, other modes can be used as information supplements. Second, when multi-modal interaction is available, it has the potential to improve the robustness and reliability of communication. Quek et al. introduced a framework for analysing language's mutual support and accompanying gestures <cit.>. Cruz et al. proposed a dynamic multi-modal audiovisual interaction framework that would allow humans to provide feedback using their voices and gestures <cit.>. Griffith et al. <cit.> introduced a multi-modal interaction method based on hand gestures and speech recognition system, which was restricted to operating geometric objects on maps. Weber et al. <cit.> developed a dynamic audiovisual integration method that allows humans to input information via natural language and gestures. In the above experiments, multi-mode interactions generally outperformed single-mode interactions. Most of the current multi-mode interactions are merely a combination of two modes, such as any two of voice, gesture, sound, and vision. One of the problems of the above multi-modal methods is their inability to combine various forms of human feedback. The ability of humans to directly interact with AI agents using multiple methods at the same time remains unexplored. In the future, these multi-mode interactive methods can be combined in more forms to develop effective human-AI collaboration for a wider range of scenarios. 
Some studies take into account the effect of human fatigue, caused by increasing training time, on the quantity and quality of feedback. As training duration grows, human trainers become exhausted, reducing the amount of feedback while simultaneously lowering its quality <cit.>. Methods that use gamification to keep human trainers engaged have been proposed; such methods have been found to decrease weariness and effectively improve human trainers' efficiency <cit.>. § ALGORITHMIC MODELS In the previous section, we analysed how humans provide feedback to AI agents. In this section, we categorise algorithmic models based on how agents receive and process human feedback. §.§ Reward-based Methods Reward-based methods accelerate the learning process by adjusting the reward that the AI agent receives from the environment. Concretely, after the AI agent receives feedback from the environment, humans can scale the rewards up or down based on their knowledge, potentially accelerating the learning process <cit.>. Computationally, the reward from the human, H(s, a, s'), is added to the environmental reward R(s, a, s') to obtain the new reward R̅(s, a, s'): R̅(s, a, s') = R(s, a, s') + H(s, a, s'). Thomaz and Breazeal proposed a method for non-expert human trainers to influence the AI agent's next action by providing a positive or a negative numerical reward. If the agent received negative feedback, it would attempt to reverse the previous action in order to get a higher score <cit.>. Knox and Stone first introduced the TAMER algorithm, which uses human evaluative feedback as input to guide the AI to perform better <cit.>. Based on the TAMER method, Arakawa et al. introduced a framework named DQN-TAMER, which combines deep Q-learning with TAMER and shapes rewards using both the environment and the human's binary numerical feedback <cit.>. Additionally, Arakawa et al. investigated a reward-shaping method based on facial expressions, applied in a maze-like game environment <cit.>. The human trainers' facial expressions provide feedback to the AI agents; the major shortcoming is that the recognition of human facial expressions is imprecise and intermittent. Rosenfeld et al. developed a heuristic function method, where the AI agent receives feedback generated from hand-engineered data provided by the human trainer <cit.>. The experiment's findings <cit.> indicate that heuristic functions may be a natural way for AI agents to learn from human trainers. The primary disadvantage of this approach is that it requires human trainers with extensive professional backgrounds and programming skills, making it unfriendly to non-professional users. Reward-based methods can efficiently expedite the learning process in an environment with sparse rewards, but there are certain drawbacks, listed below. The first problem is "credit assignment", which is especially problematic in a rapidly changing environment where humans may be too slow to provide timely feedback. Therefore, a remaining limitation of the method is how to map human rewards to the corresponding actions. The second problem is "reward hacking", where the AI agent may achieve the greatest rewards in ways that humans would not expect <cit.>.
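As an illustration of the additive shaping R̅ = R + H described above, the following is a minimal sketch, under our own simplifying assumptions, of a tabular Q-learning loop in which a human-supplied shaping term is added to the environmental reward. It is not the implementation of any cited framework; the environment interface (reset, actions, step) and the human_feedback callback are assumed placeholders.

```python
import random
from collections import defaultdict

def shaped_q_learning(env, human_feedback, episodes=100,
                      alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning with additive human reward shaping:
    R_bar(s, a, s') = R(s, a, s') + H(s, a, s')."""
    Q = defaultdict(float)  # keyed by (state, action)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            actions = env.actions(s)
            if random.random() < epsilon:
                a = random.choice(actions)          # explore
            else:
                a = max(actions, key=lambda x: Q[(s, x)])  # exploit
            s_next, r_env, done = env.step(a)
            # human_feedback returns a scalar H(s, a, s'); 0 if no feedback given
            r_bar = r_env + human_feedback(s, a, s_next)
            best_next = 0.0 if done else max(Q[(s_next, x)] for x in env.actions(s_next))
            Q[(s, a)] += alpha * (r_bar + gamma * best_next - Q[(s, a)])
            s = s_next
    return Q
```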
§.§ Inverse Reward Design Methods The agent is constantly attempting to optimise the human-designed reward function. When designing the AI agent, human developers typically set the reward function based on the experimental environment, but the deployed AI agent often encounters new environments. Using the original reward function designed by humans in a new environment may lead to poor convergence. Mindermann et al. presented an inverse reward design method in response to this issue <cit.>. To recover the true objective, this method starts from the designed reward function and the training MDP. This allows agents to adapt effectively to the new environment, mitigating the issue of reward hacking. More specifically, this method takes the designed reward function, the test environment model, and the MDP in the new environment as input. A Bayesian inference step then maps the proxy rewards to the true rewards. The experiment in <cit.> demonstrates that the inverse reward design method can successfully boost the AI agent's learning efficiency. §.§ Policy-based Methods Policy-based methods modify the learning policy of the AI agent to encourage its actions to fit what the human trainers expect <cit.>. Human trainers may be aware of a number of potentially optimal actions a in a given state s. The consistency of the human feedback with the optimal policy can be denoted as C, where 0 < C < 1, and the difference between the numbers of positive and negative human feedback signals for a state-action pair can be expressed as Δ_s,a. The probability that an action a is optimal, given the accumulated policy feedback in state s, can then be expressed as Pr_c(a) = C^Δ_s,a / (C^Δ_s,a + (1-C)^Δ_s,a). At present, the approach of using human critiques of state-action pairs as input to shape the agent's policy is widely accepted. Griffith et al. proposed an optimal policy method based on human feedback, a Bayesian method that takes as input critiques for each state and action pair <cit.>. The experiments in <cit.> suggest that this policy-based method outperforms reward-based methods. Krening and Feigh conducted an experiment comparing two different policy-based methods in terms of user experience <cit.>. The first is the critique feedback method proposed by Griffith et al. <cit.>, and the second is their Newtonian action advice method <cit.>. They found that action advice yields a better user experience and reduces the required training time. MacGlashan et al. proposed a convergent actor-critic method, COACH (COnvergent Actor-Critic by Humans). This framework allows non-experts to use binary numerical feedback to shape policies through corrective suggestions <cit.>. Arumugam et al. proposed a deep COACH method based on the original COACH, which uses raw pixels as input to train the AI agent's policy. The authors argued that the use of highly representative inputs facilitates the application of the algorithm in more complex environments <cit.>. When compared to reward-based methods, the advantage of policy-based methods is that they do not require specific feedback from humans to AI agents. Nevertheless, humans must determine which strategy may be the most effective in assisting the AI agent, which places higher requirements on the prior knowledge of human trainers.
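The following is a minimal sketch of the policy-shaping idea behind the formula above; it is a simplified illustration under our own assumptions rather than the exact algorithm of the cited work. Binary critiques are accumulated per state-action pair, converted into the probability that each action is optimal, and fused multiplicatively with the agent's own softmax policy; the consistency value and the fusion rule are assumptions of this sketch.

```python
import math
from collections import defaultdict

class PolicyShaper:
    """Combine accumulated binary human critique with the agent's policy.

    C is the assumed consistency of the human feedback (0 < C < 1).
    delta[(s, a)] is (#positive - #negative) critiques for the pair (s, a).
    """

    def __init__(self, consistency=0.9):
        self.C = consistency
        self.delta = defaultdict(int)

    def record_critique(self, s, a, positive):
        self.delta[(s, a)] += 1 if positive else -1

    def feedback_prob(self, s, a):
        d = self.delta[(s, a)]
        return self.C ** d / (self.C ** d + (1.0 - self.C) ** d)

    def shaped_policy(self, s, actions, q_values, temperature=1.0):
        """Multiply the agent's softmax policy by the feedback probabilities
        and renormalise (one simple way of fusing the two distributions)."""
        exp_q = [math.exp(q_values[(s, a)] / temperature) for a in actions]
        z = sum(exp_q)
        agent_probs = [e / z for e in exp_q]
        combined = [p * self.feedback_prob(s, a) for p, a in zip(agent_probs, actions)]
        total = sum(combined)
        return [c / total for c in combined]
```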
§.§ Value Function based Methods Value function based methods use human knowledge to estimate future rewards so as to obtain the highest potential reward at the end of the task <cit.>. They combine a value representing human preference with the value the AI agent obtains from the environment to promote the learning process. Taylor et al. proposed a method that combines human preference and agent value, called Human-Agent Transfer (HAT) <cit.>. The algorithm generates a strategy based on recorded human trainer preferences, which it then applies to shape the Q-value function. This shaping process provides a stable reward for the corresponding state-action pairs during the Q-learning process. Brys et al. proposed a method, named RLfD, that uses human demonstrations as input for the value function. This method generates a Gaussian function from the human demonstrations to guide the exploration process of the Q(λ) algorithm <cit.>. Although value function based methods are likely to be an effective way of minimising the amount of human feedback required, only a few studies have explored them so far. §.§ Exploration Process based Methods Reinforcement learning is a method in which an AI agent needs to continuously interact with the environment and complete tasks based on rewards. This means that the AI agent needs to perform actions that it has never tried before, a process referred to as exploration. In exploration process based methods, humans can increase efficiency by reducing the AI agent's errors and unnecessary attempts <cit.>. Exploration process based methods aim to restrict the action space by injecting prior human knowledge to guide the AI agent's exploration and thereby increase learning efficiency. Thomaz and Breazeal conducted an experiment in the game Sophie's Kitchen to evaluate human guidance that helps the AI agent reduce its action space in order to enhance learning efficiency <cit.>. The results suggest that employing human prior knowledge to limit low-utility attempts is more efficient than using scalar reward functions <cit.>. Suay et al. developed an extended approach in which the user may help exploration by highlighting goal states in the environment <cit.>. Yu et al. proposed an approach, termed action biasing, which leverages user feedback to steer the AI agent's exploration process. The sum of the agent and user value functions is employed as the value function for action selection, to incorporate human feedback into the AI agent's learning process <cit.>. These methods are considered effective, but they generally need to be trained by humans, and this training process requires a lot of professional knowledge and participation.
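The following is a minimal sketch of the action-biasing idea just described, written under our own simplifying assumptions rather than as the cited implementation: the agent selects actions greedily with respect to the sum of its learned Q-values and a table of accumulated human feedback, while only the environmental reward updates the learned Q-values. The decaying influence of the human table is an added assumption.

```python
from collections import defaultdict

class ActionBiasingAgent:
    """Q-learning agent whose action selection is biased by a human value table.

    Selection uses Q_agent(s, a) + beta * Q_human(s, a); learning updates only
    Q_agent from the environmental reward. beta decays so that the human
    influence fades as the agent gains experience (an assumption of this sketch).
    """

    def __init__(self, alpha=0.1, gamma=0.99, beta=1.0, beta_decay=0.999):
        self.Q = defaultdict(float)        # learned from the environment
        self.Q_human = defaultdict(float)  # accumulated human feedback
        self.alpha, self.gamma = alpha, gamma
        self.beta, self.beta_decay = beta, beta_decay

    def give_feedback(self, s, a, value):
        self.Q_human[(s, a)] += value

    def select_action(self, s, actions):
        return max(actions,
                   key=lambda a: self.Q[(s, a)] + self.beta * self.Q_human[(s, a)])

    def update(self, s, a, r_env, s_next, next_actions, done):
        best_next = 0.0 if done else max(self.Q[(s_next, a2)] for a2 in next_actions)
        self.Q[(s, a)] += self.alpha * (r_env + self.gamma * best_next - self.Q[(s, a)])
        self.beta *= self.beta_decay
```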
In general, collaborative reinforcement learning has shown great potential in improving the efficiency of decision-making tasks. However, further research is needed to determine how to build the environment models in which humans interact with AI agents. These models should consider not only the effectiveness and efficiency of interactive methods, but also interpretability, accountability, and possible ethical issues in decision-making. Therefore, in the following sections, we refer to the literature on the patterns of human-machine relations in the engineering field, and propose guidance for the future development of collaborative reinforcement learning methods. § DESIGN TRAJECTORY MAP Based on the previous CRL taxonomy, we propose a novel CRL Trajectory Design Map to guide researchers in designing CRL systems. When researchers start designing a human-AI collaborative reinforcement learning system, they can follow our CRL Trajectory Design Map (Figure <ref>) step by step. First, they start by selecting a collaborative pattern from a macro perspective in the Design Patterns (Section <ref>) category. Next, they choose the collaborative level and the number of participants in Collaborative Levels and Parties (Section <ref>). After that, they choose the collaborative capabilities that every party should have in Collaborative Capabilities (Section <ref>). Finally, they select suitable interactive methods and algorithmic models for the specific task requirements from the categories Interactive Methods (Section <ref>) and Algorithmic Models (Section <ref>). Figure <ref> presents our newly proposed CRL taxonomy, which contains the most commonly used and highly cited methods and design patterns in the CRL research area, and which can also be used as a Trajectory Map (see Figure <ref>) for designing collaborative reinforcement learning systems, as follows. Researchers may utilise our Trajectory Map to develop their architecture as they move from the top-level Design Patterns downwards, until the most detailed Algorithmic Models are selected. In the Map, the first part suggests Design Patterns, which are the most popular structures of human-AI collaborative frameworks in the CRL domain. These include cognitive systems engineering (CSE) <cit.>, Bosch's framework <cit.>, the Coactive design pattern <cit.>, and Schmidt's framework <cit.>. The second part is Collaborative Levels and Parties, while the third part covers Collaborative Capabilities, which include understanding, communication, commitments, and institutions. The fourth part is Interactive Methods, including explicit and implicit interaction methods, as well as multi-modal interaction modes <cit.>. The last part covers Algorithmic Models, which include reward-based methods <cit.>, value-based methods <cit.>, policy-based methods <cit.>, and exploration-process-based methods <cit.>. This taxonomy can be used as a systematic modelling tool for researchers and practitioners to select and improve their new CRL designs. They can choose an archetype in Design Patterns for the overall architecture at the start. Then, they can select a Collaborative Level and the number of Parties in the collaboration. After that, they can select the Collaborative Capabilities that the AI agents should have, and select suitable Interactive Methods and Algorithmic Models that meet the requirements of specific tasks. If researchers wish to learn about the most advanced techniques developed in the last decade, they can check the classification we provide in Table <ref>. § FUTURE WORK RECOMMENDATIONS Reinforcement learning seems to have reached a plateau after a period of rapid development. It is difficult to improve the efficiency of AI agents in a complex environment without clear feedback. The research community has proposed some collaborative methods to overcome these obstacles. For example, humans deliver feedback to AI agents through hardware or sensors to improve algorithmic efficiency, and AI agents provide humans with explanations of their decisions to improve the credibility of the algorithms. However, research in this area is only at the beginning stage, and there are many open challenges to be tackled. In the following sections, we recommend several promising future research directions in the field of Collaborative Reinforcement Learning (CRL). Combining Different Interactive Methods Develop more natural multi-feedback interactive methods by studying the advantages and disadvantages of different interactive methods.
Single interactive methods place higher demands on humans and can be inefficient, whereas multi-modal interactive methods lower interaction barriers and improve efficiency, providing users with a better interactive experience <cit.>. In the design patterns mentioned above, Combining Different Interactive Methods belongs to 'Augmentative Level collaboration'. It is the basis for the application of collaboration technology in real-life scenarios, and it is also an important factor in improving user experience. Therefore, researchers could work on more advanced interaction modes based on these design concepts and apply them to different scenarios. User Modelling It is important to build generic user models to enable the system to accept user feedback robustly. Such models could be used to build human-AI collaboration applications that reduce human fatigue by detecting and predicting human behaviour <cit.>, owing to their ability to adapt interaction channels and feedback types to the user's preferences. This will require empirical studies to find a way to map between user types and their preferred interaction channels and feedback types. In the patterns mentioned above, user modelling is one of the most important issues of 'Integrative Level collaboration'. Only with accurate models can humans and AI agents communicate without barriers. The ability to predict each other's behaviour can generate trust, and understanding the unexpected situations that may occur for each participant can establish a more flexible relationship and improve the entire system's robustness. Due to the rapid development of AI in recent years, a lot of user modelling work remains to be carried out. Researchers could build AI and user models from the perspective of 'Integrative Level collaboration': communication, trust, and resilience. Lack of Human Collaboration Data and Evaluation Methods Human subjective data is essential for improving CRL technology because humans are directly involved in the loop. However, there is limited research on collecting and evaluating such human data <cit.>. This circumstance is caused by a variety of factors. For example, the expense of collecting human data is high, and different types of subjective human data need different collection techniques, which is technically challenging. The typical approach to this problem is to conduct more experiments in order to collect sufficient data, and to create evaluation techniques by drawing on several disciplines (e.g., psychology). However, there are certain innovative approaches that deserve attention. Strouse et al. proposed a collaboration method that requires no human data <cit.>, called Fictitious Co-Play (FCP). In this work, they train the AI agent to be the best response to a population of self-play agents and their previous checkpoints taken during training. This might spark new ideas for training AI agents. Safe Interactive RL Despite the empirical success of reinforcement learning algorithms, we have very little understanding of the way such 'black-box' models work. This means that such systems cannot be held accountable for their own decisions <cit.>. Therefore, how to establish a mechanism to protect human safety and avoid unintended discrimination becomes very important. Solving this problem is crucial for the use of interactive reinforcement learning in high-dimensional environments in the real world.
In the patterns mentioned above, safe interactive RL is an important issue for both 'Integrative Level collaboration' and 'Debative Level collaboration'. In terms of Integrative Level collaboration, ensuring the safety of humans is one of the key factors for humans to trust AI. In terms of Debative Level collaboration, when the decisions of humans and AI agents are inconsistent, how to protect the interests of both parties is not only an engineering issue, but also an ethical one. This issue requires the joint efforts of multiple disciplines such as law, sociology, and ethics. Explainable Collaborative Reinforcement Learning Since reinforcement learning needs to be trained via environmental feedback, different sensors or different human feedback will lead to significantly different outcomes. Moreover, explaining why the AI makes a specific decision is critical for humans to trust the AI and to allocate essential tasks to the AI agent. Therefore, developing explainable collaborative reinforcement learning algorithms is required. Explainable collaborative RL approaches aim to help people understand how the agent perceives the environment and how it makes decisions by improving the transparency of the model. Improving the transparency of the model not only allows humans to find better ways to train the agent, but also allows them to trust the agent enough to collaborate with it <cit.>. High-dimensional Scenarios At present, both reinforcement learning and collaborative reinforcement learning have relatively limited application scenarios. The majority of experiments have been conducted in virtual gaming environments, where the agent is confronted with a low-dimensional environment and a small number of variables. This complicates the implementation of CRL in high-dimensional real-world scenarios. One algorithm efficiently transfers human behaviour to the agent through an autoencoder <cit.>, and another possibility is to utilise crowd-sourcing to collect human input <cit.>. However, this kind of issue requires more investigation. Fast Evaluation of Human Actions In real scenarios, many tasks need to be executed with rapid response, such as stock trading and autonomous driving. How to quickly evaluate the reaction of the other party in order to make the next decision is crucial. Some studies currently focus on this area, in particular exploring methods of evaluating human behaviour through visualisation models <cit.>, but more forms of evaluation of human behaviour are needed. Dynamic Mental Models The cooperative model requires a dynamic mental model from both humans and AI agents, as they constantly observe and learn about each other. They also need to update their strategies in a timely manner during the learning process <cit.>. As collaboration experience accumulates, it would be useful to inject this experience into new learning processes. At this stage, implicit human prior knowledge needs to be gradually transformed into explicit experience guidelines for future tasks. Therefore, establishing a dynamic mental model may greatly promote the development of human-AI collaboration. In the patterns mentioned above, dynamic mental models are critical to all three levels; making dynamic adjustments according to the collaboration level can greatly reduce cost and improve efficiency <cit.>. This is a huge challenge for researchers. A potential direction is to design general dynamic mental models according to the taxonomy we provide in this paper.
§ CONCLUSIONS In this paper, we have presented a survey of Collaborative Reinforcement Learning (Collaborative RL, or CRL) to support research into human-AI interactions and cooperative designs. This analysis resulted in a new CRL classification (see Table <ref>), a CRL Design Trajectory Map (see Figure <ref>), and a new CRL taxonomy (see Figure <ref>) that can serve as a systematic modelling tool for selecting and improving new CRL designs. Researchers can use our Trajectory Map to design a CRL system from scratch, or use parts of it according to their needs to refine an existing system. For example, researchers can select their desired system structure in the Human-AI Collaborative Design Patterns, identify and satisfy the requirements of the different components in Collaborative Levels & Parties and Collaborative Capabilities, and select different design components in Algorithmic Models and Interactive Methods. This is a comprehensive design approach from top to bottom and from macro to micro. In summary, through this survey, we provide researchers and practitioners with the tools to start improving existing CRL methods and creating new designs.
http://arxiv.org/abs/2405.08926v1
20240514194633
Atmospheric muons and their variations with temperature
[ "Stef Verpoest", "Dennis Soldin", "Paolo Desiati" ]
astro-ph.HE
[ "astro-ph.HE", "hep-ph" ]
1]S. Verpoest [cor1] [1]organization=Bartol Research Institute and Dept. of Physics and Astronomy, University of Delaware, city=Newark, postcode=19716, state=DE, country=USA [cor1]Corresponding author verpoest@udel.edu 2,3]D. Soldin [2]organization=Department of Physics and Astronomy, University of Utah, city=Salt Lake City, postcode=84112, state=UT, country=USA [3]organization=Karlsruhe Institute of Technology, Institute of Experimental Particle Physics, city=Karlsruhe, postcode=D-76021, country=Germany 4]P. Desiati [4]organization=Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin–Madison, city=Madison, postcode=53703, state=WI, country=USA Seasonal variations of atmospheric muons are traditionally interpreted in terms of an effective temperature that relates the atmospheric temperature profile at a given time to the dependence of muon production on atmospheric depth. This paper aims to review and generalize the treatment of muon production and effective temperature that has been used to interpret seasonal variations of atmospheric muons by many experiments. The formalism is developed both in integral form – for application to compact detectors at a fixed depth that record all muons with E_μ > E_μ,min – and in differential form – for application to extended detectors like IceCube, KM3NeT, and Baikal-GVD, where the rates are proportional to energy-dependent effective areas. This paper combines and builds upon the work on seasonal variations of atmospheric muons performed by Thomas K. Gaisser together with various collaborators over the last decade. It is partly based on and expanded from two paper drafts which were in preparation but which remain unfinished due to his passing. § INTRODUCTION Atmospheric muons come primarily from the decay of charged pions and kaons produced by cosmic-ray interactions in the upper atmosphere. In the energy range where the interaction lengths of the parent mesons are comparable to their decay lengths, higher temperatures lead to lower density and, therefore, to higher muon production rates. The degree of correlation evolves over an energy range defined by the critical energies for pions, ϵ_π ≈ 115 GeV, and kaons, ϵ_K ≈ 857 GeV, where the numerical values correspond to a temperature of 220 K. The correlation with temperature is small at low energies, E_μ < ϵ_π, where most mesons decay, and the rate becomes fully correlated with temperature for muon energies above several TeV. Because of the difference in their critical energies, the π^±/K^± production ratio is an important factor in this study. Prompt muons from the decay of charmed hadrons and neutral vector mesons remain uncorrelated with temperature below their critical energies of ∼ 10^7 GeV <cit.>, but make a negligible contribution to the overall rates and are therefore not considered. Seasonal variation of atmospheric muons has been a benchmark measurement of every underground detector since the classic paper <cit.> on muons in a salt mine near Ithaca, New York. At a depth of 1574 m.w.e., muons in that detector required E_μ > 440 GeV at production to reach the detector. Measurements with experiments at the Laboratori Nazionali del Gran Sasso (LNGS), starting with MACRO <cit.> and LVD <cit.>, have a variable overburden corresponding approximately to E_μ > 1.5±0.2 TeV at production, depending on the exact location of the detector. More recent observations at LNGS include BOREXINO <cit.>, GERDA <cit.>, and OPERA <cit.>. The MINOS far detector at a depth of 2100 m.w.e. in the Soudan mine <cit.> detects muons with E_μ > 730 GeV at production.
There are also measurements with shallower experiments, such as the MINOS near detector <cit.> and NOvA <cit.> at Fermilab, that correspond to E_μ ≳ 50 GeV. The relation between the measured muon rate R and the atmospheric temperature is conventionally quantified by a correlation coefficient, α_T, defined by Δ R/⟨ R ⟩ = α_T Δ T_eff/⟨ T_eff ⟩, where T_eff is the effective temperature characterizing the atmospheric temperature profile. The Δ in Eq. (<ref>) indicates the variation with respect to the yearly average muon rate ⟨ R ⟩ and effective temperature ⟨ T_eff ⟩. Several experimental measurements of the temperature correlation coefficient show that it varies from 0.2 to 0.95 in the energy range from 20 GeV to ∼𝒪(TeV) <cit.>.
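To make the use of the correlation coefficient concrete, the following is a minimal sketch (an illustrative example of ours, not code from any experiment) of how α_T can be estimated from matched daily series of the measured rate and the effective temperature by a least-squares fit of the fractional deviations through the origin; the synthetic input values are placeholders.

```python
import numpy as np

def correlation_coefficient(rates, t_eff):
    """Estimate alpha_T from daily muon rates and effective temperatures.

    Fits dR/<R> = alpha_T * dT_eff/<T_eff> through the origin by least squares.
    rates and t_eff are arrays of matched daily values (placeholders here).
    """
    rates = np.asarray(rates, dtype=float)
    t_eff = np.asarray(t_eff, dtype=float)
    dr = (rates - rates.mean()) / rates.mean()   # fractional rate deviation
    dt = (t_eff - t_eff.mean()) / t_eff.mean()   # fractional T_eff deviation
    alpha_t = np.sum(dr * dt) / np.sum(dt * dt)  # slope of a fit through the origin
    residuals = dr - alpha_t * dt
    sigma = np.sqrt(np.sum(residuals**2) / (len(dr) - 1) / np.sum(dt * dt))
    return alpha_t, sigma

# example with synthetic numbers (illustration only)
days = np.arange(365)
t_eff = 220.0 + 10.0 * np.sin(2.0 * np.pi * days / 365.0)               # K
rates = 1000.0 * (1.0 + 0.9 * (t_eff - t_eff.mean()) / t_eff.mean())    # Hz
print(correlation_coefficient(rates, t_eff))   # prints alpha_T = 0.9 for this toy input
```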
The flux of muons differential in energy at the surface is obtained from the integral over the production spectrum, ϕ_μ(E_μ, θ) = ∫_0^X_O dX P(E_μ, θ, X). Due to the relation to the atmospheric density profile, the muon production spectrum implicitly depends on the temperature T(X) at slant depth X. The rate of muons with energy E_μ from a direction corresponding to a zenith angle θ in a detector with effective area A_eff(E_μ,θ) is given by R(θ) = ∫ dX ∫_E_μ^min^∞ dE_μ A_eff(E_μ,θ) P(E_μ,θ,X). For a compact detector at a depth large compared to its vertical dimension, the effective area is simply its projected physical area at the zenith angle θ averaged over the azimuth angle. In this case, R(θ) = A_eff(θ) ∫ dX ∫_E_μ^min dE_μ P(E_μ,θ,X) = A_eff(θ) ∫ dX 𝒫(E_μ^min, θ, X) = A_eff(θ) I(E_μ^min,θ), where I(E_μ^min,θ) is the integral muon flux for θ, 𝒫(E_μ^min, θ, X) is the integral version of the production profile[In previous works, the integral muon production spectrum has sometimes been written as P(>E_μ, θ, X), which represents the differential production spectrum P integrated over all energies above some minimum energy E_μ. For clarity, we choose in this work to use instead the notation 𝒫(E_μ^min, θ, X) ≡ ∫_E_μ^min^+∞ dE_μ P(E_μ, θ, X).], and E_μ^min is the energy threshold for a muon to reach the detector at this angle. In both cases, the total rate, R, is given by integrating over the solid angle Ω, R = ∫ R(θ) dΩ. The differential version (Eq. <ref>) is appropriate for a geometrically extended experiment like IceCube where the effective area depends on muon energy, for example, because higher energy is required for a muon at a large angle to reach the lower part of the detector. Furthermore, such experiments are sparsely instrumented which may cause a fraction of muons to fail to pass the trigger or subsequent analysis steps, an effect which usually diminishes with increasing energy. For a compact detector at a given depth, e.g. MACRO, MINOS, and NOvA, any muon with sufficient energy to reach the depth of the detector can be recorded if it passes through the detector. In this case, the integral version of the production profile as in Eq. <ref> is appropriate (and has been used traditionally).

In the following sections, three approaches to obtaining the muon production spectrum are described. The first approach consists of an approximate analytical solution to the cascade equations including the pion and kaon channels. A second approach utilizes a numerical solver of the cascade equations which includes all relevant channels. A third and conceptually different approach is based on a parameterization of muon production profiles in extensive air showers, which are integrated over the flux of primary particles. For the purpose of illustration, a hypothetical cylindrical detector with a radius of 5 and a height of 20 at a depth of 2000 is used. For a compact detector at such a large depth, the effective area is given by the projected physical area. The average minimum energy that a muon requires to reach the detector is estimated from the approximation given in Gaisser:2016uoy. We consider these values as a sharp cutoff above which muons are detected and below which they are not[Note that because of the steeply falling spectrum of primary nucleons and consequently of atmospheric muons, an accurate description of the threshold region is crucial for accurate rate calculations for real detectors.]. The numerical values of the effective areas and threshold energies used in the calculations are given in tab:detector.
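For a compact detector of this kind, the only detector-specific ingredients are the projected area and the threshold energy. The Python sketch below illustrates the solid-angle integration for the hypothetical cylinder; the azimuth-averaged projected area of an upright cylinder is standard geometry, while the integral flux and the zenith-dependent threshold are toy stand-ins to be replaced by the production-profile calculations of the following subsections and the values of tab:detector.

import numpy as np

def projected_area(theta, radius=5.0, height=20.0):
    """Azimuth-averaged projected area of an upright cylinder viewed at zenith angle theta."""
    return np.pi * radius**2 * np.abs(np.cos(theta)) + 2.0 * radius * height * np.sin(theta)

def integral_flux(E_min, theta):
    """Toy stand-in for I(E_min, theta) = int dX P_int(E_min, theta, X)."""
    return 1.0e-6 * (E_min / 1.0e3) ** -1.7 / np.cos(theta)

def threshold_energy(theta, E_vert=1.0e3):
    """Toy zenith-dependent threshold; the actual values are those of tab:detector."""
    return E_vert / np.cos(theta)

def total_rate(n_cos=100):
    """R = int dOmega A_eff(theta) I(E_min(theta), theta) over the upper hemisphere."""
    cos_grid = np.linspace(0.05, 1.0, n_cos)
    theta = np.arccos(cos_grid)
    integrand = projected_area(theta) * integral_flux(threshold_energy(theta), theta)
    return 2.0 * np.pi * np.trapz(integrand, cos_grid)   # dOmega = 2*pi d(cos theta)

print(total_rate())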
The muon rates are calculated using daily temperature data of the South Pole atmosphere for the year 2012 obtained from the Atmospheric Infrared Sounder (AIRS) on board of NASA's Aqua satellite <cit.>. AIRS is capable of measuring the geopotential height and temperature in the atmosphere with an accuracy of 1 over 24 pressure layers between 1hPa and 1000hPa, even under cloudy conditions. Assuming an ideal gas law, the corresponding atmospheric density profile, ρ_air, can be obtained using the AIRS pressure and temperature data. All calculations are performed using the primary nucleon spectrum from Tom Gaisser's H3a flux <cit.>. §.§ Approximate analytical solution of the cascade equations The differential production profiles obtained from the cascade equations in the limits of low and high energy are repeated here from Ref. <cit.> and presented in detail in <ref> of this paper. The low- and high-energy regime is defined relative to the critical energies of the parent mesons of the muons, given by ϵ_π = m_π c^2/c τ_πRT/Mg≈115 T/220 for pions, and equivalent for ϵ_K. Here, m_π and τ_π are the mass and lifetime of the pion, g is the acceleration of free fall, R the molar gas constant, M the mean molar mass for air, and T the atmospheric temperature. For muons with ϵ_π, P(,θ,X)≈ϕ_N() e^-X/Λ_N/λ_N ×[ Z_Nπ(1-r_π^γ+1)/(γ+1)(1-r_π) + 0.636 Z_NK(1-r_K^γ+1)/(γ+1)(1-r_K) ], and for muons with ϵ_K P(,θ,X) ≈ϕ_N() ×[ ϵ_π/Xcos(θ)(1-r_π^γ+2)/(1-r_π)(γ+2)Z_Nπ/1-Z_NNΛ_π/Λ_π - Λ_N × (e^-X/Λ_π-e^-X/Λ_N) + 0.636 ϵ_K/Xcos(θ)(1-r_K^γ+2)/(1-r_K)(γ+2)Z_NK/1-Z_NNΛ_K/Λ_K - Λ_N × (e^-X/Λ_K-e^-X/Λ_N)], where r_π = m_μ^2/m_π^2, and λ and Λ are atmospheric interaction and attenuation lengths respectively. These equations are obtained by integrating solutions of the hadronic cascade equations (eq:cascade_equations) for charged pions and kaons to get the spectrum of leptons from π^± / K^±→μ^± + ν_μ (ν̅_μ), given a primary nucleon flux , with γ the integral spectral index. The integral over the primary flux is related to the primary flux evaluated at the energy of the muon by spectrum-weighted moments Z_Nh. The Z-factors are given by Z_Nh = ∫_0^1 x^γ n_N→ h/ x x, where x=E_h / E_N. This definition assumes Feynman scaling for the particle production and a constant spectral index γ, so that the spectrum-weighted moments are constants. Such an approximation is realistic because of the steepness of the primary spectrum and the threshold of the deep detector, which combine to limit the range of relevant primary energies. An approximation valid for all energies can be obtained with the form P(,θ,X)= Low/1 + Low/ High, where Low refers to eq:mu-prod-low and High refers to eq:mu-prod-high. The approximations are made separately for pions and kaons. From eq:mu-prod-low we see that P_π(X) = A_πμ(X)/1+B_πμ(X)cos(θ)/ϵ_π(T) with A_πμ(X)=Z_Nπ/λ_N(γ+1)1-r_π^γ+1/1-r_πe^-X/Λ_N, and from eq:mu-prod-high B_πμ(X)=γ+2/γ+1 1-r_π^γ+1/1-r_π^γ+2 X/Λ^* e^-X/Λ_N/e^-X/Λ_π-e^-X/Λ_N, where Λ_π^* = Λ_π×Λ_N/(Λ_π-Λ_N) is a combination of the attenuation lengths for nucleons and pions. The equations for the kaon channel have the same form with A_Kμ(X) multiplied by a factor of 0.636, the branching ratio for the decay K^±→μ^± + ν_μ(ν̅_μ) <cit.>. The total differential production spectrum is P(,θ,X)=ϕ_N(){P_π(X)+P_K(X)}, The equations assume a power law primary spectrum, where ϕ_N() = C_N×^-(γ+1) is primary nucleons per . When the low-energy form eq:mu-prod-low is integrated to get the corresponding integral production profile, . 
The high-energy form (eq:mu-prod-high) has an additional factor of muon energy in the denominator, so at high energy. Applying the approximation of Eq. (<ref>) then leads to (,θ,X) = ϕ_N() ×A_πμ(X)/γ+(γ+1)B_πμ(X)cos(θ)/ϵ_π. This form (plus the corresponding term for kaons) provides the production profile that can be inserted into Eq. <ref> to get the inclusive rate of muons (assuming an that does not depend on muon energy). The production profile for a specific and cos(θ) is shown in fig:Pmu. The above equations are for μ^+ +μ^-. The corresponding equations for ν_μ+ν_μ have the same form with the meson decay kinematic factors like (1-r_π^γ+1) and (1-r_K^γ+2) replaced by (1-r_π)^γ + 1 and (1-r_K)^γ+2, respectively <cit.>. The constants used in the calculations are given in tab:constants. More detail can be included in the calculation by taking into account the non-scaling behavior of hadronic interactions and gradual changes of the primary spectral index. To first approximation, this is done by introducing energy-dependent spectrum-weighted moments as in Gondolo:1995fq. For this work, we compared a calculation using the constant values from Gaisser:2016uoy based on Sibyll 2.3 (tab:constants), and a calculation using energy-dependent values obtained from Sibyll 2.3c <cit.> (see fig:not-constants). While the calculation with energy-dependent values gives a higher rate, the difference is nearly constant with the relative variations throughout the year deviating by less than 0.1% (see <ref>). In fig:rates, we show the daily rate calculated with the energy dependent parameters, compared to the rates obtained with the other methods considered. The calculated angular distribution of the events is shown in fig:angular. It is possible to check the accuracy of eq:low-higheq:piofX by expanding the exact solution of the cascade equations in eq:exact. A comparison of predictions given by the analytical approximation described here to a full numerical solution as described in the following section was presented earlier in Gaisser:2019xlw. §.§ Numerical solution of cascade equations The approximate analytical solutions of the cascade equations are based on various simplifications that can introduce uncertainties on the atmospheric muon fluxes. In order to estimate these uncertainties Monte-Carlo simulations or numerical solutions of the coupled cascade equations are required. The software package MCEq (Matrix Cascade Equations) <cit.> provides precise numerical solutions of the cascade equations with a level of detail comparable with current Monte Carlo simulations. To achieve this, the cascade equations are expressed in matrix form to make use of modern implementations of linear algebra algorithms. The calculations rely on several input parameters, such as the initial cosmic-ray flux and the atmospheric density profile. Further details can be found in Ref. <cit.>. An extension of this approach is realized with the Muon Intensity Code (MUTE) <cit.> which accounts for muon propagation in dense media to estimate muon fluxes in deep-underground experiments. However, in this work the simple approach based on effective areas and energy thresholds, as described in Section <ref> (Table <ref>), is used to obtain the expected muon flux in a hypothetical cylindrical detector of radius 5, height 20, and depth 2000. 
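Since the analytic expressions of the previous subsection are used below as a cross-check of the numerical results, it may be useful to note how little code they require. The sketch below implements A_hμ(X), B_hμ(X), the interpolated differential production profile, and its numerical energy integral. It is an illustration only: the interaction and attenuation lengths and the spectral index are placeholder numbers standing in for tab:constants, the spectrum-weighted moments are the energy-independent values quoted later in the text (Z_Nπ = 0.066, Z_NK = 0.0109), the explicit factors of E_μ are written out as required by dimensional consistency of the interpolation, and the primary flux is a toy power law rather than H3a.

import numpy as np

# Placeholder constants standing in for tab:constants
gamma = 1.7                        # integral spectral index of the primary nucleon flux
Z_Npi, Z_NK = 0.066, 0.0109        # energy-independent spectrum-weighted moments
lam_N = 90.0                       # nucleon interaction length [g/cm^2]
Lam_N, Lam_pi, Lam_K = 120.0, 160.0, 180.0   # attenuation lengths [g/cm^2]
r_pi, r_K = 0.573, 0.046           # (m_mu / m_parent)^2
BR_K = 0.636                       # branching ratio K -> mu nu

def eps_pi(T): return 115.0 * T / 220.0      # GeV
def eps_K(T):  return 857.0 * T / 220.0      # GeV

def A_coeff(Z, r, X):
    return Z / (lam_N * (gamma + 1.0)) * (1.0 - r**(gamma + 1.0)) / (1.0 - r) * np.exp(-X / Lam_N)

def B_coeff(r, Lam_h, X):
    Lam_star = Lam_h * Lam_N / (Lam_h - Lam_N)
    return ((gamma + 2.0) / (gamma + 1.0) * (1.0 - r**(gamma + 1.0)) / (1.0 - r**(gamma + 2.0))
            * X / Lam_star * np.exp(-X / Lam_N) / (np.exp(-X / Lam_h) - np.exp(-X / Lam_N)))

def phi_N(E):
    return 1.8e4 * E**(-(gamma + 1.0))       # toy all-nucleon power law

def P_diff(E, cos_theta, X, T):
    """Interpolated differential production profile P(E_mu, theta, X), pion + kaon channels."""
    P_pi = A_coeff(Z_Npi, r_pi, X) / (1.0 + B_coeff(r_pi, Lam_pi, X) * cos_theta * E / eps_pi(T))
    P_K = BR_K * A_coeff(Z_NK, r_K, X) / (1.0 + B_coeff(r_K, Lam_K, X) * cos_theta * E / eps_K(T))
    return phi_N(E) * (P_pi + P_K)

def P_int(E_min, cos_theta, X, T):
    """Integral production profile, obtained by numerically integrating P_diff over energy."""
    E = np.logspace(np.log10(E_min), 7, 400)
    return np.trapz(P_diff(E, cos_theta, X, T), E)

print(P_int(1.0e3, 1.0, 100.0, 220.0))   # e.g. 1 TeV threshold, vertical, X = 100 g/cm^2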
The atmospheric muon flux is determined with MCEq, using Sibyll 2.3c, at different atmospheric depths, X_i, assuming the primary nucleon flux from H3a and a daily atmospheric temperature and density profile at the South Pole derived from 2012 AIRS data. Subtracting the muon spectrum at X_i+1 from the spectrum at X_i for all i then directly yields the muon production spectrum P(,θ,X), which is shown in fig:Pmu. The expected muon rate, R(θ), is then calculated according to Eq. (<ref>). Analogously to the analytical approach, integration over the solid angle yields the total muon rate in the detector, as described in Eqs. (<ref>) and (<ref>). The resulting total muon rate is shown in fig:rates and the corresponding angular distribution in fig:angular. §.§ Parameterization of Monte Carlo cascades An alternative approach consists of integrating average muon production profiles in air showers over the primary cosmic-ray flux. A parameterization of such profiles based on simulations and its applications are described in Gaisser:2021cqh. The differential muon production spectrum P(,θ,X) is given by P(,θ,X)=∫_g(,E_0,θ,X, T) ϕ_N(E_0) dE_0 with g(,E_0,θ,X, T) = d/ d( dN(,E_0,A,θ,X,T)/ dX). Here dN(,E_0,A,θ,X,T)/ dX is the mean number of muons with energy > produced per g/cm^2 as a function of slant depth X in a cosmic-ray air shower initiated by a primary nucleus with mass number A, primary energy E_0, and zenith angle θ, where the atmospheric temperature at X is given by T. It is a product of the derivative of the Gaisser-Hillas () function <cit.>, often used to fit the longitudinal development of air showers and its derivative here interpreted as the longitudinal production of mesons in the cascade, multiplied by a decay factor that provides the temperature dependence of the decay probability of pions and kaons to muons, and a threshold factor: N/ X(,E_0,A ,θ,X,T)= N_ max exp((X_ max-X)/λ) × X_ max-X/λ (X-X_0) (X-X_0/X_ max-X_0)^(X_ max-X_0)/λ × F(,T) (1-A/E_0)^5.99. The parameters N_max, X_max, λ, X_0 are the free parameters appearing in the original function, which were parameterized in Gaisser:2021cqh in terms of E_0, A, and based on fits to muon production profiles obtained from air-shower simulations. For the parameterization, a scaling form depending on E_0/(A ) is used so that only the primary spectrum of nucleons is required in Eqs. <ref> etc. The decay factor is F(,T) = f_π/1+(f)cos(θ) X/r_πλ_πϵ_π(T) +f_K/1+(f)cos(θ) X/r_Kλ_K ϵ_K(T), with f≥ 1, a factor fitted from the simulations that gives the mean energy of all muons with energy greater than ; r_π=0.79 and are the fraction of the parent meson momentum carried by the muon, and λ_π=110 and λ_K=122 are the meson interaction lengths. The normalization factors f_π and f_K of the pion and kaon component are defined in terms of the average momentum they carry away in interactions of nucleons in the atmosphere, taking into account the branching ratio for the muon decay channel for charged kaons. This average momentum fraction is equivalent to the spectrum-weighted moment of eq:Z evaluated for . Requiring the sum of the normalization factors to be equal to one, they are defined as and , where numerical values from Gaisser:2016uoy were used for Z^γ=1. The inclusive muon production profile calculated according to eq:PofEmueq:params is shown in fig:Pmu. The calculated rates and zenith distribution are shown in fig:ratesfig:angular. 
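The parameterized production profile is likewise simple to evaluate once the fit parameters are specified. In the sketch below the Gaisser-Hillas parameters N_max, X_max, λ, X_0 and the factor f are left as inputs (they are specified by the fits discussed below), the γ=1 spectrum-weighted moments entering f_π and f_K are placeholder numbers, and the kaon momentum fraction, whose value is not repeated here, is likewise a placeholder; the combination f E_μ in the decay factor and the explicit E_μ in the threshold factor follow from the description of f as the mean muon energy and from dimensional consistency.

import numpy as np

# Placeholder gamma = 1 spectrum-weighted moments (the actual values are taken from
# Gaisser:2016uoy); 0.635 is the K -> mu nu branching ratio used in the normalization.
Z_Npi_1, Z_NK_1 = 0.079, 0.013
f_pi = Z_Npi_1 / (Z_Npi_1 + 0.635 * Z_NK_1)
f_K = 1.0 - f_pi

lam_pi, lam_K = 110.0, 122.0        # meson interaction lengths [g/cm^2]
r_pi_frac, r_K_frac = 0.79, 0.52    # mean momentum fraction carried by the muon (r_K is a placeholder)

def eps_pi(T): return 115.0 * T / 220.0
def eps_K(T):  return 857.0 * T / 220.0

def decay_factor(E_mu, f, cos_theta, X, T):
    """F(E_mu, T); f*E_mu is the mean energy of muons above E_mu, fitted from simulations."""
    arg_pi = f * E_mu * cos_theta * X / (r_pi_frac * lam_pi * eps_pi(T))
    arg_K = f * E_mu * cos_theta * X / (r_K_frac * lam_K * eps_K(T))
    return f_pi / (1.0 + arg_pi) + f_K / (1.0 + arg_K)

def dN_dX(E_mu, E0, A, cos_theta, X, T, N_max, X_max, lam, X0, f=1.2):
    """Single-shower muon production profile: Gaisser-Hillas derivative times the decay
    factor and the threshold factor (1 - A*E_mu/E0)^5.99."""
    gh = (N_max * np.exp((X_max - X) / lam) * (X_max - X) / (lam * (X - X0))
          * ((X - X0) / (X_max - X0)) ** ((X_max - X0) / lam))
    return gh * decay_factor(E_mu, f, cos_theta, X, T) * (1.0 - A * E_mu / E0) ** 5.99

# illustrative call with made-up shower parameters
print(dN_dX(E_mu=1.0e3, E0=1.0e5, A=1, cos_theta=1.0, X=120.0, T=220.0,
            N_max=5.0, X_max=150.0, lam=60.0, X0=-30.0))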
For our calculations, we use the fit parameters given in Table 1 from Gaisser:2021cqh for the four functions fitted to Monte Carlo for N_ max, X_ max, λ, and X_0. The integral over slant depth of Eq. <ref> is equivalent to the Elbert formula <cit.> approximation for the average number of muons per shower for a given zenith angle <cit.>: ⟨ N()⟩ ≈ A K/cos(θ) ( E_0/ A )^α_1 (1-A/E_0)^α_2, where A is the mass number of a primary nucleus of total energy E_0[The Elbert formula traditionally uses the notation ⟨ N_μ(>)⟩, written here instead as ⟨ N()⟩.]. The dependence on the ratio A /E_0 follows from the superposition approximation, in which incident nuclei are treated as A independent nucleons each of energy E_0/A. The threshold factor, i.e. the last factor in eq:ElbertFormula, is the same as for Eq. <ref>. The benefit of integrating eq:formula over eq:ElbertFormula is the dependence on atmospheric temperature of the former. Comparisons between the approach presented in this section and the analytic calculation from sec:analytic were shown earlier in Gaisser:2021bwj. Alternatively to the parameterization of production profiles obtained from simulation as discussed in this section, one could use MCEq (sec:mceq) to obtain average production profiles in air showers by using it to solve the cascade equations with a single primary particle as the initial condition. This will increase the accuracy of the calculation in various ways, for example, by taking into account all relevant muon production channels and including the energy dependence of the inclusive cross sections, as well as through the implementation of the curved geometry relevant for more horizontal directions. § EFFECTIVE TEMPERATURE The variation of muon rate with atmospheric conditions is ordinarily described in terms of correlation with an effective temperature parameter. The effective temperature characterizes the atmospheric temperature profile by averaging it with appropriate weights related to the muon production spectrum. Different definitions of effective temperature have been used in the literature. A definition that has traditionally been used can be obtained by taking the derivative of the rate in eq:rate with respect to temperature. The change in rate is obtained by integrating the change in atmospheric temperature over depth, i.e. Δ R(θ) = ∫dX∫d A_eff(, θ) dP(, θ, X)/dTΔ T(X). Defining Δ T(X) = T(X) - T_eff and setting Δ R = 0 for an isothermal atmosphere where T(X) = T_eff results in the following definition: T_ eff(θ) = ∫ dX∫ d (,θ) T(X) dP(,θ,X)/ dT/∫ dX∫ d (,θ) dP(,θ,X)/ dT. The total effective temperature is the weighted average of eq:Teff over the zenith distribution. The corresponding integral form is T_eff (θ) = ∫dX T(X) d(, θ, X)/dT/∫d X d(, θ, X)/dT. It applies to compact detectors for which the effective area cancels in eq:Teff at each zenith angle. For the analytic inclusive form of the pion channel spectrum from sec:analytic, for example, T(X) d P (, θ, X)/d T = A_πμ(X) B_πμ(X) cos(θ) / ϵ_π(T)/[1 + B_πμcos(θ)/ ϵ_π (T) ]^2. and T(X) d(, θ, X)/d T = ϕ_N() ×A_πμ(X) (γ +1) B_πμ(X) cos(θ)/ϵ_π(T)/[γ + (γ +1)B_πμ(X)cos(θ)/ϵ_π(T)]^2. An early implementation of this approach, presented in Grashorn:2009ey, is used in the analysis of MINOS, among others. For comparison with the existing literature, it is necessary to write the effective temperature in terms of weights: T_eff(θ) = ∫ X T(X) W(X)/∫dX W(X)≈∑_i δln(X_i) T(X_i) X_i W(X_i)/∑_i δln(X_i) X_i W(X_i). 
The second form is motivated by the fact that atmospheric temperatures are commonly tabulated in quasi-logarithmic intervals of depth, so the integrations in this work are done logarithmically. From eq:TdPdT_int W(X) = ϕ_N() ×A_πμ(X) (γ +1) B_πμ(X) cos(θ)/ϵ_π(T)/T(X) [γ + (γ +1)B_πμ(X)cos(θ)/ϵ_π(T)]^2. The form obtained here differs from the one of Grashorn:2009ey, with the weights now depending on the temperature profile through the critical energies. The normalized weights are compared in fig:weights. Despite the difference in the calculations, the weights are similar, with only a slight shift deeper in the atmosphere for the present calculation. For the calculation of according to eq:Teff, with the parameterization of sec:param, the corresponding form for the decay factor is T(X) dF(, θ, X)/dT = f_π (f) cos(θ) X / r_πλ_πϵ_π(T)/[1 + (f) cos(θ) X / r_πλ_πϵ_π (T)]^2. To calculate the derivative of the muon production spectrum with respect to temperature with MCEq, first the production spectrum P(,θ,X) is determined as described in Sec. <ref>. In a second step, muon production spectra are derived for a local temperature change of T=1. This is done by changing each atmospheric layer i in the AIRS temperature and density profiles individually by 1 to obtain P̂_i(,θ,X_i) and P̂(,θ,X)= (P̂_1(,θ,X_1), …, P̂_n(,θ,X_n) ), where n is the total number of layers considered in the AIRS data. The derivative of the production spectrum is then constructed via the difference quotient as P(,θ,X)/ T =P(T+ T)- P(T)/ T =P̂(,θ,X)-P(,θ,X)/1. The resulting derivative of the production spectrum in terms of weights, W(X), is also shown in Fig. <ref>. Alternatively to the definition of eq:Teff, in which the atmospheric temperature profile is multiplied by the derivative of the muon production spectrum with respect to temperature, the effective temperature has been defined as a straightforward convolution of the temperature profile with the muon production spectrum, normalized to the muon rate for each angle <cit.>: T̃_eff(θ) =∫ dX T(X)∫ d (,θ)P(,θ,X)/∫ dX∫ d (,θ)P(,θ,X). A benefit of this definition is that the technical implementation is more simple compared to the derivative definition when using MCEq. A comparison of the daily effective temperature with the two definitions is shown in fig:Teff_comparison. The relative variations in the calculated rate throughout the year are plotted as a function of relative variations of effective temperature in fig:alpha_comparison. The derivative definition of , eq:Teff, minimizes the difference between calculated rates on days that have the same value of . Using the alternative definition of eq:Tefftilde, a separation is visible between the months in which the atmosphere cools versus when it warms. This so-called hysteresis has been reported earlier by IceCube using this definition of effective temperature <cit.>. § CORRELATION COEFFICIENT The relation between the variation of effective temperature and the variation of muon rate can be expressed in terms of a correlation coefficient as in eq:alpha_T. A theoretical expectation for the correlation coefficient as a function of zenith angle and threshold energy can be calculated by writing it in the following form: α_T^th(, θ) = T/I(,θ) dI(,θ)/ dT. Using the expression for the integral rate, eq:compact, together with the expression in eq:integralPmu, the theoretical correlation coefficient for the integral muon spectrum can be estimated. To do so, we assume relatively small deviations of T(X) from ⟨ T_ eff⟩. 
The result is shown for fixed T = 220 in fig:alpha_theo as a function of cos(θ) (see also Eq. <ref>). We limit the energy range at the lower end to 50 as at lower energies muon decay is expected to have a non-negligible impact. At energies above 10, the muon prompt component is expected to become important, which will lower the value of compared to the calculations including only contributions from pions and kaons <cit.>. A calculation of the theoretical using the weights of Ref. <cit.> is compared with a range of experimental results in Ref. <cit.>. Calculation of the correlation coefficient for the differential case can be carried out equivalently, but is less universal because it depends on the energy-dependent effective area, which is different for each detector. Experimental values of α_T are obtained by applying a linear fit to Δ R / ⟨ R⟩ as a function of Δ T_ eff/⟨ T_ eff⟩, where R and T_ eff are the measured event rate and the corresponding calculated effective temperature (e.g. per day) and the denominators are the average over the observation period (e.g. a year). In fig:alpha_comparison, we show correlation plots with calculated rates for the hypothetical detector introduced in sec:rate. Values obtained for the correlation coefficients differ little between effective temperature definitions. A larger difference is present between the methods based on cascade equations and the muon profile parameterization method. The good agreement between the analytic approximation and the MCEq calculation has been shown before, including for the case of seasonal variations of neutrinos <cit.>. In Gaisser:2021bwj, a comparison between the analytic approach and the parameterization suggests that the level of agreement between different calculations and experimental results depends on the energy range relevant to the detector. § RELATIVE CONTRIBUTIONS FROM PIONS AND KAONS The higher critical energy of kaons compared to pions results in a lower correlation with temperature for muons from kaon decay. This is illustrated in fig:alpha_K_pi_separate, where is determined separately for the kaon and pion component in the calculation, R = R_π + R_K, using the analytic approximation eq:Panalytic. As a result, the measured correlation coefficient depends on the relative contribution of pions and kaons to the production of muons. A measurement of the seasonal variations of the atmospheric muon rate is therefore a probe of the atmospheric kaon-to-pion production ratio . In Grashorn:2009ey, was defined in terms of the spectrum weighted moments Z_NK and Z_Nπ as r_K/π = Z_NK/Z_Nπ. The dependence of the correlation coefficient on the K/π ratio can be estimated straightforwardly from the analytic approximation of sec:analytic, as the dependence on the spectrum weighted moments Z_Nπ and Z_NK is explicit in the parameters A_πμ and A_Kμ of eq:Api. In this case, the correlation depends only on the ratio of Z_NK and Z_Nπ. fig:alpha_Kpi_theo shows the theoretical expectation for α_T^th for different cos(θ) from eq:alpha_theo, calculated as a function of assuming Z_NK and Z_Nπ to be independent of energy, as in eq:Z. The nominal value of K/π ratio is in this case taken to be r_K/π = 0.0109/0.066 = 0.165. In Fedynitch:2019bbp, a modified K/π ratio was defined in terms of two weights w_π and w_K which scale the inclusive particle production spectrum, r^⋆_K/π = Z_NK^⋆/Z_Nπ^⋆ = w_K Z_NK/W_π Z_Nπ = w_K/w_π r_K/π. 
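Before examining the K/π sensitivity in more detail, it may help to summarize the numerical chain of the preceding two sections in a short sketch: the effective temperature is a weighted sum over the tabulated pressure levels (with W given by the temperature derivative of the production profile for the derivative definition, or by the production profile itself for the alternative definition), and α_T follows from a linear fit of the relative variations. The numbers below are synthetic and purely illustrative.

import numpy as np

def effective_temperature(X, T, W):
    """Discretized weighted average over quasi-logarithmic depth levels:
    sum_i dln(X_i) X_i W_i T_i / sum_i dln(X_i) X_i W_i."""
    dlnX = np.gradient(np.log(X))
    return np.sum(dlnX * X * W * T) / np.sum(dlnX * X * W)

def fit_alpha_T(rate, t_eff):
    """Slope of a linear fit of Delta R / <R> versus Delta T_eff / <T_eff>."""
    x = t_eff / np.mean(t_eff) - 1.0
    y = rate / np.mean(rate) - 1.0
    slope, _ = np.polyfit(x, y, 1)
    return slope

# toy temperature profile and stand-in weight, illustrating the weighted sum
X = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0, 320.0, 640.0])       # g/cm^2
Tprof = np.array([230.0, 225.0, 220.0, 218.0, 216.0, 215.0, 214.0, 213.0])  # K
W = np.exp(-X / 120.0) / X
print(effective_temperature(X, Tprof, W))

# synthetic one-year rate series with a known coefficient of 0.8
rng = np.random.default_rng(1)
days = np.arange(365)
t_eff = 220.0 * (1.0 + 0.05 * np.cos(2.0 * np.pi * days / 365.0))
rate = 1.0e3 * (1.0 + 0.8 * (t_eff / t_eff.mean() - 1.0)) + rng.normal(0.0, 2.0, days.size)
print(fit_alpha_T(rate, t_eff))     # recovers ~0.8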
When using energy-dependent Z-factors or comparing different methods of calculating , it is easier to express as a function of w_π and w_K rather than the value of itself. For calculations including only muons from the decay of π^± and K^±, will depend only on the ratio of the weights. In a full calculation including contributions from other channels, such as performed with MCEq, this simple relation breaks down. In fig:alpha_Kpi, a full calculation of of the expected for the detector of tab:detector is shown as a function of w_π/w_K for the analytic approximation, MCEq, and the parameterization. For the latter, the weights entered in the calculation of f_π = (w_π Z_Nπ^γ=1) / (w_π Z_Nπ^γ=1 + 0.635 w_K Z_NK^γ=1), with Z_Nπ^γ=1 the energy-independent spectrum weighted moment for γ=1, as described in sec:param. For MCEq, the dependence was approximately estimated by scaling the production spectra of muons produced by pions and kaons with w_π and w_K, respectively. The calculation of the effective temperatures and is then repeated, as described in Section <ref>, with the scaled distributions. Determining the experimental value of α_T is relatively insensitive to the assumed value of , as the dependence in the calculation of the effective temperature mostly cancels out. By comparing the experimental result to the calculated correlation coefficient, it is possible to measure for nucleon-nucleon interactions at median primary energies which are typically between 10-100 times the muon threshold energy at production, as illustrated in fig:response_curve. Preliminary results from IceCube were shown in Desiati:2011hea. An alternative calculation of the relation between α_T and r_K/π was shown earlier in Grashorn:2009ey and used by other experiments such as Borexino <cit.>. § MULTIPLE MUON EVENTS AND NUCLEAR PRIMARIES The traditional rate calculation as presented in sec:rate is based on the inclusive atmospheric muon flux. A shortcoming is that it does not take into account that muons produced in the same shower arrive at the detector simultaneously. While the muons arriving in bundles contribute individually to the calculated muon intensity, in realistic detectors they will often be indistinguishable, making the event rate lower than what is predicted from the calculation of eq:rtheta. An estimate of the effect can be obtained for compact detectors by modifying the calculations presented in sec:param. Combining eq:compacteq:PofEmueq:params, the traditional rate calculation can be written as R = I() = ∫_ E_0 ϕ_N(E_0) ⟨ N(, E_0)⟩, where ⟨ N ⟩ is the mean number of muons with energy above produced by a nucleon with energy E_0, and we omit the θ-dependence for conciseness. Writing the average as , with p(n) the probability for a nucleon to produce a bundle of n muons, shows explicitly that multiple muons get accounted for separately in the calculation rather than as a single event. Replacing this by the probability to have at least one muon above threshold per primary nucleon gives the expected intensity of bundles of muons with one or more muons above , I_bundle () = ∫_ E_0 ϕ_N(E_0) ∑_n=1^∞ p(n|⟨ N⟩). Assuming the multiplicity to follow a Poisson distribution[Forti:1990st finds that the multiplicity is described better by a negative binomial distribution.], the sum is given by 1 - e^-⟨ N ⟩. 
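A rough numerical illustration of this estimate is sketched below. The mean multiplicity is taken in the Elbert-formula form with placeholder constants (in the actual calculation it is obtained, including its temperature dependence, by integrating the parameterized production profile), and the multiplicity is assumed Poissonian as above. With a single component (A = 1 and the total nucleon flux) the function reproduces the bundle intensity just derived; passing a list of nuclear mass groups implements the generalization to primary nuclei discussed below. The toy fluxes have arbitrary normalizations.

import numpy as np

def n_mu_mean(E_mu, E0, A, cos_theta, K=0.015, alpha1=0.757, alpha2=5.25):
    """Elbert-type mean number of muons above E_mu per shower; K, alpha1, alpha2 are
    placeholder constants for illustration only."""
    x = A * E_mu / E0
    if x >= 1.0:
        return 0.0
    return A * K / cos_theta * x ** (-alpha1) * (1.0 - x) ** alpha2

def bundle_intensity(E_mu, cos_theta, flux_components, E_max=1.0e8, n=400):
    """I_bundle = sum_A int dE0 phi_A(E0) * (1 - exp(-<N>)): the rate of showers delivering
    at least one muon above threshold, assuming Poissonian multiplicity."""
    total = 0.0
    for A, phi in flux_components:
        E0 = np.logspace(np.log10(A * E_mu) + 1e-6, np.log10(E_max), n)
        nbar = np.array([n_mu_mean(E_mu, e, A, cos_theta) for e in E0])
        total += np.trapz(phi(E0) * (1.0 - np.exp(-nbar)), E0)
    return total

# toy primary flux: proton and iron components with arbitrary normalizations
flux = [(1, lambda E: 1.0e4 * E ** -2.7), (56, lambda E: 2.0e2 * E ** -2.6)]
print(bundle_intensity(E_mu=1.0e3, cos_theta=1.0, flux_components=flux))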
Another effect which will decrease the event rate compared to eq:rtheta is the fact that a fraction of the primary nucleons arrive at the Earth bound in nuclei, which are more likely to produce higher multiplicity bundles of muons arriving simultaneously. To take this into account, we can integrate over a realistic flux model, such as the H3a model, I_bundle () = ∑_A ∫_A E_A ϕ_A(E_A) ∑_n=1^∞ p(n|⟨ N(, E_A, A)⟩), where ϕ_A is the differential flux of element A and the sum runs over the different primary nuclei in the flux model. Note that we still assume that the energy in the nucleus is divided evenly over the A nucleons. The expectation ⟨ N ⟩, which depends on the atmospheric temperature profile, can be estimated by integrating the parameterized production profile eq:formula, ⟨ N ⟩ (, E_A, A, θ, X, T) = ∫ X N/ X (, E_A, A, θ, X, T). The effect of taking muon multiplicity and a realistic nucleus flux into account is shown in fig:R_multimu. Performing the calculation using the total nucleon flux but taking into account multiple muon events decreases the expected rate by close to 10%. Taking into account also the mass composition of primary nuclei decreases the expectations by another 10%. It is of interest to examine how this modified rate calculation affects the expected correlation coefficient. The correlation plot including different rate calculations is shown in fig:alpha_multimu. Here, the effective temperature is taken to be the same in all cases, i.e. it is given by eq:Teff. This shows how the standard approach of comparing measured rates to the calculated may cause an underestimation of . This may in turn lead to inaccuracies in the determination of r_K/π. We note that this is a simplified estimate of this effect. A more accurate calculation can be obtained replacing the parameterized muon production profiles by production profiles obtained by using MCEq to solve the cascade equations for individual primary nuclei, or by performing a full simulation of the problem. This is especially important for geometrically extended detectors, where the energy threshold region needs to be treated in more detail. § SUMMARY The flux of atmospheric leptons varies with the seasons as the atmosphere contracts and expands, which influences the decay probability of parent mesons. The observation of this variation in the muon rate of underground detectors has a long history, and is usually analyzed in terms of its correlation with an effective temperature which is a weighted average of the atmospheric temperature profile. The magnitude of the correlation is expressed in terms of a correlation coefficient, which is sensitive to properties of the hadronic interactions in the atmosphere, specifically the kaon-to-pion production ratio. The expected rate of muons can be calculated by integrating over the muon production spectrum multiplied by the effective area of the detector. An important difference exists between large-volume detectors where the effective area is energy dependent, and compact detectors at large depth, which can be approximated as energy independent (except for the dependence of the muon energy threshold on the zenith angle). Various approximations for calculating the muon production have been presented in the literature, each with their own advantages and disadvantages. 
We have considered here an approximate analytical solution to the atmospheric cascade equations, a code which numerically solves the cascade equations, and an approach where one integrates the muon production spectrum in individual air showers over the primary flux. Furthermore, different definitions of effective temperature have been used in the literature. A so-called derivative definition of the effective temperature, eq:Teff, follows naturally from the formalism, but is less straightforward to calculate than the simple average of the atmospheric temperature profile weighted by the muon production spectrum which has alternatively been used. In this work, we have compared several of these different methods and definitions, and showed how they lead to different predictions of the correlation coefficient . We have also demonstrated the relation between the and the kaon-to-pion production ratio. Finally, the relevance of multiple muon events and nuclear primaries was discussed, which are both not taken into account in the standard approach of an inclusive flux calculation from the total primary nucleon flux. The treatment of seasonal variations of the atmospheric neutrino flux was not treated explicitly in this paper but can be carried out equivalently. With neutrinos originating dominantly from kaon decay above several hundred GeV, the temperature correlation is expected to be smaller compared to muons up to energies of several TeV <cit.>. The different kinematics of neutrino production in the atmosphere thus make it possible to probe the K/pi ratio in an independent way using the same observatory. The feasibility has been demonstrated by the recent observation of seasonal variations of atmospheric neutrinos by IceCube <cit.>. Acknowledgments: We thank our colleagues A. Fedynitch, S. Tilav, and D. Seckel for helpful discussions related to this work. SV acknowledges funding from the National Science Foundation (NSF) grant #2209483. § CASCADE EQUATIONS AND THEIR APPROXIMATE SOLUTIONS A simplified form of the cascade equation for the inclusive spectrum of charged pions in the atmosphere Π(E,X) is <cit.> dΠ/ dX= -Π (E,X)(1/Λ_π+ϵ_π/ E X cos(θ)) +Z_Nπ/λ_Nϕ_N(E) e^-X/Λ_N, with a similar equation for the charged kaon channel. The equation has two loss terms. The first is from pion interactions in the atmosphere where Λ_π>λ_π, int is an attenuation length for pions that accounts for their regeneration. The second is the pion decay term, which depends on temperature, as in eq:epsilon_pi. X is the atmospheric slant depth along a direction with zenith angle θ, and the solution applies to a boundary condition at the top of the atmosphere where Π(E,X=0) = 0 and ϕ_N(E) is the spectrum of nucleons evaluated at the energy of the pion. This form holds for a power-law spectrum of primary nucleons and for production cross sections that depend only on the ratio x_L of the lab energy of the secondary particle to that of the parent. In this case, the energy-dependence of the production of the secondary is represented by the spectrum weighted moment, which for charged pions is Z_Nπ=∫_0^1(x_L)^γ-1F_Nπ(x_L)dx_L, with F_Nπ(x_L=E_π/E_N) the dimensionless inclusive particle production spectrum F_Nπ = E_π/σ_N, air dσ_N, air→π/ dE_π=E_π dn_π(E_π,E_N)/ dE_π, . which follows from the inclusive cross section σ_N, air→π, where σ_N, air is the inelastic nucleon-air cross section. In application of this approximation, it is important to include all intermediate channels in the calculation of the spectrum weighted moments. 
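Channel bookkeeping aside, the moment itself is a one-line quadrature once an inclusive spectrum is specified. The toy sketch below uses an arbitrary scaling form F(x) purely for illustration; realistic Z-factors require the modeled inclusive spectra of all relevant channels (e.g. from Sibyll), not this simple shape.

import numpy as np
from scipy.integrate import quad

def Z_factor(F, gamma=1.7):
    """Spectrum-weighted moment Z = int_0^1 x^(gamma-1) F(x) dx for a scaling inclusive spectrum F."""
    val, _ = quad(lambda x: x ** (gamma - 1.0) * F(x), 0.0, 1.0)
    return val

F_toy = lambda x: 0.6 * (1.0 - x) ** 3     # arbitrary toy shape
print(Z_factor(F_toy))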
Especially important, for example, is p+ air→Λ + K^+ + xxx, which has an important influence on the muon charge ratio and on the energy dependence of the kaon channel in general <cit.>. Comparison <cit.> of the approach given here with MCEq <cit.> that includes all intermediate channels shows only small differences, see also sec:ratesec:alpha. Generalizations to include non-scaling behavior of the production cross sections and energy-dependence of the primary spectral index are possible <cit.>. However, the main justification for this simple approach, some version of which has been used by many experiments, is that the seasonal variation is itself a ratio in which many uncertainties cancel. The solution of Eq. <ref> for charged pions is Π(E,X)=e^-(X/Λ_π)Z_Nπ/λ_Nϕ_N(E) ∫_0^Xexp [-X'/Λ^*_π ] (X'/ X )^ϵ_π/Ecos(θ) dX', with Λ^*_π = Λ_πΛ_N/(Λ_π-Λ_N). In the high-energy limit, the scaling limit solution of eq:exact, subject to the boundary condition Π (E,0)=0, is Π(E,X) E≫ϵ_π⟶ϕ_N(E, 0) Z_Nπ/ 1-Z_NN Λ_π/Λ_π -Λ_N ( e^-X/Λ_π-e^-X/Λ_N) . In the low energy limit, Π (E,X)E≪ϵ_π⟶Z_Nπ/λ_Nϕ_N(E, 0) e^-X/Λ_N X E cos(θ)/ϵ_π. Accounting for the two-body decay kinematics of π^±→μ ν_μ leads to the muon production spectrum as an integral over the meson fluxes: P_μ(E_μ,X) = ϵ_π/ Xcos(θ)(1-r_π) ∫_E_μ^/r_πΠ(E,X)/ E dE/ E + 0.635 ϵ_K/ Xcos(θ)(1-r_K) ∫_E_μ^/r_KK(E,X)/ E dE/ E. Inserting the low- and high-energy limiting approximations for Π(E,X) and K(E,X) into Eq. <ref> leads to the corresponding expressions for the low- and high-energy muon production spectra in eq:mu-prod-low and eq:mu-prod-high. To check the accuracy of the approximation of eq:low-high, one can expand the exponentials in eq:exact and integrate to obtain Π(E,X)=e^-(X/Λ_π)Z_Nπ/λ_Nϕ_N(E) X ×[1/α_π + 1-(X/Λ^*_π)1/α_π + 2+1/2!(X/Λ^*_π)^21/α_π + 3⋯], where α_π=ϵ_π/(Ecos(θ)). Inserting this expression into Eq. <ref> and defining z = E/E_μ and ξ_π = ϵ_π/(E_μcosθ) then leads to a rapidly converging expression for the muon production spectrum that can be evaluated numerically to compare with the approximation of Eq. <ref>. The series is P_μ,π(E_μ,X) = e^-X/Λ_π/1-r_πZ_Nπ/λ_Nϕ_N(E_μ) ξ_π ×∫_1^1/r_π dz/z^γ+2[1/z+ξ_π-X/Λ_π^*1/2z+ξ_π+1/2!(X/Λ_π^*)^21/3z+ξ_π...]. The constants used in this work are those relevant for from Gaisser:2016uoy, repeated in tab:constants. The non-scaling cross sections and energy-dependent spectral index can be taken into account to first approximation by using energy dependent values for the parameters in the equations. Numerical values, shown in fig:not-constants, were obtained using MCEq and Sibyll 2.3c <cit.>. A comparison between the calculations using constants and energy-dependent parameters is shown in fig:const_vs_Edep. The difference in rate is nearly constant throughout the year, indicating that the energy-independent calculation is a valid approximation to determine the magnitude of the seasonal effect. elsarticle-num
http://arxiv.org/abs/2405.09473v1
20240515160942
Some 1d (Supersymmetric) Quantum Field Theories Reduced from Chern-Simons Gauge Theories
[ "Burak Oğuz", "Bayram Tekin" ]
hep-th
[ "hep-th", "gr-qc", "math-ph", "math.MP" ]
Robust inference of gravitational wave source parameters in the presence of noise transients using normalizing flows Xin Zhang May 20, 2024 ==================================================================================================================== § INTRODUCTION In gauge theories, one encounters complicated tensorial equations of motion which are hard to solve without some symmetry assumptions. One can employ a symmetric ansatz on the fields and reduce the equations of motion to simpler equations on scalar functions possessing certain symmetries, which render the equations easier to solve. Of course, such ansatzë does not exhaust all solutions since the solutions to the equations of motion need not have the symmetry imposed on the ansatz. But it is useful in any case to obtain some class of solutions that obey some symmetries. Such symmetry reduction method at the level of the action was used by Weyl to obtain the spherically symmetric metric solutions to Einstein's field equations. It was shown by Palais <cit.> that the symmetric critical points of the action are critical symmetric points of the field equations if the symmetry group is compact. That means that one can find the equations of motion from the full action, apply the symmetric ansatz, and have a reduced equation that can hopefully be solved; on the other hand, one could equivalently insert the ansatz into the full action, integrate out the irrelevant coordinates using the symmetry, obtain a lower-dimensional theory, and solve the equations of the reduced theory. Palais showed that the two paths give the same equations of motion when the symmetry imposed is associated with a compact group (for example, when spherical symmetry is imposed, the associated group is the special orthogonal group which is compact so Palais' theorem is valid for such a case). Given that the equations of motion of Yang-Mills theory are extremely complicated to solve due to their non-linearity, the trick described above was used to reduce the instanton equations to simpler equations. Witten <cit.> proposed an ansatz that depends only on the Euclidean time t and r = (x_ix_i)^1/2 to reduce the instanton equations. The ansatz is as follows: A_i^a = ε_iakx_k/r^2 (φ_2 - 1) + δ^⊥_ia/rφ_1 + x_ix_a/r^2 A_1, A_0^a = x_a/r A_0, where δ^⊥_ia = ( δ_ia - x_ix_a/r^2), and A_0,A_1,φ_1, & φ_2 depend only on r and t ≡ x_4. Interestingly, when one inserts this ansatz into the Yang-Mills action, the reduced action turns out to be the Abelian-Higgs model on the Poincaré half-plane with the metric g^ab = r^2 δ^ab: S_YM = 1/4 ∫ d^4x F_μν^a F_μν^a A = 8π∫_-∞^∞ dt ∫_0^∞ dr [ 1/2 (D_μφ_i)^2 + 1/8 r^2 F_μν^2 + 1/4 r^-2 (1 - φ_i φ_i)^2 ], where D_μφ_i = ∂_μφ_i + ε_ij A_μφ_j and F_μν = ∂_μ A_ν - ∂_ν A_μ (i=1,2 and μ,ν = 0,1). The instanton equations of YM turn out to be the Bogomolny'i equations of vortices of the 2-dimensional Abelian-Higgs model <cit.>. Geometrically, the same equations describe minimal surfaces in ℝ^1,2 <cit.>. An important observation about Witten's ansatz is that it mixes the space-time indices i with the gauge indices a. Since the gauge group is SU(2), there are three generators so the indices run over the same values. So A_i^a is like a rank 2 tensor in three dimensions and any such tensor can be decomposed into anti-symmetric, traceless symmetric, and trace parts. 
In the above ansatz, the anti-symmetric part corresponds to (φ_2-1), the symmetric traceless part corresponds to -1/3( δ_ia - 3x_ix_a/r^2) ( φ_1/r + A_1 ), and the trace part corresponds to 2φ_1/r - A_1. For A_0^a, the only choice is a function multiplied by x^a, and that function is chosen as A_0/r. This mixing of space-time & gauge indices can be done for other theories, in particular for 3-dimensional Euclidean theories. There, we don't need any A_0, and the ansatz functions now depend only on r= (x_ix_i)^1/2. Integrating out over the angles, one could reduce the action to the action of a one-dimensional theory. In 3-dimensions, besides the Yang-Mills theory, there is another gauge theory, called the Chern-Simons theory, which was extensively studied as a topological field theory <cit.>, and has important appearances in 3d gauge theories <cit.>, and in condensed-matter systems <cit.>. It is also known that Chern-Simons theory for some gauge groups is related to 3d Einstein gravity with or without a cosmological constant in the first order formalism, as shown by Witten <cit.>, once the invertibility of the dreibein is relaxed. Due to the many interesting phenomena Chern-Simons theory entails, we will study it within the framework of dimensional reduction. We summarize the sections of the paper * Section <ref>: Our starting point is pure SU(2) Chern-Simons theory in the Euclidean formalism. We employ a 3-dimensional version of spherically symmetric ansatz on the gauge field and show that the reduced action is very much like a fermionic theory coupled to a U(1) gauge field with a 1d Chern-Simons kinetic term. A general version of this model was studied in <cit.>. We show that at the level of classical equations of motion, one can map the spherically symmetric solutions of the 3d Chern-Simons theory to the classical solutions of the model studied in <cit.>. * Section <ref>: After our results at the level of classical equations of motion, we turn to the quantum theory and show that the semiclassical (large κ) limit of the quantum Chern-Simons theory is dual to the quantum theory of the reduced action (a 1d Chern-Simons matter theory with a massless complex scalar field having a fermionic kinetic term). This result is very interesting because what this implies is that the large κ limit of the quantum Chern-Simons theory is equivalent to a 1-dimensional QFT based on the action obtained from the dimensional reduction of the 3d Chern-Simons theory. The equivalence is in the following sense: The path integral of the 3d Chern-Simons theory in the saddle point approximation agrees with the saddle point approximation of the 1d reduced theory, in the semiclassical limit. Moreover, we show that this duality must hold for the observables as well. That is, the Wilson loop operators in the quantum Chern-Simons theory should have corresponding objects in the 1d reduced theory. To what do the Wilson loops, which are extended objects, correspond in the 1d reduced theory is an open problem. * Section <ref>: We extend our study to Euclidean Chern-Simons gauge theory with gauge group SL(2,ℂ). This choice of the gauge group is particularly interesting due to its relation to de Sitter gravity in 3 dimensions <cit.>. Our results from SU(2) smoothly extend in a foreseeable manner. 
The reduced action under a spherical symmetric reduction gives 2 copies of the reduced action obtained in SU(2), and one can view this theory, just as in the SU(2) case, as a fermionic theory (we remind the reader that this association to a fermionic theory is valid only at the level of classical equations of motion, and is not expected to carry over to the quantum theory). One interesting observation is that if one uses the Schwinger gauge in 3 dimensions, x_i 𝒜_i^a=0, this equates the two distinct 1d Chern-Simons connections. The result of this is a single Chern-Simons term in 1-dimension, but now with a factor 2 in front. This 2 is interesting in that it allows, in the dimensionally reduced action, the Chern-Simons level to be a half-integer. Of course, the level is quantized to be an integer in 3 dimensions, and this does not spoil the gauge invariance of the 1d theory, but it is still interesting that fractional Chern-Simons levels are allowed in the reduced theory. * Section <ref>: In this section, we carry out the semiclassical evaluation of the SL(2,ℂ) Chern-Simons theory or the de Sitter gravity. Again, the results in this section extend smoothly from the results of section <ref>, with the semiclassical evaluation of the partition function of dS gravity being the product of two partition functions of a 1d QFT. Again, it is the case that the gauge invariant observables of 3-dimensional theory must reduce to those of the reduced theory. We do not give an account of what those observables might be, it is an open problem. * Section <ref>: Having studied the pure Chern-Simons theories, we now turn our direction to Euclidean Chern-Simons Higgs theories, with interest in the monopole (strictly speaking, instanton) solutions. We employ a spherically symmetric ansatz that has been well studied in the past for the study of monopoles in Chern-Simons Higgs theories <cit.>. One problem we face is that the reduced action from the Higgs sector contains r-dependent terms, which makes the reduced 1d theory depend explicitly on r. To remedy this, we get help from an interesting observation, which was made in <cit.>. The observation is the following: In the first-order formalism of 3-dimensional gravity, the dreibein e_i^a and the spin connection ω_i^a have the same index structure as A_i^a, so one can employ an analogous spherically symmetric ansatz on these fields! This turns out to be a very clean way to go around the problem, as is evident from the results in this section. We will not deal with dynamical gravity so we will not consider the spin connection ω_i^a, but we use the ansatz on the dreibein e_i^a to fix the geometry so that the reduced action does not have explicit r-dependence. From the reduction of the Chern-Simons term, we get a reduced action which can be viewed, at the classical level, as the action of a fermionic theory. From the Higgs sector, one gets a reduced action containing the canonical kinetic term of a real scalar field with a self-interacting potential depending on what one chooses for the potential in the 3d theory, and an interesting interaction arises between fermionic fields and the real scalar which is of the form 1/k𝖷^2 ΨΨ with 𝖷 the scalar field, and Ψ the fermionic field that is associated to the reduced action of the Chern-Simons term. The interesting thing about this interaction is that k is the quantized Chern-Simons level in 3-dimensions. This is an unusual appearance of the Chern-Simons level in the denominator. 
Moreover, we observe that the interaction term 𝖷^2 ΨΨ signals a supersymmetric interaction, so we modify the Higgs potential in 3-dimensions to add a sixth-order term, and with this modification, we get a supersymmetric quantum mechanics in the reduced action. The conclusion from this section is the following: The spherically symmetric monopole solutions in the Chern-Simons Higgs theory are governed by a supersymmetric quantum mechanics. We further show that the flux of the monopole is proportional to the fermion number operator F =ΨΨ of the supersymmetric quantum mechanics, which in turn is related to the Witten index ℐ = ( (-1)^F e^-β H). * Section <ref>: In this section, we extend our results from the study of SU(2) Chern-Simons Higgs theory. The transition of the results is again smooth and foreseeable. We get two copies of the supersymmetric quantum mechanics we got in section <ref>, and the total flux of the monopoles is now proportional to the sum ∑_i=1^2 Ψ_i Ψ_i. * Section <ref>: Having extensively studied the classical Chern-Simons Higgs theory and its monopoles, we now discuss the quantum theory. Things are much more involved now due to the interactions between the fields. We demonstrate the challenges by studying a 0-dimensional toy model and argue that the semiclassical evaluation of the quantum Chern-Simons Higgs theory is much more complicated than that of pure Chern-Simons theory. We discuss that because the solution space of Chern-Simons Higgs theory does not admit a group structure, our computation in section <ref> cannot be extended for the Chern-Simons matter theory, because our computation in section <ref> relies crucially on the group structure of the solution space of Chern-Simons theory. We in any case sketch a semiclassical argument discussing why the dimensionally reduced action is relevant for the quantum Chern-Simons Higgs theory. The reason is that some subspace of solutions of the Chern-Simons Higgs theory can be recast as the solution space of the reduced action, and in the semiclassical regime, the main contributions come from the classical solutions. This still does not indicate a strong agreement between the path integral of the 3d Chern-Simons Higgs theory and the reduced theory, at least not as strong as an agreement that we find in section <ref>, but it is a crude approximation. § EUCLIDEAN SU(2) CHERN-SIMONS THEORY The Euclidean Chern Simons Theory on a 3-manifold ℳ is defined by the action I = -i κ/4π∫_ℳ CS(A), with CS(A) = ( A∧ dA + 2/3 A ∧ A ∧ A ), where A ∈Ω^1(ℳ) ×𝔰𝔲(2), is the connection of the trivial Principal SU(2) bundle over ℳ. We choose the quadratic bilinear form in the Lie algebra 𝔰𝔲(2) such that (t^at^b) = -1/2δ^ab. In components, the action reads I = iκ/8π∫_ℳ d^3x ε^ijk( A_i^a∂_j A_k^a + 1/3ε^abc A_i^a A_j^b A_k^c ). The Chern-Simons action is not gauge invariant. Under a gauge transformation, the change in the Chern-Simons action is δ I = -2π i κ n + ∫_∂ℳ B, with n = -1/24π^2∫( gdg^-1∧ gdg^-1∧ gdg^-1) ∈ℤ , the winding number of the gauge element g(x) ∈ SU(2), and B is a boundary term which we drop. Although the Chern-Simons action is not gauge invariant, one can get a gauge invariant quantum theory if one ensures that e^-δ I =1 so that the path integral remains the same under a gauge transformation, which is what defines the quantum Chern-Simons theory. 
The condition for the quantum Chern-Simons theory to be gauge invariant is then -δ I = 2π i m ; m ∈ℤ, from this condition, it follows that κ∈ℤ, so the parameter appearing in front of the action must be an integer to ensure a sensible quantum theory. The equations of motion are given by the critical points of I. Varying with respect to A one gets the flatness condition F_ij^a = 0, which are solved by flat connections A = gdg^-1 for g∈ SU(2). For the component fields A_i^a, we employ the spherically symmetric ansatz that is a 3-dimensional version of the multi-instanton ansatz of <cit.>: A = A_i^a t^a dx^i = t^a [ (φ_2(r) - 1 ) ε_iakx_k/r^2 + φ_1(r) δ^⊥_ia/r + A(r) x_ix_a/r^2] dx^i, where δ^⊥_ia≡δ_ia - x_ix_a/r^2, and r= (x_ix_i)^1/2 (i=1,2,3). With this ansatz, one gets (see appendix for explicit computations): ε^ijk A_i ∂_j A_k^a = 2/r^2( φ_1 φ'_2 - φ'_1 (φ_2-1) + 2A (φ_2-1) ), ε^ijkε_abc A_i^a A_j^b A_k^c = 6/r^2 A ( (φ_2-1)^2 + φ_1^2 ), 1/2ε^ijk F_jk^a = -( ∂_1φ_1 - A φ_2/r^2) ε_iak x_k + ( ∂_1φ_2 + A φ_1/r) δ^⊥_ia - ( 1 - φ_1^2 - φ_2^2/r^4) x_ix_a, and a prime denotes a derivative with respect to r. Therefore the Chern-Simons form is reduced to ε^ijk( A_i^a∂_j A_k^a + 1/3ε^abc A_i^a A_j^b A_k^c ) = 2/r^2( φ_1 φ'_2 - φ'_1 (φ_2-1) + 2A (φ_2-1) + A ( (φ_2-1)^2 + φ_1^2 )) = 2/r^2( φ_1 φ'_2 - φ'_1 (φ_2-1) + A (φ_2^2 + φ_1^2 - 1) ). Since the functions φ_1, φ_2, and A depend only on r, we can integrate over the angles in the action to arrive at the 1-dimensional action S_CS = iκ∫ dr ( φ_1 φ'_2 - φ'_1 (φ_2-1) + A (φ_2^2 + φ_1^2 - 1) ). Dropping the boundary term ∫ dr φ_1', we have: S_CS = i κ∫ dr ( φ_1 φ_2' - φ_1' φ_2 + A (φ_1^2 + φ_2^2 - 1) ). Either from varying this action with respect to φ_1,φ_2,A or by using the form of F_ij^a under the reduction, we have the following equations of motion 0 = ∂_1 φ_1 - A φ_2 , 0 = ∂_1 φ_2 + A φ_1 , 0 = φ_1^2 + φ_2^2 -1 . The action (<ref>) is a first-derivative action. We combine φ_1 and φ_2 into a complex scalar field Y via: Y ≡φ_1 + i φ_2 ; Y≡φ_1 - i φ_2 . Then Y Y' - Y' Y = 2i ( φ_1 φ_2' - φ_1' φ_2), Y Y = φ_1^2 + φ_2^2. Thus, (<ref>) becomes: S_CS = κ∫ dr ( 1/2 (Y Y' - Y' Y) + i A( Y Y -1) ). Integrating by parts Y' Y and dropping the boundary term, the action reads S_CS = κ∫ dr ( Y (∂_r + iA ) Y - i A ). This is very nearly the action of a fermion living in 1-dimension, coupled to a gauge field A whose kinetic term is a 1-dimensional Chern-Simons term. Through rescaling of Y with √(κ), one can write this as S_CS = ∫ dr ( Y (∂_r + i A) Y - iκ A ). Because κ is quantized to be an integer in 3-dimensions (so that e^-I hence the quantum Chern-Simons theory is invariant under large gauge transformations), (<ref>) is invariant under U(1) large gauge transformations. The fields transform as [Under the reduction, there is a surviving U(1) subgroup of SU(2) which has elements g = exp( -i/2 f(r) x̂·σ). It is under these transformations the above action is invariant up to 2π n, n being the winding number of Λ.] Y e^-i Λ(r) Y, A A + ∂_r Λ = e^-iΛ (A - i ∂_r ) e^iΛ. There is a striking resemblance of the kinetic term of Y in (<ref>) to the kinetic term of a fermion ψ living in 1d. To make this similarity precise, we identify τ = r and associate to the complex scalar field Y(r) a Grassmanian valued field ψ(τ) Y(r) ⟷ψ(τ), and write the following Euclidean action S_CS = ∫ dτ( ψ(∂_τ + iA) ψ - i κ A ). 
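Before proceeding, note that the reduction identities quoted above are straightforward to spot-check numerically, which may be useful to a reader reproducing the appendix computation. The following Python sketch evaluates the Chern-Simons density for the spherically symmetric ansatz at a random point, using arbitrary smooth test profiles and finite differences, and compares it with the reduced expression 2/r²(φ₁φ₂' − φ₁'(φ₂−1) + A(φ₁²+φ₂²−1)); the two numbers should agree up to finite-difference error.

import numpy as np

def phi1(r): return np.sin(r)                 # arbitrary smooth test profiles
def phi2(r): return 0.5 * np.cos(r)
def Afun(r): return np.exp(-r)

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def gauge_field(x):
    """A_i^a(x) for the spherically symmetric ansatz; rows are spatial (i), columns gauge (a)."""
    r = np.linalg.norm(x)
    xx = np.outer(x, x) / r**2
    antisym = np.einsum('iak,k->ia', eps, x) * (phi2(r) - 1.0) / r**2
    return antisym + (np.eye(3) - xx) * phi1(r) / r + xx * Afun(r)

def cs_density(x, h=1e-5):
    """epsilon^{ijk} ( A_i^a d_j A_k^a + (1/3) eps^{abc} A_i^a A_j^b A_k^c ), with the
    derivatives taken by central finite differences."""
    A = gauge_field(x)
    dA = np.zeros((3, 3, 3))
    for j in range(3):
        step = np.zeros(3); step[j] = h
        dA[j] = (gauge_field(x + step) - gauge_field(x - step)) / (2.0 * h)
    term1 = np.einsum('ijk,ia,jka->', eps, A, dA)
    term2 = np.einsum('ijk,abc,ia,jb,kc->', eps, eps, A, A, A) / 3.0
    return term1 + term2

def reduced_density(r, h=1e-6):
    dphi1 = (phi1(r + h) - phi1(r - h)) / (2.0 * h)
    dphi2 = (phi2(r + h) - phi2(r - h)) / (2.0 * h)
    return 2.0 / r**2 * (phi1(r) * dphi2 - dphi1 * (phi2(r) - 1.0)
                         + Afun(r) * (phi1(r)**2 + phi2(r)**2 - 1.0))

x = np.array([0.3, -1.1, 0.7])
print(cs_density(x), reduced_density(np.linalg.norm(x)))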
Then, what we can say is that the classical solutions of the Chern-Simons theory that possess spherical symmetry can be viewed as the classical solutions of a fermionic theory, because the spherical symmetric configurations of the Chern-Simons action are governed by (<ref>), and the critical points of that action are guaranteed to be the critical points of the Chern-Simons theory that has spherical symmetry, due to Palais' symmetric criticality <cit.>. Moreover, we can map the reduced solutions in terms of Y(r) to solutions in terms of a Grassmanian ψ(τ). According to our dictionary, he equations in terms of Y and ψ are given as ( ∂_r + iA ) Y(r) = 0 = ( ∂_τ + iA ) ψ(τ) , ( ∂_r - iA ) Y(r) = 0 = ( ∂_τ - iA ) ψ(τ) , Y(r) Y(r) - 1 = 0 = ψ(τ) ψ(τ) - 1. The solutions are Y(r) = e^-i ∫^r d r' A(r') Y_0 ; ψ(τ) = e^-i ∫^τ dτ' A(τ') ψ_0 §.§ Some Aspects of the Finite Temperature Quantum Mechanical Model The action in (<ref>) is special in that it resembles its 3-dimensional counterpart, which is a Chern-Simons theory with fermions <cit.>. The quantum theory based on the 1-dimensional action contains interesting phenomena such as the analog of parity anomaly in (2+1)-dimensional fermions coupled to Chern-Simons theory. Moreover, the theory can be solved to all orders in perturbation theory, and the interactions shift the level as <cit.> κκ - 1/2. For the same model with N_f fermions all with mass m, the shift would be κκ - 1/2m/|m| N_f. In the model with N_f fundamental fermions, the exact effective action at finite temperature is computed to be <cit.> Γ[A]_β = N_f log{cos(a/2) - i tanh( β m/2) sin(a/2) } . Where a ≡∫_0^β dτ A(τ). Observe that for (<ref>), we have m=0,N_f=1, & β∞. The effective action is then: Γ[A]_β∞ = -i/2∫_0^∞ dτ A(τ), which just says that the level of the theory (<ref>) is shifted as in (<ref>). It is interesting that for the massless case, even if we don't take the zero temperature limit, the effective action would be almost the same, apart from a shift in the upper integral bound. § DIMENSIONAL REDUCTION AS A QUANTUM DUALITY AT LARGE Κ We will try to establish a relation between the 3-dimensional quantum Chern-Simons theory and the quantum theory of the reduced action (<ref>). Note that the classical solutions of the reduced theory will solve the equations of the 3-dimensional Chern-Simons theory, however, the converse is not true. The solutions of the reduced theory do not exhaust all of the solutions of the 3-dimensional theory, because not all solutions have to be spherically symmetric in 3d. Since some space of the classical solutions of the two agrees, one would expect an approximate equality at the semiclassical regime, which corresponds to the large κ limit of both (<ref>) and (<ref>). It turns out, and we will show this below by explicit computation, that the semiclassical evaluation of both theories agrees exactly up to an irrelevant scaling of the partition function. For a starter, we will study a toy model to recall the stationary phase approximation. Suppose we are interested in the following integral which defines a 0-dimensional QFT: Z = ∫ dx e^-f(x). Let us denote the space of solutions to df/dx = 0 by ℳ. For x_i ∈ℳ, one expands f(x) to second order in the parameter δ x = x-x_i f(x) = f(x_i) + 1/2 f”(x_i) δ x^2, then one can approximate the partition function by summing over all x_i ∈ℳ Z ≈∑_x_i ∈ℳ e^-f(x_i)∫_B(x_i) d[δ x] e^-1/2 f”(x_i) δ x^2, with B(x_i) being some neighborhood of the classical point x_i. 
If B(x_i) contains the whole of the real line, then one can perform the Gaussian integral, but the expansion to second order only holds for small enough δ x. One can avoid this pitfall by considering the limit in which a parameter appearing in the classical action f(x) becomes very large, so that the Gaussian is sharply peaked around the minima and decays quickly; the integral over B(x_i) is then reasonably close to the integral over the entire real line. In that limit, the semiclassical limit, one has Z ≈∑_x_i ∈ℳ e^-f(x_i)( f”(x_i) )^-1/2. Now, we will extend this to the quantum Chern-Simons theory, a 3-dimensional QFT. The Euclidean SU(2) Quantum Chern-Simons Theory over a 3-manifold ℳ is defined by the path integral Z = ∫𝒟 A exp( -I[A] ) = ∫𝒟 A exp( iκ/4π∫_ℳ CS(A) ), with CS(A) given as in (<ref>), and A ∈Ω^1(ℳ) ×𝔰𝔲(2). As discussed in the previous section, the quantum theory is gauge invariant provided κ∈ℤ. To carry out the semiclassical approximation, we first need to determine the space of stationary points of I, the Chern-Simons action. The stationary points are given by the flatness condition: F_ij^a = 0, which is solved by pure gauges A(x) = g(x) d g^-1(x) with g(x) ∈ SU(2). Hence, the solution space of the Chern-Simons theory is given by ℳ_CS = {A(x) = g(x) d g^-1(x) | g∈ SU(2)}, which is isomorphic to the space of gauge transformations 𝒢. The isomorphism is established by the pure gauge connections: 𝒜 : 𝒢 ↦ℳ_CS g(x) ↦𝒜 = g(x) d g^-1(x). If one expands I around a classical point 𝒜∈ℳ_CS to second order, one gets Z ≈∫_ℳ_CS𝒟𝒜 e^-I[𝒜]( I”[𝒜] )^-1/2. Here, we need to ensure that I”[𝒜] is a sharply peaked "Gaussian" in the field space so that the integral, analogous to the one in (<ref>), is reasonably close to the exact result. This is achieved in the large κ limit, because then I”[𝒜] is sharply peaked around the classical solution 𝒜. The stationary phase approximation of the SU(2) Chern-Simons theory was carried out in <cit.>, and it was shown that the partition function is, as expected, a topological invariant of the 3-manifold over which it is defined. However, our motivation for studying the semiclassical regime is different from that of <cit.>: we are not interested in explicitly evaluating the functional determinant appearing in (<ref>); what we hope to accomplish is to determine the precise relation between the large κ limit of the 3-dimensional quantum Chern-Simons path integral and the large κ limit of the path integral based on the action (<ref>). Let us return to (<ref>). The functional determinant is given by ( I”[𝒜] ) = ∫𝒟ωexp( iκ/4π∫ (ω∧ d_𝒜ω ) ), where ω is an adjoint valued 1-form and d_𝒜ω = d ω + [𝒜, ω ] is the gauge covariant derivative of ω with respect to the flat background 𝒜. Since ω is adjoint-valued, the functional integral on the right, and hence the determinant of I”[𝒜] evaluated at a classical configuration 𝒜, is gauge invariant. The gauge invariance of the determinant is important for our purpose, which is to establish the duality between a theory in 3d and its 1d reduced version at large κ. Now, we will use to our advantage the fact that the integral (<ref>) is restricted to the solution space ℳ_CS. That means 𝒜 = g(x) d g^-1(x) for some g(x) ∈ SU(2), and hence -I[𝒜] = iκ/4π∫_ℳ CS(𝒜) = 2π i κ n, with n given as in (<ref>). So, the value of I depends only on the topological sector in which 𝒜 lives. In fact, e^-I[𝒜] =1 due to the quantization of κ. 
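Before continuing with the gauge-invariance argument for the determinant, it may help to see the toy-model saddle-point formula at work numerically. The following sketch uses an illustrative double-well action f(x) = κ(x^2-1)^2 (our own test choice, not taken from the text), keeps explicit the √(2π) factor that the formula above absorbs into the normalization, and sums only over the minima of f; the agreement improves as κ grows, which is the regime analogous to the large κ limit used below.

import numpy as np
from scipy.integrate import quad

def check(kappa):
    f   = lambda x: kappa*(x**2 - 1)**2          # double-well "action"
    fpp = lambda x: kappa*(12*x**2 - 4)          # f''(x)
    minima = [-1.0, 1.0]                         # minima of f (df/dx = 0, f'' > 0)
    exact  = quad(lambda x: np.exp(-f(x)), -np.inf, np.inf)[0]
    saddle = sum(np.exp(-f(x))*np.sqrt(2*np.pi/fpp(x)) for x in minima)
    return exact, saddle

for kappa in [1, 5, 25, 125]:
    exact, saddle = check(kappa)
    print(f"kappa={kappa:4d}  exact={exact:.6f}  saddle={saddle:.6f}  "
          f"rel.err={abs(exact - saddle)/exact:.2e}")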
Moreover, the determinant of I”[𝒜] is also gauge invariant, so that depends only on the topological sector in which 𝒜 lives as well. Therefore, one can choose a representative flat connection in each sector, call it 𝒜_n ∈ℳ_CS^n, and one can write the full integral as a discrete sum over n and an integral in each sector ℳ_CS^n. The gauge invariant functionals in each sector can be taken out of the integral over ℳ_CS^n, that is to say Z ≈∫_ℳ_CS𝒟𝒜 e^-I[𝒜]( I”[𝒜] )^-1/2 = ∑_n = -∞^∞ e^-I[𝒜_n]( I”[𝒜_n] )^-1/2∫_ℳ_CS^n 𝒟𝒜_n = ∑_n = -∞^∞ e^-I[𝒜_n]( I”[𝒜_n] )^-1/2vol(ℳ_CS^n ). Now, we get back to the point where we've chosen a representative flat connection in each sector ℳ_CS^n. For each n, we can choose the representative gauge field 𝒜_n ∈ℳ_CS^n such that it depends only on the coordinate r = (x_ix_i)^1/2. Denoting ℳ_r to be the space of flat connections that depends only on r and the subspace ℳ_r^n of ℳ_r to be those flat connections that have winding number n, we can recast the sum in (<ref>) as Z ≈∑_n=-∞^∞ e^-I[𝒜_n(r)]( I”[𝒜_n(r)] )^-1/2vol(ℳ_CS^n ), with 𝒜_n(r) ∈ℳ_r^n for each n. We will insert 1 into this sum in the following way Z ≈∑_n=-∞^∞ e^-I[𝒜_n(r)]( I”[𝒜_n(r)] )^-1/2vol(ℳ_CS^n ) 1/vol(ℳ_r^n )∫_ℳ_r^n𝒟𝒜_n(r). Now, one can take the exponential and the determinant inside the integral over ℳ_r^n, because they are gauge invariant and hence taking them into the integral will not affect the result: Z ≈∑_n=-∞^∞vol(ℳ_CS^n )/vol(ℳ_r^n )∫_ℳ_r^n𝒟𝒜_n(r) e^-I[𝒜_n(r)]( I”[𝒜_n(r)] )^-1/2. If the volume of the solution space of flat connections ℳ_CS^n, and that of the solution space of flat connections possessing spherical symmetry ℳ_r^n are independent of n, then one can take them out of the sum over n. Let us assume, for the moment, that this is the case, and take them out of the sum. We will show later that they are indeed independent of n. After taking them out of the sum, they can be fed into the normalization of the path integral, so we can drop them. Hence, one has Z ≈∑_n=-∞^∞∫_ℳ_r^n𝒟𝒜_n(r) e^-I[𝒜_n(r)]( I”[𝒜_n(r)] )^-1/2 ≈∫_ℳ_r𝒟𝒜(r) e^-I[𝒜(r)]( I”[𝒜(r)] )^-1/2, where we combined the integral over the n-th sector and the discrete sum into a single integral over the solution space of spherically symmetric flat connections ℳ_r. Now we argue that the right-hand side is the semiclassical evaluation of the path integral of the quantum Chern-Simons theory restricted to the spherically symmetric domain, that is Z_spherical≡∫_spherical𝒟 A(r) e^-I[A(r)]≈∫_ℳ_r𝒟𝒜(r) e^-I[𝒜(r)]( I”[𝒜(r)] )^-1/2, where A(r) is a gauge field configuration that depends only on the r-coordinate but is not necessarily a solution to the equations of motion. On the other hand 𝒜(r) is a spherically symmetric flat gauge field, so it is in ℳ_r. For the restricted path integral, the full solution space is given by ℳ_r, because the main integral is restricted only to spherically symmetric configurations. Thus it is easy to see that the semiclassical evaluation is really the integral over the solution space ℳ_r. The right-hand side, by (<ref>), is equal to the semiclassical evaluation of the ordinary quantum Chern-Simons theory, without a restriction to the spherically symmetric configurations. Therefore, we conclude that Z ≅ Z_spherical, where ≅ means that the semiclassical (large κ) limit of the path integral on the left-hand side, which is a 3-dimensional QFT, is exactly the same as the semiclassical limit of the path integral on the right-hand side, which we will argue is a 1-dimensional QFT. 
We can generalize our argument for the path integral of any gauge invariant functional ℱ[𝒜] over the moduli space of the Chern-Simons theory. One has ∫_ℳ_CS𝒟𝒜ℱ[𝒜] = ∑_n=-∞^∞ℱ[𝒜_n] ∫_ℳ_CS^n𝒟𝒜_n = ∑_n = -∞^∞ℱ[𝒜_n] vol(ℳ_CS^n) ∫_ℳ_r𝒟𝒜(r) ℱ[𝒜(r)] = ∑_n=-∞^∞ℱ[𝒜_n(r)] ∫_ℳ_r ^n𝒟𝒜_n(r) = ∑_n = -∞^∞ℱ[𝒜_n(r)] vol(ℳ_r^n). In showing that the path integral of ℱ over ℳ_CS is the same, up to normalization, as the path integral of ℱ over ℳ_r, we need to assume that the volumes of the respective solution spaces are independent of n so that one can take them out of the discrete sum. This is essentially the same argument that we carried out for the special case ℱ[𝒜] = e^-I[𝒜]( I”[𝒜] )^-1/2. Consequently, we can show that the two lines give the same results apart from a constant scaling, for a general gauge invariant functional ℱ[𝒜]. 1/vol(ℳ_CS) ∫_ℳ_CS𝒟𝒜ℱ[𝒜] = ∑_n=-∞^∞ℱ[𝒜_n] = 1/vol(ℳ_r )∫_ℳ_r 𝒟𝒜ℱ[𝒜]. That this equality holds for an arbitrary gauge invariant functional implies, in particular, that the observables of the 3-dimensional theory must reduce, in the large κ limit, to the observables of the 1-dimensional theory, because the observables are gauge invariant. In Chern-Simons gauge theory, the only gauge invariant observables are the Wilson loop operators, defined by W_R(C) ≡_R P exp( ∮_C A ), with _R the trace in the representation R, P is the path ordering operator, and C is a curve over which the extended operator is defined. The correlation function of some product of Wilson lines is defined as ⟨∏_i W_R_i(C_i) ⟩ = ∫𝒟 A e^-I[A]∏_i W_R_i(C_i). Since W_R(C) is gauge invariant, in the large κ limit we expect these correlation functions to reduce to the correlation functions of the 1-dimensional theory. Finding the corresponding observable to the Wilson operators, in the reduced theory, is an open problem. The conclusion is that the semiclassical evaluation of the quantum Chern-Simons theory can be dealt with by considering only the classical solutions that possess spherical symmetry. The path integral and the observables of both theories agree exactly in the semiclassical regime. The most general form of the gauge connection that possesses spherical symmetry is given by the three-dimensional version of the multi-instanton ansatz of Witten <cit.>, which we employed in the previous sections. Moreover, the reduced action under the spherically symmetric ansatz reads S_CS = κ∫ dr ( Y (∂_r + i A) Y - iA ). And it follows from (<ref>) that ∫𝒟 A exp( iκ/4π∫ CS(A) ) ≅∫𝒟 A 𝒟 Y 𝒟Yexp( - κ∫ dr ( Y (∂_r + i A) Y - iA ) ), and, by equality, we mean that the large κ limit of both sides agrees exactly. Now we take up the issue of showing that the volume of either ℳ_CS^n or ℳ_r^n is independent of n. We argue as follows: Take n=1 and let 𝒜_1 ∈ℳ_CS^1. Homotopically, all other elements in ℳ_CS^1 can be generated by the orbit of 𝒜_1 under the action of small gauge transformations, living in ℳ_CS^0 [To be more precise, the group of small gauge transformations is 𝒢_0, and it can be mapped onto ℳ_CS^0 by g_0 ↦ g_0 d g_0^-1. Since the two spaces are isomorphic, we will abuse terminology and say ℳ_CS^0 when we actually mean 𝒢_0 for small gauge transformations.]. Take 𝒜_1 = g_1 d g_1^-1 so one can write ℳ_CS^1 as ℳ_CS^1 = {𝒜 = g'_1 d (g'_1)^-1 | g'_1 = g_0 g_1 & g_0 ∈ℳ_CS^0 }. Now, consider ℳ_CS^n with n≠ 1. An element in ℳ_CS^n can be taken, for example, as g_n = (g_1)^n [For negative n, one needs to choose a gauge function g_-1 with winding number -1 and take the n-th power (g_-1)^n which has winding number -n.] 
, and all the other elements can be generated by the orbit of g_n under the action of ℳ_CS^0. That is to say ℳ_CS^n = {𝒜 = g'_n d (g'_n)^-1 | g'_n = g_0 g_n & g_0 ∈ℳ_CS^0 }. Since both ℳ_CS^1 and ℳ_CS^n are generated by the action of the same set of gauge transformations, we conclude that vol(ℳ_CS^n) is the same for all n (note that this argument relies on the group structure of ℳ_CS^0). The same argument applies for ℳ_r^n because, for all n, the space ℳ_r^n will be generated by the orbit of the same group of transformations ℳ_r^0. Let us summarize the results of this section in a table In a forthcoming paper, we shall study the semiclassical theory in the gravitational setting in more detail. § SL(2,C) CHERN-SIMONS THEORY OR DE SITTER GRAVITY For SL(2,ℂ) Chern-Simons gauge theory, the connection is a doublet of the form (𝒜, 𝒜 ) = (𝒜^a t^a, 𝒜̅^a t^a). The action is given by (we are using conventions from <cit.>) : I = t/4π∫_ℳ( 𝒜∧ d 𝒜 + 2/3𝒜∧𝒜∧𝒜) - t/4π∫_ℳ( 𝒜∧ d 𝒜 + 2/3𝒜∧𝒜∧𝒜). This action is equivalent to the action of de Sitter Gravity in the first-order formalism if one makes the identification: κ = 1/4G_N ; 𝒜^a = ω^a + i e^a ; 𝒜^a = ω^a - i e^a , where e_i^a is the dreibein and ω_i^a is the spin connection. The equations of motion are ℱ_ij^a = 0 = ℱ_ij^a, with ℱ = d_𝒜𝒜 ; ℱ = d_𝒜𝒜, with the covariant derivative d_𝒜𝒜 = d 𝒜 + 𝒜∧𝒜 ; d_𝒜𝒜 = d 𝒜 + 𝒜∧𝒜 On the dS_3 gravity side, the equations of motion give Einstein's equations with a positive cosmological constant (with the radius of de Sitter space taken to be 1) in the first-order formalism. In this theory, let us employ an extension of the spherically symmetric ansatz 𝒜 = t^a [ ε_iakx_k/r^2 (φ_2 - 1) + δ^⊥_ia/rφ_1 + x_ix_a/r^2 A ] dx^i, 𝒜 = t^a [ ε_iakx_k/r^2 (φ_2 - 1) + δ^⊥_ia/rφ_1 - x_ix_a/r^2A] dx^i. We will set t = iκ and t = -iκ. After reducing the action, and dropping the boundary terms ∫ dr ∂_rφ_1 and ∫ dr ∂_rφ_1, we get: S = i κ∫ dr ( φ_1 φ_2' - φ_1' φ_2 + A (φ_2^2 + φ_1^2 - 1) ) - i κ∫ dr ( φ_1 φ_2' - φ_1' φ_2 - A (φ_2^2 + φ_1^2 - 1) ). Define the complex scalar fields Y_1 = φ_1 + i φ_2 ; Y_2 = φ_1 - i φ_2, Y_1 = φ_1 - i φ_2 ; Y_2 = φ_1 + i φ_2, which can be used to recast the reduced action as S = κ∫ dr ( 1/2 (Y_1 Y'_1 - Y'_1 Y_1) + A (Y_1 Y_1 -1) ) + κ∫ dr ( 1/2 (Y_2 Y_2' - Y'_2 Y_2) + A (Y_2 Y_2 -1) ) = κ∫ dr ( Y_1 (∂_r + iA) Y_1 - i A ) + κ∫ dr ( Y_2 (∂_r + iA) Y_2 - i A) . With the complex scalar fields Y_1,Y_2, we associate Grassmanian valued fields ψ_1,ψ_2 Y_1(r), Y_2(r) ⟷ψ_1(τ),ψ_2(τ) , after which we have S = ∫ dτ( ψ_1 ( ∂_τ + i A ) ψ_1 - iκ A ) + ∫ dτ( ψ_2 ( ∂_τ + i A ) ψ_2 - iκA), where κ is absorbed in the definitions of the Grassmanian fields.. What we see is that through dimensional reduction of 3-dimensional dS-gravity, we get two copies of a quantum mechanical model of fermions coupled to gauge connections A and A. In the 3-dimensional theory, we can use the Schwinger gauge x_i 𝒜_i^a = x^a (A - A) = 0, with which the action can be written as S = ∫ dτ( ∑_i = 1^N_f=2ψ_i (∂_τ + iA) ψ_i - 2 iκ A ). So, in the gauge theory formalism of dS_3 gravity, spherically symmetric configurations are governed by a first-order action that is very similar to a fermionic action in Euclidean formalism. But now, the coefficient of A is 2κ instead of κ. This was unexpected a priori because now the large-gauge invariance of the corresponding quantum theory requires κ to be a half-integer, as opposed to being an integer. But in 3-dimensions, κ is quantized to be an integer. 
Restricting κ to an integer does not break in any way the large gauge invariance of (<ref>), but it is interesting that fractional Chern-Simons levels are allowed in the reduced theory. Another way to view the situation arising from the factor 2 in front of the A term is to scale A → A/2. The fundamental charge of the fermionic fields is then halved, because the covariant kinetic term becomes ψ(∂_τ + i/2 A ) ψ, which means that each of the fermions carries charge 1/2 under the local U(1) gauge symmetry. That is to say, under the gauge transformation ψ' = e^i/2Λψ, A' = A + ∂_τΛ, the action changes by -2π i κ n, where n = 1/2π∫ dτ∂_τΛ is the winding number of the large-gauge transformation, and now κ is not fractional. One other place where this reduction may be potentially useful is in understanding the relation between dS_3 gravity and SYK models <cit.>. § LARGE Κ LIMIT OF SL(2,C) QUANTUM CHERN-SIMONS THEORY Let us study the large κ limit <cit.>. We will draw on the conclusions of section <ref>. The path integral is given as Z = ∫𝒟𝒜𝒟𝒜exp{iκ/4π∫_ℳ CS(𝒜) - iκ/4π∫_ℳ CS(𝒜) }, with CS(𝒜) = ( 𝒜∧ d 𝒜 + 2/3𝒜∧𝒜∧𝒜), CS(𝒜) = ( 𝒜∧ d 𝒜 + 2/3𝒜∧𝒜∧𝒜). The integrals decompose and the semiclassical evaluation gives Z ≈ ∑_n= -∞^∞( ∫𝒟ωexp( iκ/4π∫_ℳ ( ω∧ d_𝒜_nω ) ) )^-1/2 ∑_m= -∞^∞( ∫𝒟ωexp( -iκ/4π∫_ℳ ( ω∧ d_𝒜_mω) ) )^-1/2, where 𝒜_n and 𝒜_m are flat connections, so that ℱ = d_𝒜_n𝒜_n = 0 ; ℱ = d_𝒜_m𝒜_m = 0, and their winding numbers are n and m, respectively. The same argument that was carried out for SU(2) Quantum Chern-Simons theory at large κ can be carried out here to show that: Z ≅ Z_spherical×Z_spherical, with Z_spherical = ∫𝒟 A 𝒟 Y_1 𝒟Y_1 exp( -κ∫ dr ( Y_1 (∂_r + iA) Y_1 - iA ) ) , Z_spherical = ∫𝒟A𝒟 Y_2 𝒟Y_2 exp( -κ∫ dr ( Y_2 (∂_r + iA) Y_2 - iA) ). Of course, Z can be decomposed into two parts to begin with, since 𝒜 and 𝒜 do not mix with each other. In that sense, the result we got from SL(2,ℂ) Chern-Simons theory is essentially a doublet of the result we got from SU(2) Chern-Simons theory. § DIMENSIONAL REDUCTION OF EUCLIDEAN SU(2) CHERN-SIMONS HIGGS THEORY In Euclidean space, we consider the action S = iκ/8π g_YM^2∫ d^3x ε^ijk( A_i^a ∂_j A_k^a + ε^abc/3 A_i^a A_j^b A_k^c ) + 1/g_YM^2∫ d^3x ( 1/2 D_i Φ^a D_i Φ^a + V(Φ^a) ), where Φ^a is in the adjoint representation of SU(2), and we have D_i Φ^a = ∂_i Φ^a + ε^abc A_i^b Φ^c. We take the potential to be the super-renormalizable Higgs potential [We will take the renormalizable (Φ^aΦ^a)^3 term into consideration in section <ref>. ] V(Φ^a) = λ/4 ( Φ^a Φ^a - v^2)^2. The field equations are 0 = δ S/δΦ^a⟹ 0= - D_i D_i Φ^a + ∂ V/∂Φ^a, 0 = δ S/δ A_i^a⟹ 0 = iκ/4π g_YM^2ε_ijk F_jk^a + 1/g_YM^2ε^abcΦ^b D_i Φ^c. Under a gauge transformation, the Higgs term is invariant. The Chern-Simons term changes as 1/g_YM^2δ I = -2π i κ/g_YM^2 n ; n∈ℤ, with n the winding number of the gauge transformation, given by (<ref>). For the corresponding quantum theory to be gauge invariant, we must impose the condition k ≡κ/g_YM^2∈ℤ. Are there monopole solutions in the theory defined by (<ref>)? We know that if the kinetic term were the 3d Yang-Mills term, then the corresponding Yang-Mills-Higgs action could be considered as the energy functional of a 3+1 dimensional Georgi-Glashow model, which admits the monopoles known as 't Hooft-Polyakov monopoles. On another route, one could take the 3d Yang-Mills-Higgs action to be a 3+0 dimensional Euclidean theory, where the solutions would now be, strictly speaking, 't Hooft-Polyakov instantons. 
It is well known in the literature on topological solitons <cit.> that there are finite-energy static monopole solutions in 3+1 dimensions (or finite Euclidean action instanton solutions in 3+0 dimensions), with a corresponding Bogomol'nyi bound on the energy (the Euclidean action). In the limit where the Higgs potential vanishes, called the BPS limit, one can saturate the bound if the fields solve the BPS equations 1/2ε_ijk F_jk^a = D_i Φ^a, and these solutions are called BPS monopoles (instantons). The energy (Euclidean action) of the monopoles (instantons) for the BPS solutions is proportional to the topological charge of the soliton, and inversely proportional to the coupling constant g_YM, a characteristic scenario in topological solitons. Having given a brief review of what would have happened if the gauge sector had a Yang-Mills term, we now get back to our situation. The equations of motion of the Chern-Simons Higgs theory from the variation of A_i^a can be written as <cit.> B_i^a Φ^a = 0, with B_i^a = 1/2ε_ijk F_jk^a. This equation says that the magnetic field is everywhere orthogonal to the Higgs field, in the sense of the quadratic form on the Lie algebra. If we consider 't Hooft's definition of the Abelian field strength in the direction of symmetry-breaking within the context of Chern-Simons Higgs theory, ℱ_ij = ℱ_ij^(1) + ℱ_ij^(2), with ℱ_ij^(1) = Φ^a/(Φ^b Φ^b)^1/2 F_ij^a, ℱ_ij^(2) = - ε^abcΦ^a/(Φ^a Φ^a)^3/2 D_i Φ^b D_j Φ^c, we see from the equations of motion that ℱ^(1) = 0 for any solution. Since the magnetic flux of the monopole [In the context of Chern-Simons Higgs theory, we will abuse terminology and say monopoles when we actually mean instantons.] is given by the surface integral of ℬ_i = 1/2ε_ijkℱ_jk on a sphere at infinity, and since ℬ_i^(1)≡1/2ε_ijkℱ_jk^(1) is 0 everywhere, the only way to get a monopole with non-trivial magnetic flux (hence, nontrivial topological charge) is to make sure that ℬ_i^(2)≡1/2ε_ijkℱ_jk^(2) is non-vanishing at the boundary, so that the surface integral of ℬ = ℬ^(1) + ℬ^(2) is nonzero. But, if ℬ^(2) is to be non-vanishing at infinity, then D_i Φ^a must be non-vanishing at infinity. Observe that the Euclidean action of the monopole solution goes like S_E ∼∫ d^3x (D_i Φ^a)^2 + ⋯, at large |x|. But this action cannot converge if we want to construct a monopole solution with non-trivial flux at infinity, because for such a solution we necessarily have D_i Φ^a ∼ O(1/r), which means S_E ∼ 4π∫ dr r^2 O(1/r^2) + ⋯→∞, that is, S_E diverges linearly in r. So, it appears that the finiteness of the Euclidean action and the requirement of a non-trivial flux at infinity are in conflict with each other: one is necessarily absent in the presence of the other. Although we have shown that for Chern-Simons Higgs theories one cannot have finite action monopoles with non-trivial topological charge, let us proceed in any case to construct some solutions with non-trivial charge. For a monopole-instanton type of solution, we employ the spherically symmetric ansätze that are well studied in the literature (see <cit.> and the references therein) A^a = [ ε_iakx_k/r^2( φ_2(r) - 1 ) + δ^⊥_ia/rφ_1(r) + x_ix_a/r^2 A(r) ] dx^i, Φ^a = x^a/rΦ(r). The action reduces, upon integrating over the angles, to: S = iκ/g_YM^2∫ dr (φ_1 φ_2' - φ_1' φ_2 + A (φ_1^2 + φ_2^2 -1 ) ) + 4π/g_YM^2∫ dr ( r^2/2Φ'^2 + Φ^2 ( φ_1^2 + φ_2^2) ) + ∫ dr V , with ∫ dr V = πλ/g_YM^2∫ dr r^2 (Φ^2- v^2)^2 . For the moment, we will ignore the potential term. 
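As with the pure gauge sector, the reduction of the Higgs kinetic term can be spot-checked numerically. The sketch below (again with arbitrary smooth test profiles; Φ(r) = tanh r is just a convenient choice, not a solution) verifies the pointwise identity 1/2 D_i Φ^a D_i Φ^a = 1/2 Φ'^2 + (φ_1^2 + φ_2^2) Φ^2 / r^2, which is what produces the 4π∫ dr ( r^2 Φ'^2/2 + Φ^2(φ_1^2+φ_2^2) ) term after the angular integration.

import numpy as np

phi1 = lambda r: np.sin(r)
phi2 = lambda r: np.cos(2*r)
Arad = lambda r: r*np.exp(-r)
Higgs, dHiggs = lambda r: np.tanh(r), lambda r: 1/np.cosh(r)**2

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def gauge(x):
    r = np.linalg.norm(x)
    dperp = np.eye(3) - np.outer(x, x)/r**2
    return (np.einsum('iak,k->ia', eps, x)*(phi2(r) - 1)/r**2
            + dperp*phi1(r)/r + np.outer(x, x)*Arad(r)/r**2)

def higgs(x):
    r = np.linalg.norm(x)
    return x*Higgs(r)/r                  # Phi^a = x^a Phi(r)/r

def cov_deriv(x, h=1e-5):
    """D_i Phi^a = d_i Phi^a + eps^{abc} A_i^b Phi^c, returned as D[i, a]."""
    A, P = gauge(x), higgs(x)
    dP = np.zeros((3, 3))                # dP[i, a] = d_i Phi^a
    for i in range(3):
        step = np.zeros(3); step[i] = h
        dP[i] = (higgs(x + step) - higgs(x - step))/(2*h)
    return dP + np.einsum('abc,ib,c->ia', eps, A, P)

rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.uniform(0.5, 2.0, size=3)
    r = np.linalg.norm(x)
    lhs = 0.5*np.sum(cov_deriv(x)**2)
    rhs = 0.5*dHiggs(r)**2 + (phi1(r)**2 + phi2(r)**2)*Higgs(r)**2/r**2
    assert abs(lhs - rhs) < 1e-6
print("reduction of the Higgs kinetic term verified at random points")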
Converting the φ's into complex scalars Y as in (<ref>), one ends up with the action S = κ/g_YM^2∫ dr ( Y (∂_r + iA) Y - i A ) + 4π/g_YM^2∫ dr ( r^2/2Φ'^2 + Φ^2 Y Y ) + ∫ dr V. Because the kinetic term of Y is that of a fermionic field, we associate with the complex scalars Y the Grassmanian fields ψ and write [At the level of the classical equations of motion, this is no problem. One can pair the classical solutions of the two actions in a well-defined one-to-one manner.] S = 1/g_YM^2∫ dr ( 1/2 4π r^2 Φ'^2 + κψ(∂_r + iA) ψ + 4πΦ^2 ψψ - iκ A ) + ∫ dr V. If it were not for the r^2 factor in front of Φ'^2 term, this would be the action of a single scalar field in 1-dimension coupled to a Grassmanian valued field, and the U(1) symmetry of the fermionic action is gauged with a Chern-Simons kinetic term. Since Φ is real, it cannot couple to the U(1) gauge field A. Moreover, this is clearly in the Euclidean formalism because the fermionic kinetic term has no i in front of it. With the identification τ = r, and scaling the fields Ψ(τ) = (√(κ)/g_YM) ψ(τ), and 𝖷(τ) = (2√(π)/g_YM) Φ(τ) we can rewrite (<ref>) as S = ∫ dτ( τ^2/2𝖷̇^2 + Ψ (∂_τ + i A) Ψ - iκ/g_YM^2 A + g_YM^2/κ𝖷^2 ΨΨ) + ∫ dτ V(𝖷), with a 1-dimensional time-dependent Higgs potential of which the minima are shifted ∫ dτ V(𝖷) = λ g_YM^2/16π∫ dττ^2 ( 𝖷^2 - 4π v^2/g_YM^2)^2. Note that the strength of the interactions between 𝖷 and Ψ is controlled by the coupling constant g_YM^2/κ. This coupling constant cannot take arbitrary values because in 3d, κ/g_YM^2 is quantized. At this point, there are two questions one may ask about the action in (<ref>): * Let us forget about the 3-dimensional origin, and start with (<ref>), but without the explicit time dependencies arising in the action. If we want the corresponding quantum theory to be invariant under large gauge transformations of Ψ, A; we must ensure that k ≡κ/g_YM^2 is an integer, just as κ had to be an integer in 3d Chern-Simons theory to have a consistent quantum theory. But, the inverse of this parameter controls the strength of the interaction between 𝖷 and Ψ. That is, the interaction term is 1/k𝖷^2 ΨΨ. It would be interesting to study the effects of this interaction in the corresponding quantum theory. * The interaction between 𝖷 and Ψ indicates a possible supersymmetry in this model. To have a genuine supersymmetry, one would need to add a potential of the form 1/2 h'(𝖷)^2 ∼𝖷^6 so that the interaction terms can be written as 1/2 h'(𝖷)^2 + h”(𝖷) ΨΨ with h being the superpotential <cit.>. With the canonical kinetic terms of the bosonic and fermionic fields, this gives a supersymmetric quantum mechanics in the Euclidean formalism. It would be interesting to study soliton solutions of the 3-dimensional Chern-Simons Higgs theory with an additional sixth-order potential, whose spherically symmetric solitons would be described by a supersymmetric quantum mechanics. §.§ Chern-Simons Higgs Theory on a Curved Space To remove the time dependence in the kinetic term of 𝖷 in (<ref>), we define the theory on a manifold ℳ which has spherical symmetry, and whose dreibein is assumed to be given as e^a = [ -ε^a_ikx^k/r^2 e_1(r) + δ^⊥ a_i/r e_2(r) + x_ix^a/r^2 e_3(r) ] dx^i. With this ansatz, we are exploiting the structural similarity of the SU(2) gauge field and the dreibein in 3-dimensions. Now, observe that x^a ≡ e_i^a x^i = e_3 x^a, so we fix e_3 =1. 
From the dreibein, one finds the metric as (see the appendix for explicit calculations) g_ij = e_i^a e_j^b δ_ab = 1/r^2 ( (e_1^2+e_2^2) δ^⊥_ij + x_ix_j ) = (e_1^2+e_2^2)/r^2 δ_ij + ( 1 - (e_1^2+e_2^2)/r^2 ) x_ix_j/r^2 . We then consider the action S = iκ/8π g_YM^2∫_ℳ d^3x ε^ijk( A_i^a ∂_j A_k^a + ε^abc/3 A_i^a A_j^b A_k^c ) + 1/ g_YM^2 ∫_ℳ d^3x √(|g|)( 1/2 g^ij D_i Φ^a D_j Φ^a + V(Φ) ). The Chern-Simons term is topological, so it is not affected by the change in the geometry. Hence, it will reduce to the same action. To reduce the Higgs action, we need the determinant of g_ij and the contraction g^ij D_i Φ^a D_j Φ^a. One finds √(|g|) = |e| = 1/r^2 (e_1^2+e_2^2), g^ij D_i Φ^a D_j Φ^a = Φ'^2 + 2/(e_1^2+e_2^2) (φ_1^2 + φ_2^2) Φ^2. So the Higgs action reduces to S_H = 1/g_YM^2 ∫ d^3x |e| ( 1/2 g^ij D_i Φ^a D_j Φ^a + V(Φ) ) = 4π/g_YM^2∫ dr ( (e_1^2+e_2^2)/2Φ'^2 + (φ_1^2 + φ_2^2) Φ^2 + λ (e_1^2+e_2^2)/4 (Φ^2 - v^2)^2). Therefore, the total action can be written as S = iκ/g_YM^2∫ dr ( φ_1 φ_2' - φ_1' φ_2 + A (φ_1^2 + φ_2^2 - 1) ) + 4π/g_YM^2∫ dr ( (e_1^2+e_2^2)/2Φ'^2 + (φ_1^2 + φ_2^2) Φ^2 + λ (e_1^2+e_2^2)/4 (Φ^2 - v^2)^2). So, through the choice of the geometry over which the theory is defined, we have eliminated the r^2 factor in front of the kinetic term. Indeed, the flat space limit corresponds to (e_1^2+e_2^2) = r^2, for which we recover (<ref>). If one were to add the Einstein-Hilbert term to the action, making gravity dynamical, we would also need a spin connection ω, which we could take to be of the same form as the gauge field ansatz; the term (e_1^2+e_2^2) Φ'^2 would then be a nonlinear interaction between the Higgs field and gravity. We will not discuss this in this paper. Let us fix e_1^2+e_2^2 = r_0^2 for a constant r_0^2, so that in front of the kinetic term we just have a constant. The metric now becomes: g_ij = 1/r^2( r_0^2 δ_ij + ( 1 - r_0^2/r^2) x_ix_j ), and the line element reads ds^2 = dr^2 + r_0^2 dΩ^2. So the resulting space has the geometry of ℝ× S^2 with S^2 having fixed radius r_0. It makes sense that this choice of geometry removes the r^2 factor in front of Φ'^2, because that factor resulted from the angular integral, which gives 4π r^2; but if we fix the geometry such that all spheres have the same surface area, 4π r_0^2, then the reduced action comes with r_0^2, not r^2. Defining Y = φ_1 + i φ_2, Y = φ_1 - i φ_2, the action can be rewritten as S = κ/g_YM^2∫ dr ( Y (∂_r + iA) Y - i A ) + 4π/g_YM^2∫ dr ( r_0^2/2Φ'^2 + Φ^2 Y Y + λ r_0^2/4 (Φ^2 -v^2)^2 ). Associating Y with a Grassmanian field ψ one gets S = κ/g_YM^2∫ dr ( ψ(∂_r + iA) ψ - i A ) + 4π/g_YM^2∫ dr ( r_0^2/2Φ'^2 + Φ^2 ψψ + λ r_0^2/4 (Φ^2 -v^2)^2 ), and scaling the fields by Ψ = √(κ)/g_YMψ, 𝖷 = 2√(π)r_0/g_YMΦ, S = ∫ dτ( Ψ(∂_τ + i A) Ψ - iκ/g_YM^2 A ) + ∫ dτ( 1/2𝖷̇^2 + g_YM^2/r_0^2κ𝖷^2 ΨΨ + g_YM^2 λ/16π r_0^2( 𝖷^2 - 4π r_0^2 v^2/g_YM^2)^2 ). The expectation value of the Higgs field in the 3-dimensional theory is v, whereas that of the 1-dimensional reduced theory is √(4π r_0^2/g_YM^2) v, so it is scaled up. Note that the fermionic sector of this theory has a U(1) gauge symmetry, and for the quantum theory of this model to exhibit invariance under large-gauge transformations, we must have e^i2π n κ / g_YM^2 =1 ⟹ k ≡κ/g_YM^2∈ℤ, where n is the winding number of the large gauge transformation. In particular, as we noted earlier in this section, the interaction between 𝖷 and Ψ is controlled by the coupling constant 1/(r_0^2 k) for an integer k. This is an unusual interaction term, involving the Chern-Simons level in the denominator. 
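Returning briefly to the geometry used above, the statement that the dreibein ansatz with e_3 = 1 and e_1^2 + e_2^2 = r_0^2 produces the ℝ× S^2 metric can be checked directly. In the sketch below the value r_0 = 1.7 and the function α(r) are arbitrary test choices (only the combination e_1^2 + e_2^2 = r_0^2 matters); the code builds e_i^a at random points, forms g_ij = e_i^a e_j^a, and checks both the metric formula and the dr^2 + r_0^2 dΩ^2 structure of the line element.

import numpy as np

r0 = 1.7
alpha = lambda r: 0.3*r              # arbitrary; only e1^2 + e2^2 = r0^2 matters
e1 = lambda r: r0*np.cos(alpha(r))
e2 = lambda r: r0*np.sin(alpha(r))

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def dreibein(x):
    """e[i, a] for the ansatz with e_3 = 1."""
    r = np.linalg.norm(x)
    dperp = np.eye(3) - np.outer(x, x)/r**2
    return (-np.einsum('aik,k->ia', eps, x)*e1(r)/r**2
            + dperp*e2(r)/r + np.outer(x, x)/r**2)

rng = np.random.default_rng(2)
for _ in range(5):
    x = rng.uniform(0.5, 2.0, size=3)
    r = np.linalg.norm(x)
    e = dreibein(x)
    g = e @ e.T                                      # g_ij = e_i^a e_j^a
    g_claimed = r0**2/r**2*np.eye(3) + (1 - r0**2/r**2)*np.outer(x, x)/r**2
    assert np.allclose(g, g_claimed)
    # radial and tangential proper lengths: dr^2 + r0^2 dOmega^2
    n = x/r
    t = np.cross(n, [0.0, 0.0, 1.0]); t /= np.linalg.norm(t)
    assert np.isclose(n @ g @ n, 1.0)                # g(rhat, rhat) = 1
    assert np.isclose(t @ g @ t, r0**2/r**2)         # angular part scales as r0^2/r^2
print("metric and R x S^2 line element verified")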
Another observation one can make is that with a double-well potential, the theory is expected to have the usual quantum mechanical instanton solutions. So the quantum version of this theory seems very rich. Getting back to monopoles, we recall (<ref>) ℱ_ij^(2) = -ε^abcΦ^a/(Φ^a Φ^a)^3/2 D_i Φ^b D_j Φ^c. By the equations of motion, we will have ℱ_ij^(1)=0. Inserting our ansatz into this expression, we get (see the appendix) ℱ_ij^(2) = -ε_ijkx^k/r^2 (φ_1^2+φ_2^2), the magnetic field is thus ℬ_i^(2) = - x_i/r^2 (φ_1^2+φ_2^2). The flux integral at ∞ gives Φ_flux = -4π(φ_1^2+φ_2^2) = -4πY Y. So, Y(r) Y(r) is associated with an r-dependent magnetic charge of the monopole solution. In terms of the fermionic field Ψ, we have Φ_flux = -4π g_YM^2 /κΨΨ = - 4π/kΨΨ. §.§ Dimensional Reduction of the Chern-Simons Higgs Theory on M as a Supersymmetric Quantum Mechanics We get back to discussing the second question we posed above, which is, can we have supersymmetry in (<ref>)? In the Euclidean formalism, the action of a supersymmetric quantum mechanics with superpotential h(𝖷) reads <cit.>: S_E = ∫ dτ( 1/2(d𝖷/dτ)^2 + Ψd/dτΨ + 1/2 h'^2(𝖷) + h”(𝖷) ΨΨ), which is invariant under the supersymmetry transformations δ𝖷 = ϵΨ- ϵΨ , δΨ = ϵ( -d𝖷/dτ + h'(𝖷) ), δΨ = ϵ( + d 𝖷/dτ + h'(𝖷) ). In (<ref>), we have h”(𝖷) = g_YM^2/r_0^2 κ𝖷^2, but then we need a term of the form h'^2 ∼𝖷^6. We also have an extra Higgs potential in the action. We consider the BPS limit so that only the sixth-order term of the potential is nonzero, and choose the coefficient of the sixth-order term so that there is a supersymmetry in our theory. We thus modify (<ref>) as S = ∫ dτ( Ψ( d/dτ + i A ) Ψ - iκ/g_YM^2 A ) + ∫ dτ( 1/2( d 𝖷/dτ)^2 + g_YM^2/r_0^2 κ𝖷^2 ΨΨ + g_YM^4/18 r_0^4 κ^2𝖷^6 ). Therefore, we have h(𝖷) = g_YM^2/12r_0^2κ𝖷^4 = 1/12 r_0^2 k𝖷^4, as the superpotential. Again, it is interesting that the superpotential is proportional to 1/r_0^2 k. What kind of potential do we need to add to the 3d Chern-Simons Higgs theory so that the reduced action in the BPS limit is (<ref>)? We consider the following potential in 3-dimensions V = λ/4 (Φ^a Φ^a - v^2)^2 + K (Φ^a Φ^a)^3, where K is to be determined, and we ignore the double-well term for now. The dimensional reduction of the 6^th order term yields 1/g_YM^2∫_ℳ d^3x √(|g|) K (Φ^a Φ^a)^3 = 4π r_0^2 K/g_YM^2∫ dr Φ^6. Recalling the definition of 𝖷 in terms of Φ: 𝖷 = 2√(π)r_0/g_YMΦ, one gets 1/g_YM^2∫_ℳ d^3x √(|g|) K (Φ^a Φ^a)^3 = g_YM^4 K/(4π r_0^2)^2∫ dr 𝖷^6. We thus see that if K = 8π^2/9κ^2, then the dimensional reduction of the potential will yield just the right term to have a supersymmetry in the reduced action. So, in 3-dimensions, starting from S = iκ/8π g_YM^2∫_ℳ d^3x ε^ijk( A_i^a ∂_j A_k^a + ε^abc/3 A_i^a A_j^b A_k^c ) + 1/g_YM^2∫_ℳ d^3x √(|g|)( 1/2 g^ij D_i Φ^a D_j Φ^a + V(Φ) ), with V = λ/4 (Φ^a Φ^a - v^2)^2 + 8π^2/9κ^2 (Φ^a Φ^a)^3, one gets the following action in the BPS limit, upon dimensional reduction S = ∫ dτ( 1/2𝖷̇^2 + Ψ(∂_τ + i A) Ψ - ik A ) + ∫ dτ( 1/r_0^2 k𝖷^2 ΨΨ + 1/18 r_0^4 k^2𝖷^6 ), with k ≡κ/g_YM^2 an integer. This action governs the spherically symmetric monopole solutions in the theory defined by the action (<ref>). Thus, we see that the 't Hooft-Polyakov instantons on ℳ which has the geometry of ℝ× S^2 that possess spherical symmetry are effectively described by a supersymmetric quantum mechanics. 
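The algebra behind the choice of the sixth-order coefficient is short enough to confirm with computer algebra. The sketch below only checks that the superpotential h(𝖷) = 𝖷^4/(12 r_0^2 k) reproduces both the 𝖷^2 ΨΨ coupling and the 𝖷^6 term of the reduced action quoted above.

import sympy as sp

X, r0, k = sp.symbols('X r_0 k', positive=True)
h = X**4/(12*r0**2*k)                           # proposed superpotential

yukawa = sp.diff(h, X, 2)                       # coefficient of Psi-bar Psi
sixth  = sp.Rational(1, 2)*sp.diff(h, X)**2     # bosonic potential 1/2 h'(X)^2

assert sp.simplify(yukawa - X**2/(r0**2*k)) == 0
assert sp.simplify(sixth - X**6/(18*r0**4*k**2)) == 0
print("h(X) = X^4/(12 r_0^2 k) reproduces both interaction terms")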
This is interesting in that one may understand the quantization of spherically symmetric monopoles by studying a supersymmetric theory in 1d with a superpotential of the form h(𝖷) ∼𝖷^4. Let us discuss the monopoles in this theory. The addition of a sixth-order term in the potential does not change the fact that ℱ_ij^(1) in (<ref>) is 0 for all solutions to the equations of motion, nor does it change the expression for ℱ_ij^(2). But now, it has a very interesting interpretation due to supersymmetry. We have Φ_flux = -4π/kΨΨ≡ -4π/k F, where F is the fermion number operator. In the context of supersymmetric quantum mechanics, the Witten index <cit.> defined by ℐ = ( (-1)^F e^-β H), gives information about the ground state structure of the supersymmetric quantum theory. According to our dictionary, the flux of a monopole in Chern-Simons Higgs theory is related to the fermion number operator F, which in turn is related to the ℤ_2 grading operator (-1)^F, which in turn is related to the Witten index. § DIMENSIONAL REDUCTION OF EUCLIDEAN SL(2,C) CHERN-SIMONS HIGGS THEORY Let us consider what happens when we reduce the SL(2,ℂ) Chern-Simons Higgs theory through our ansatz. We consider S = t/8π g_YM^2∫_ℳ d^3x ε^ijk( 𝒜_i^a ∂_j 𝒜_k^a + ε^abc/3𝒜_i^a 𝒜_j^b 𝒜_k^c ) + 1/g_YM^2 ∫_ℳ d^3x √(|g|)( 1/2 g^ij𝒟_i Φ^a 𝒟_j Φ^a + V(Φ) ) + t/8π g_YM^2∫_ℳ d^3x ε^ijk( 𝒜_i^a ∂_j 𝒜_k^a + ε^abc/3𝒜_i^a 𝒜_j^b 𝒜_k^c ) + 1/g_YM^2∫_ℳ d^3x √(|g|)( 1/2 g^ij𝒟_i Φ^a 𝒟_j Φ^a + V(Φ) ), where the connection is a doublet of the form (𝒜, 𝒜) = (𝒜^a T^a, 𝒜^a T^a) and similarly for the adjoint field (Φ,Φ) = (Φ^a T^a, Φ^a T^a). The Higgs field is in the adjoint representation, hence 𝒟_i Φ^a = ∂_i Φ^a + ε^abc𝒜_i^b Φ^c, 𝒟_i Φ^a = ∂_i Φ^a + ε^abc𝒜_i^b Φ^c. The same story unfolds with the following ansatzë: 𝒜^a = [ (φ_2 - 1) ε_i^ akx_k/r^2 + φ_1 δ^⊥ a_i/r + A x_ix^a/r^2] dx^i, 𝒜^a = [ (φ_2 - 1) ε_i^ akx_k/r^2 + φ_1 δ^⊥ a_i/r - Ax_ix^a/r^2] dx^i, Φ^a = x^a/rΦ, Φ^a = x^a/rΦ, e^a = [ -ε^a_ikx^k/r^2 e_1 + δ^⊥ a_i e_2/r + x_ix^a/r^2 e_3 ] dx^i, where all of the ansatz fields depend only on r = (x_ix_i)^1/2. We take t = iκ and t = -iκ, and invariance under large gauge transformations enforces the parameter κ/g_YM^2 to be an integer. The action reduces to S = iκ/g_YM^2∫ dr ( φ_1 φ_2' - φ_1' φ_2 + A (φ_1^2 + φ_2^2 - 1) ) + 4π/g_YM^2∫ dr ( (e_1^2+e_2^2)/2Φ'^2 + (φ_1^2 + φ_2^2) Φ^2 + λ (e_1^2+e_2^2)/4 (Φ^2 - v^2)^2) - iκ/g_YM^2∫ dr (φ_1 φ_2' - φ_1' φ_2 - A (φ_1^2 + φ_2^2 - 1) ) + 4π/g_YM^2∫ dr ( (e_1^2+e_2^2)/2Φ'^2 + (φ_1^2 + φ_2^2) Φ^2 + λ (e_1^2+e_2^2)/4 (Φ^2 - v^2)^2). We define the complex scalars Y_1 = φ_1 + i φ_2 ; Y_2 = φ_1 - iφ_2, Y_1 = φ_1 - iφ_2 ; Y_2 = φ_1 + i φ_2. Fixing e_3 =1 and e_1^2 +e_2^2 = r_0^2, the action becomes S = κ/g_YM^2∫ dr ( Y_1 (∂_r + iA) Y_1 - iA ) + 4π/g_YM^2∫ dr ( r_0^2/2Φ'^2 + Φ^2 Y_1 Y_1 + λ r_0^2/4 (Φ^2 - v^2)^2 ) + iκ/g_YM^2∫ dr ( Y_2 (∂_r + iA) Y_2 - i A) + 4π/g_YM^2∫ dr ( r_0^2/2Φ'^2 + Φ^2 Y_2 Y_2 + λ r_0^2/4 (Φ^2 - v^2)^2). To the complex scalars Y_1,Y_2, we associate Grassmanian fields ψ_1,ψ_2; and consider the action S = 1/g_YM^2∫ dr ( 4π r_0^2 Φ'^2 + κψ_1 (∂_r + iA) ψ_1 + 8πΦ^2 ψ_1 ψ_1 + 4πλ r_0^2/4 (Φ^2 - v^2)^2 ) + 1/g_YM^2∫ dr ( 4π r_0^2 Φ'^2 + κψ_2 (∂_r + iA) ψ_2 + 8πΦ^2 ψ_2 ψ_2 + 4πλ r_0^2/4 (Φ^2 - v^2)^2) - iκ/g_YM^2∫ dr (A + A). 
Just as in the SU(2) case, we identify τ=r, and define Ψ_1(τ) = √(κ)/g_YMψ_1(τ) ; Ψ_2(τ) = √(κ)/g_YMψ_2(τ), 𝖷_1(τ) = 2√(π)r_0/g_YMΦ(τ) ; 𝖷_2(τ) = 2√(π)r_0/g_YMΦ(τ), A_1(τ) = A(τ) ; A_2(τ) = A(τ), to write the action as S = ∫ dτ∑_j=1^N_f=2( 1/2𝖷̇_j^2 + Ψ_j (∂_τ + i A_j) Ψ_j - iκ/g_YM^2 A_j + g_YM^2/r_0^2 κ𝖷_j^2 Ψ_j Ψ_j + V(𝖷_j) ). The new double-well potential, in which the expectation value of the Higgs field is scaled up by √(4π r_0^2/g_YM^2), is given as V(𝖷_j) = g_YM^2λ/16π r_0^2( 𝖷_j^2 - 4π r_0^2 v^2/g_YM^2)^2. In the 3-dimensional theory, we can use the Schwinger gauge x^i 𝒜_i^a = x^a (A - A) = 0. This changes the above theory in an interesting way: S = ∫ dτ∑_j=1^N_f =2( Ψ_j (∂_τ + i A) Ψ_j + 1/2𝖷̇_j^2 + g_YM^2/r_0^2κ𝖷_j^2 Ψ_j Ψ_j + V(𝖷_j) ) - 2iκ/g_YM^2∫ dτ A. The 2 in front of the 1d Chern-Simons term also appeared in the dimensional reduction of the SL(2,ℂ) pure Chern-Simons theory. We see that if k = κ/g_YM^2 is quantized to half-integers, then the Chern-Simons term is invariant under gauge transformations mod 2π m with m ∈ℤ. So we have a relaxed quantization that allows for fractional Chern-Simons level. Note that k is quantized to be an integer in 3d, so we need to choose k integer in the reduced action as well. This does not spoil the gauge invariance of (<ref>), yet it is interesting that one can allow for fractional Chern-Simons level in the reduced theory. On another route, we can scale A → A/2; then the fermion and Chern-Simons terms become I_Fermion + I_Gauge = ∫ dτ∑_j=1^N_f=2Ψ_j ( ∂_τ + i/2 A ) Ψ_j - iκ/g_YM^2∫ dτ A. The fermions have charge 1/2 under the U(1) gauge symmetry. The transformations that leave the action invariant up to 2π i m are Ψ→ e^iΛ/2Ψ, A → A - ∂_τΛ = e^iΛ/2 ( A - 2i∂_τ ) e^-iΛ/2. The gauge transformation of A is now unusual, because of the 2. Under a large gauge transformation of winding number n = 1/2π∫ dτ∂_τΛ, the action changes by δ I_Gauge = 2π i k n, so now the quantization condition is k ∈ℤ, but the fermions carry charge 1/2. So we have two descriptions: in the first, the fermions have charge 1 under U(1) and the parameter k is a half-integer; in the second, the fermions have charge 1/2 and k is an integer. §.§ The Reduced Action as a Supersymmetric Quantum Mechanics We modify the potential in (<ref>) as V(𝖷_j) = g_YM^2λ/16π r_0^2( 𝖷_j^2 - 4π r_0^2 v^2/g_YM^2)^2 + 1/18 r_0^4 k^2𝖷_j^6, so that in the BPS limit, we have the two superpotentials h_1(𝖷_1) = 1/12 r_0^2 k𝖷_1^4, h_2(𝖷_2) = 1/12 r_0^2 k𝖷_2^4. The 3d action is then S = iκ/8π g_YM^2∫_ℳ d^3x ε^ijk( 𝒜_i^a ∂_j 𝒜_k^a + ε^abc/3𝒜_i^a 𝒜_j^b 𝒜_k^c ) + 1/g_YM^2∫_ℳ d^3x √(|g|)( g^ij/2𝒟_i Φ^a 𝒟_j Φ^a + λ/4 (Φ^a Φ^a - v^2)^2 + 8π^2/9κ^2 (Φ^a Φ^a)^3 ) + iκ/8π g_YM^2∫_ℳ d^3x ε^ijk( 𝒜_i^a ∂_j 𝒜_k^a + ε^abc/3𝒜_i^a 𝒜_j^b 𝒜_k^c ) + 1/g_YM^2∫_ℳ d^3x √(|g|)( g^ij/2𝒟_i Φ^a 𝒟_j Φ^a + λ/4 (Φ^a Φ^a - v^2)^2 + 8π^2/9κ^2 (Φ^a Φ^a)^3 ). There are now two distinct Abelian field strengths ℱ_ij = ℱ_ij^(1) + ℱ_ij^(2) , ℱ_ij = ℱ_ij^(1) + ℱ_ij^(2) . The first ones will be zero by the field equations, and the second ones will have the form ℬ_i^(2) = - x_i/r^2 (φ_1^2 + φ_2^2), ℬ_i^(2) = - x_i/r^2 (φ_1^2 + φ_2^2). The associated fluxes are Φ_flux = - 4π/kΨ_1 Ψ_1, Φ_flux = - 4π/kΨ_2 Ψ_2. Hence the total flux reads Φ_flux + Φ_flux = -4π/k∑_iΨ_i Ψ_i. § SEMICLASSICAL LIMIT OF QUANTUM CHERN-SIMONS HIGGS THEORY By a semiclassical argument, we were able to show that the pure quantum Chern-Simons theory in the large κ limit is equivalent to the 1d theory obtained by reduction through spherical symmetry. 
How do things unfold for the Chern-Simons Higgs theory? Showing the analog of what we have shown for the pure Chern-Simons theory is considerably more involved. To sketch the level of involvement in carrying out the semiclassical evaluation of the Chern-Simons Higgs theory, we argue based on a 0-dimensional QFT. Consider the following path integral Z = ∫ dx dy e^-f(x,y). We denote the space of solutions of δ f = 0 by ℳ. For a pair (x_n,y_n) ∈ℳ, one has f(x,y) = f(x_n,y_n) + 1/2∂_i ∂_j f(x_n,y_n) δ x^i δ x^j, with i,j = 1,2 and δ x^i = x^i - x_n^i. One then takes the semiclassical limit Z ≈∑_(x_n,y_n)∈ℳ e^-f(x_n,y_n)∫_B(x_n,y_n) d[δ x^1] d[δ x^2] e^-1/2∂_i ∂_j f(x_n,y_n) δ x^i δ x^j. To carry out this integral in the semiclassical limit, one would need to diagonalize the Hessian H_ij = ∂_i ∂_j f(x_n,y_n), but this is easier said than done. First of all, one must ensure that det H ≠ 0. Even in that case, it is highly non-trivial to generalize this to the 3-dimensional Chern-Simons Higgs theory. For one thing, in the 0-dimensional theory the fields x,y do not carry any extra structure; they are just variables. On the other hand, for a QFT in 3 dimensions, the fields can have index structures allowed by the dynamical and internal symmetries that are proposed to be present in the theory. For example, in the 3d Chern-Simons Higgs theory, one has a Lie-algebra valued 1-form gauge potential A and an adjoint valued scalar field Φ. It is not clear how one can combine these fields with different index structures when attempting to diagonalize the functional Hessian ℋ = δ^2 S_CS-Higgs/δ𝒜δ𝒜, with 𝒜 representing the gauge and the Higgs fields collectively. But even before these discussions, one can make the following observation: in demonstrating that the 3d Chern-Simons theory is equivalent to the 1d reduced theory, an essential part of the argument was the assumption that the volumes of the solution spaces in a given topological sector n did not depend on the sector n. Because of this fact, we were able to drop the volume factors from the discrete sum over n. We later showed that the volumes of the solution spaces for the Chern-Simons theory were indeed independent of n, and the reason this happens in Chern-Simons theory is essentially that the solution space admits a group structure. Two flat connections in the same topological sector 𝒜_1, 𝒜_2 ∈ℳ_CS^n can always be related by a topologically trivial gauge transformation g_0 ∈𝒢_0, so that each topological sector can be generated by the orbit of a particular flat connection under the group 𝒢_0. It is easy to see that the solution space of Chern-Simons Higgs theory does not have the group structure described above. So, if one intends to show that the semiclassical limit of quantum Chern-Simons Higgs theory is equivalent to the semiclassical limit of the supersymmetric quantum mechanics, one needs a different approach. Even though we cannot say whether there is an exact equivalence between the quantum Chern-Simons Higgs theory and the reduced supersymmetric quantum mechanics, we can sketch an argument to show that they are at least approximately equal to each other. Here, by supersymmetric quantum mechanics, we actually mean the reduced action with only bosons (<ref>) (or (<ref>) for SL(2,ℂ)): a real and a complex field (two real and two complex fields for SL(2,ℂ)), with the complex bosons having the kinetic term of a fermion. 
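Before sketching that argument, the two-variable toy model introduced above can be made concrete numerically. The coupled double-well f(x,y) = κ[(x^2-1)^2 + (y-x)^2] is an illustrative choice of our own (not from the text); the 2π and det H factors that the schematic formula absorbs into the normalization are kept explicit, and only the minima of f are summed over (the critical point at the origin is a saddle of f and is excluded).

import numpy as np
from scipy.integrate import dblquad

def check(kappa):
    f = lambda x, y: kappa*((x**2 - 1)**2 + (y - x)**2)
    # critical points of f: (0,0) is a saddle; (1,1) and (-1,-1) are the minima
    minima = [(1.0, 1.0), (-1.0, -1.0)]
    def hessian(x, y):
        return np.array([[kappa*(12*x**2 - 4 + 2), -2*kappa],
                         [-2*kappa,                 2*kappa]])
    exact = dblquad(lambda y, x: np.exp(-f(x, y)),
                    -6, 6, lambda x: -6, lambda x: 6)[0]
    gauss = sum(np.exp(-f(x, y))*2*np.pi/np.sqrt(np.linalg.det(hessian(x, y)))
                for (x, y) in minima)
    return exact, gauss

for kappa in [2, 10, 50]:
    exact, gauss = check(kappa)
    print(f"kappa={kappa:3d}  exact={exact:.5f}  gaussian-sum={gauss:.5f}")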
At the level of classical equations of motion, one can associate the fermionic and the bosonic versions with a dictionary, but at the quantum theory, things would not work out smoothly because there is a big difference between a bosonic path integral and a fermionic path integral. The argument we sketch for establishing the approximate equivalence between (<ref>) and (<ref>) is as follows: First of all note that one can recast the classical solutions of (<ref>) as some subset of the classical solutions of (<ref>). Let us approximate the path integral of (<ref>) by summing over only the classical solutions that possess spherical symmetry Z_CS-Higgs = ∫𝒟 A 𝒟Φ e^-S_CS-Higgs[Φ, A] ≈∑_(Φ_cl, A_cl) ∈ℳ_CS-Higgs^r e^-S_CS-Higgs[Φ_cl,A_cl]( S_CS-Higgs”[Φ_cl, A_cl] )^-1/2. The determinant of the Hessian of the action is symbolic, and Φ_cl and A_cl are classical solutions of the action S_CS-Higgs, and ℳ_CS-Higgs^r denotes the solution space of the Chern-Simons Higgs theory with spherical symmetry. This is a weaker approximation than the one if we were to add over ℳ_CS-Higgs in the semiclassical regime. Now, since we are considering only the spherically symmetric solutions, the discrete sum will give the semiclassical approximation of the quantum theory based on the action (<ref>). Therefore, we write Z_CS-Higgs∼ Z_CS-Higgs + spherical. This is a weaker relation between a 3d Chern-Simons matter theory and its reduced version, but it nonetheless tells us that the dimensionally reduced action is not entirely unrelated to the main 3-dimensional theory at the quantum level. The same arguments can be sketched for the SL(2,ℂ) Chern-Simons Higgs theory. § CONCLUSIONS AND DISCUSSIONS Let us give an overview of our results and discuss some of their implications. In the first part of the paper, we focused on pure Chern-Simons gauge theories with the gauge groups SU(2) and SL(2,ℂ), defined on a manifold ℳ. We have shown that the dimensional reduction of these theories with spherical symmetric ansatzë imposed on the gauge fields reduces to 1-dimensional quantum field theories but with a fermionic kinetic term for a bosonic field. We discussed that at the classical level, one can construct a dictionary between the classical solutions of the 3d Chern-Simons theory, and those of a fermionic quantum mechanical model, which is essentially a Chern-Simons Dirac theory in 1d. This dictionary, however, does not extend to the quantum Chern-Simons theory because a path integral with bosonic variables differs from that with Grassmanian variables. However, as we have argued, one can establish a duality at the quantum level between the quantum Chern-Simons theory and the dimensionally reduced theory with bosons having a fermionic kinetic term. This is the statement of equation (<ref>). We further show that the observables of the 3d quantum Chern-Simons theory, which are given by products of Wilson loop operators, must reduce to some observables of the reduced 1-dimensional theory. In fact, one can read from our argument that the quantum expectation value of any gauge invariant functional must reduce, in the large κ limit, to some observable in the reduced quantum theory. To what kind of observables do the Wilson loops correspond in 1d is an open problem. This correspondence between two quantum theories at the semiclassical limit is reminiscent of the AdS/CFT duality that has been very influential in the past 25-30 years <cit.>. Our duality is in similar spirit to holography, yet there are differences. 
For one thing, in AdS/CFT, the correspondence is between a quantum gravity in AdS and a conformal field theory in the boundary of the AdS space. Our correspondence relates the quantum theory of 3d Chern-Simons theory and a 1d QFT for which the action can be obtained by imposing the spherical symmetry assumption on the gauge field of 3d Chern-Simons theory. One other difference is that AdS/CFT relates theories for which one lives in a space-time dimensionality one higher than the other. In our case, we related a 3d QFT with a 1d QFT, so the difference between the dimensions is 2. It is an outstanding problem to understand, via AdS/CFT, to which 2d CFT the reduced 1d QFT corresponds. Let us give a final comment about the duality established in section <ref>. The establishment of this duality relies crucially on the group structure of the solution space of Chern-Simons theory. Because of the group structure, we were able to take the volume factors (volume of the solution space) out of the discrete sum, which is what enabled us to drop them as they are overall constants that can be fed into the normalization of the path integral. It is not very hard to see that any quantum field theory based on an action for which the space of critical points of the action admits a group structure will allow similar dualities. An interesting and ambitious problem is then to find all QFTs for which the space of critical points of the action admits a group structure. For these theories, one can carry out the argument given in section <ref> to show that such QFTs would have exact duals, in the semiclassical limit, to lower dimensional theories whose actions would be given by the dimensional reduction of the master theory by a symmetry ansatz. Let us now talk about the second part of the paper, which was concerned with Chern-Simons Higgs theories and their monopoles. It is well known that in these theories monopoles with non-trivial flux must necessarily have divergent Euclidean action, as we demonstrated in section <ref>. Upon dimensional reduction by spherically symmetric ansatz for monopoles, we get a 1-dimensional action that governs these monopoles. By choosing a curved background with the geometry of ℝ× S^2, we were able to remove the explicit r-dependencies (or time dependencies if one were to view the reduced action as a quantum mechanical action) from the reduced action. Moreover, we discussed that one could associate the classical solutions of the reduced action and the classical solutions of a fermionic theory coupled to a real scalar field. In this associated theory, one could get supersymmetry by adding a six-order potential in the bosonic field. We show that by adding a six-order potential to the Chern-Simons Higgs theory in 3d, we can view the classical reduced equations of this theory as the equations arising from a supersymmetric quantum mechanics. To understand the association of the Chern-Simons Higgs monopoles with the supersymmetric quantum mechanics, we look at the magnetic flux of the monopole. This corresponds, in the supersymmetric quantum mechanics, to the fermion number operator F = ΨΨ, which in turn is related to the Witten index ℐ = ( (-1)^F e^-β H). Similar conclusions hold for the SL(2,ℂ) Chern-Simons Higgs theory, with two distinct actions that are copies of each other. In the last section, we talk about the semiclassical evaluation of the Chern-Simons Higgs theory. 
We give a discussion about the difficulties arising from the interactions between the gauge field and the Higgs field by studying a 0-dimensional toy model to give a flavor of the challenges. We further demonstrate that it is not likely to get an exact quantum duality at the semiclassical limit, and this is because the solution space of the Chern-Simons Higgs action does not admit a group structure. As we discussed 2 paragraphs back, when the solution space does not admit a group structure it does not seem, at least within the lines of section <ref>, very promising to attempt to show an exact duality at the semiclassical level. Although these results discourage the hope of getting an exact duality, this is not to say that one cannot get an approximate relation between the 3d Chern-Simons Higgs theory and the reduced theory. We sketch a saddle point argument where we show that the path integral of the reduced theory approximates, to some degree but not exactly, the path integral of the Chern-Simons Higgs theory, in the semiclassical limit. In showing this, we approximate the path integral of the Chern-Simons Higgs theory by summing over only the classical solutions that possess spherical symmetry. Considering only the spherically symmetric classical solutions is of course not good enough, one would ideally sum over all classical solutions. However, one can say summing over only those classical solutions that possess spherical symmetry is an approximation for the process of summing over all classical solutions. In this approximation, one may use the fact that dimensionally reduced theory is simpler (since it is a lower dimensional theory) to evaluate the contributions to the full path integral, in the semiclassical limit, from the spherically symmetric configurations. With all these interesting results, questions, and possible directions to follow that were unearthed from the study of dimensional reduction in the context of Chern-Simons gauge theories, we remark that these techniques should receive some attention. In particular, the result of section <ref> gives us a very interesting duality, which is not hard in terms of computation, but it is subtle to arrive at the steps that led to the result. Our motivation to establish a connection between the quantum theories of 3d and 1d theories was what led us to this computation, which a priori was expected to be only approximate. To much of our surprise, we found an exact agreement. The techniques and ideas used in this paper would be applicable to various theories as alluded to above. § APPENDIX § THE SPHERICALLY SYMMETRIC ANSATZ Throughout the paper, we employ the following ansatzë: A_i^a = ε_i^ akx_k/r^2(φ_2(r) - 1 ) + δ^⊥ a_i/rφ_1(r) + x_ix^a/r^2 A(r), e_i^a = -ε^a_ikx^k/r^2 e_1(r) + δ^⊥ a_i e_2(r)/r + x_ix^a/r^2 e_3(r), Φ^a = x^a/rΦ(r), where δ^⊥ a_i = ( δ_i^a - x_ix^a/r^2). We note the following identities that will be helpful ε_iakδ^⊥ ak = 0 = ε_iak x^a x^k, δ_i^⊥ a x_i = 0 ; δ_i^⊥ aδ_a^⊥ j = δ_i^⊥ j ; δ_i^⊥ i = 2, ε_ajlδ_i^⊥ a x^l = ε_ijl x^l, along with standard rules of calculus such as ∂_i f(r) = x_i/r f'. We will mean a derivative with respect to r whenever there is a prime. If there is a dot, it is a derivative with respect to either time t or the Euclidean time τ. With these, one calculates ε^ijk∂_j A_k^a = ε^iakx_k/r^2{- (φ'_1 - A) } + δ^⊥_ia/r{φ'_2 } + x_ix_a/r^4{ 2( φ_2-1 ) }, ε^ijkε_abc A_j^b A_k^c = ε^i_ akx^k/r^2{ 2(φ_2 - 1) A } + δ^⊥ i_a/r{ 2φ_1 A } + x^ix_a/r^4{ 2( (φ_2-1)^2 + φ_1^2 ) }. 
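These two decompositions (which are contracted with A_i^a just below) can be verified numerically in the same spirit as the checks in the main text. The sketch below uses arbitrary smooth test profiles and finite-difference derivatives, and compares both expressions against the claimed coefficients of the three tensor structures ε_iak x_k, δ^⊥_ia and x_i x_a at random points.

import numpy as np

phi1, dphi1 = lambda r: np.sin(r),   lambda r: np.cos(r)
phi2, dphi2 = lambda r: np.cos(2*r), lambda r: -2*np.sin(2*r)
Arad        = lambda r: r*np.exp(-r)

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def gauge(x):
    r = np.linalg.norm(x)
    dperp = np.eye(3) - np.outer(x, x)/r**2
    return (np.einsum('iak,k->ia', eps, x)*(phi2(r) - 1)/r**2
            + dperp*phi1(r)/r + np.outer(x, x)*Arad(r)/r**2)

def basis(x):
    """Tensor structures eps_{iak} x_k, delta^perp_{ia}, x_i x_a as [i, a] arrays."""
    r = np.linalg.norm(x)
    return (np.einsum('iak,k->ia', eps, x),
            np.eye(3) - np.outer(x, x)/r**2,
            np.outer(x, x))

rng = np.random.default_rng(3)
for _ in range(5):
    x = rng.uniform(0.5, 2.0, size=3); r = np.linalg.norm(x)
    h = 1e-5
    dA = np.zeros((3, 3, 3))                     # dA[j, k, a] = d_j A_k^a
    for j in range(3):
        step = np.zeros(3); step[j] = h
        dA[j] = (gauge(x + step) - gauge(x - step))/(2*h)
    curl = np.einsum('ijk,jka->ia', eps, dA)     # eps^{ijk} d_j A_k^a
    E, P, XX = basis(x)
    claimed = (E*(-(dphi1(r) - Arad(r)))/r**2 + P*dphi2(r)/r
               + XX*2*(phi2(r) - 1)/r**4)
    assert np.allclose(curl, claimed, atol=1e-6)

    A = gauge(x)
    cubic = np.einsum('ijk,abc,jb,kc->ia', eps, eps, A, A)  # eps^{ijk} eps_{abc} A_j^b A_k^c
    claimed2 = (E*2*(phi2(r) - 1)*Arad(r)/r**2 + P*2*phi1(r)*Arad(r)/r
                + XX*2*((phi2(r) - 1)**2 + phi1(r)**2)/r**4)
    assert np.allclose(cubic, claimed2)
print("appendix decompositions verified at random points")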
Contracting these with A_i^a, one gets ε^ijk A_i^a ∂_j A_k^a = 2/r^2( -(φ_2 - 1) (φ'_1 - A) + φ_1 φ'_2 + A (φ_2-1) ) = 2/r^2( φ_1 φ'_2 - φ'_1 (φ_2-1) + 2A (φ_2-1) ), ε^ijkε_abc A_i^a A_j^b A_k^c = 2/r^2( 2A (φ_2-1)^2 + 2A φ_1^2 + A ( (φ_2-1)^2 + φ_1^2 ) ) = 6/r^2 A ( (φ_2-1)^2 + φ_1^2 ). For the dreibein, one has ε^ijkε_abc e_j^b e_k^c = ε^iakx_k/r^2{ 2 e_1 e_3 } + δ^⊥ ia/r{ 2e_2 e_3 } + x^i x^a/r^4{ 2 ( e_1^2 + e_2^2 ) }, contracting this with e_i^a, one gets e ≡ e = 1/3!ε^ijkε_abc e_i^a e_j^b e_k^c = 1/r^2 e_3 ( e_1^2 + e_2^2 ). One can find the metric using g_ij = e_i^a e_j^b δ_ab = e_i^a e_ja =( -ε^a_ikx^k/r^2 e_1(r) + δ^⊥ a_i e_2(r)/r + x_ix^a/r^2 e(r) ) ( -ε_ajlx^l/r^2 e_1(r) + δ^⊥_jae_2(r)/r + x_jx_a/r^2 e(r) ). Explicitly, this reads g_ij = ε^a_ikε_ajlx^k x^l/r^4 e_1^2 -ε^a_ikδ^⊥_jax^k/r^2 e_1(r) e_2(r) -ε^a_ikx^kx_jx_a/r^4 e_1(r) e_3(r) - δ^⊥ a_i ε_ajlx^l/r^3 e_1(r) e_2(r) + δ^⊥ a_i δ^⊥_jae^2_2(r)/r^2 + δ^⊥ a_i x_jx_a/r^2 e_2(r) e_3(r) - ε_ajlx_ix^ax^l/r^4 e_3(r)e_1(r) + δ^⊥_jax_ix^a/r^3 e_3(r) e_2(r) + x_ix_j x_a x^a/r^4 e_3^2(r). Using the identities in (<ref>), one arrives at g_ij = (δ_ijδ_kl - δ_ilδ_kj) x^kx^l/r^4 e_1^2(r) - ε_jilx^l/r^2 e_1(r) e_2(r) - ε_ijlx^l/r^2 e_1(r) e_2(r) + δ^⊥_ij e^2_2(r) + x_i x_j/r^2 e_3^2 = 1/r^2( δ^⊥_ij (e_1^2 + e_2^2) + x_ix_j e_3^2 ) = e_1^2+e_2^2/r^2δ_ij + ( e_3^2 - e_1^2+e_2^2/r^2) x_ix_j/r^2. One can also compute the inverse vierbein, which is defined by the consistency condition δ_i^j = e_i^a E^j_a. Let us write E as: E_a^j = - ε_a^ jkx_k/r^2 E_1 + δ^⊥ j_a E_2/r + x^jx_a/r^2 E_3, thus, one has δ_i^j = e_i^a E^j_a = ( -ε^a_ikx^k/r^2 e_1 + δ^⊥ a_i e_2/r + x_ix^a/r^2 e_3 ) ( - ε_a^ jlx_l/r^2 E_1 + δ^⊥ j_a E_2/r + x^jx_a/r^2 E_3 ), explicitly, this is δ_i^j = ε^a_ikε_a^ jlx^kx_l/r^4 e_1E_1 - ε^a_ikδ^⊥ j_a x^k/r^3 e_1E_2 - ε^a_ikx^kx^jx_a/r^4e_1 E_3 - ε_a^ jkδ^⊥ a_i x_k/r^3 e_2E_1 + δ^⊥ j_a δ^⊥ a_i e_2E_2/r^2 + δ_i^⊥ ax^jx_a/r^3 e_2 E_3 -ε_a^jkx_kx_ix^a/r^4 e_3 E_1 + δ^⊥ j_a x_ix^a/r^3 e_3 E_2 + x_i x^a x^j x_a/r^4 e_3 E_3. Using the identities in (<ref>), one gets δ_i^j = (δ_i^j δ^l_k - δ^l_i δ^j_k) x^k x_l/r^4 e_1E_1 - ε^j_ ikx^k/r^3 e_1E_2 - 0 - ε_i^ jkx_k/r^3 e_2E_1 + δ^⊥ j_i e_2E_2/r^2 + 0 0 + 0 + x_i x^j/r^2 e_3 E_3. Simplifying further δ^j_i = δ_i^j e_1E_1/r^2 - x^jx_i/r^4 e_1E_1 - ε^j_ikx^k/r^3 (e_1E_2 - e_2 E_1) + δ_i^j e_2E_2/r^2 - x_ix^j/r^4 e_2E_2 + x_ix^j/r^2 e_3 E_3 = δ^j_i e_1E_1 + e_2E_2/r^2 + x_ix^j/r^2( e_3 E_3 - e_1E_1+e_2E_2/r^2) - ε^j_ikx^k/r^3 (e_1E_2 - e_2 E_1). This gives three equations r^2 = e_1E_1 + e_2E_2 , 0 = e_3 E_3 - e_1E_1+e_2E_2/r^2, 0 = e_1E_2 - e_2 E_1, Using the first one in the second gives E_3 = 1/e_3. Using the third equation, we write E_2 = E_1e_2/e_1 and input this into the first one r^2 = e_1 E_1 + e_2^2E_1/e_1 = (e_1^2+e_2^2)E_1/e_1, hence E_1 = e_1 r^2/e_1^2+e_2^2 ; E_2 = e_2 r^2/e_1^2+e_2^2. So, all three ansatz functions that constitute a spherically symmetric inverse of the dreibein E_a^i are found. The form of E_a^i in terms of little es is given by E_a^j = - ε_a^ jk x_k e_1/e_1^2+e_2^2 + δ^⊥ j_a r e_2/e_1^2+e_2^2 + x^jx_a/r^21/e_3. It is almost a trivial manner to compute the inverse of the metric now. We will use the structural similarity of e and E to write g^ij = E^i_a E^j_b δ^ab = 1/r^2( δ^⊥ ij (E_1^2+E_2^2) + x^i x^j E_3^2 ), in terms of little es g^ij = δ^ijr^2/e_1^2+e_2^2 + x^i x^j ( 1/r^2 e_3^2 - 1/e_1^2 + e_2^2). We note that x^a ≡ e_i^a x^i. On the other hand, if we use the spherically symmetric ansatz, we get x^a = e_3 x^a, which compels us to set e_3 = 1. 
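The consistency condition e_i^a E^j_a = δ_i^j and the inverse metric can likewise be checked numerically before imposing e_3 = 1. In the sketch below the profile functions e_1, e_2, e_3 are arbitrary smooth test choices (any smooth functions with e_3 ≠ 0 would do).

import numpy as np

e1 = lambda r: 0.8*r + 0.3          # arbitrary smooth test choices
e2 = lambda r: 1.1*np.sqrt(r)
e3 = lambda r: 1.0 + 0.2*r

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def dreibein(x):
    """e[i, a] for the spherically symmetric ansatz."""
    r = np.linalg.norm(x)
    dperp = np.eye(3) - np.outer(x, x)/r**2
    return (-np.einsum('aik,k->ia', eps, x)*e1(r)/r**2
            + dperp*e2(r)/r + np.outer(x, x)*e3(r)/r**2)

def inverse_dreibein(x):
    """E[j, a] built from E_1, E_2, E_3 as derived above."""
    r = np.linalg.norm(x)
    s = e1(r)**2 + e2(r)**2
    dperp = np.eye(3) - np.outer(x, x)/r**2
    return (-np.einsum('ajk,k->ja', eps, x)*e1(r)/s
            + dperp*r*e2(r)/s + np.outer(x, x)/(r**2*e3(r)))

rng = np.random.default_rng(4)
for _ in range(5):
    x = rng.uniform(0.5, 2.0, size=3)
    e, E = dreibein(x), inverse_dreibein(x)
    assert np.allclose(e @ E.T, np.eye(3))      # e_i^a E^j_a = delta_i^j
    g  = e @ e.T                                # g_ij  = e_i^a e_j^b delta_ab
    gi = E @ E.T                                # g^ij  = E^i_a E^j_b delta^ab
    assert np.allclose(g @ gi, np.eye(3))
print("inverse dreibein and inverse metric verified")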
The metric now reads g_ij = e_1^2+e_2^2/r^2δ_ij + ( 1 - e_1^2+e_2^2/r^2) x_ix_j/r^2, and the line element ds^2 = e_1^2+e_2^2/r^2 (dr^2 + r^2 dΩ^2) + ( 1 - e_1^2+e_2^2/r^2) dr^2 = dr^2 + (e_1^2+e_2^2) dΩ^2. Where we've used dr = x_i dx^i/r. Let us now record some results involving the adjoint scalar field ∂_i Φ^a = δ^⊥ a_iΦ/r + x_ix^a/r^2Φ', ε^abc A_i^b Φ^c = ε^abc( (φ_2 - 1) ε_i^ bkx_k/r^2 + φ_1 δ^⊥ b_i/r + A x_ix^b/r^2) x^c/rΦ = ( δ^a_iδ_ck - δ^a_kδ_ci ) x_kx_c/r^3 (φ_2-1) Φ - ε_i^ acx_c/r^2φ_1 Φ + ε^abcx_ix_bx_c/r^2( ⋯) = δ^⊥ a_i/r (φ_2-1)Φ - ε_i^ acx_c/r^2φ_1 Φ. Thus, the covariant derivative is given by D_i Φ^a = ∂_i Φ^a + ε^abc A_i^b Φ^c = ε_i^ acx_c/r^2{ - φ_1 Φ} + δ^⊥ a_i/r{φ_2 Φ} + x_ix^a/r^2{Φ' }. One computes the following contractions 1/2δ^ij D_i Φ^a D_j Φ^a = 1/r^2( r^2/2Φ'^2 + (φ_1^2 + φ_2^2) Φ^2 ), 1/2 x^i x^j D_i Φ^a D_j Φ^a = 1/2Φ'^2 . Therefore, the kinetic term of a Higgs field in a spherically symmetric space reduces to 1/2 g^ij D_i Φ^a D_j Φ^a = 1/2r^2/e_1^2+e_2^2δ^ij D_i Φ^a D_j Φ^a + 1/2( 1/r^2 - 1/e_1^2+e_2^2) x^i x^j D_i Φ^a D_j Φ^a = 1/2Φ'^2 + 1/e_1^2+e_2^2 (φ_1^2 + φ_2^2) Φ^2. In particular, one has ∫ d^3x √(|g|)1/2 g^ij D_i Φ^a D_j Φ^a = 4π∫ dr (e_1^2+e_2^2) ( 1/2Φ'^2 + 1/e_1^2+e_2^2 (φ_1^2 + φ_2^2) Φ^2 ) = 4π∫ dr ( 1/2 (e_1^2+e_2^2) Φ'^2 + (φ_1^2+ φ_2^2) Φ^2 ). Similarly, for any spherically symmetric integrand, it follows that ∫ d^3x √(|g|)[ ⋯] = 4π∫ dr r^2 e_1^2+e_2^2/r^2[⋯] = 4π∫ dr (e_1^2+e_2^2) [ ⋯]. In computing the abelian field strength in the direction of symmetry breaking, we will also need ℱ_ij^(2) = - 1/Φ^3ε^abcΦ^a D_i Φ^b D_j Φ^c = -1/Φ^3ε^abcΦx^a/r( ε_i^ bdx_d/r^2{ - φ_1 Φ} + δ^⊥ b_i/r{φ_2 Φ} + x_ix^b/r^2{Φ' }) ×( ε_j^ cex_e/r^2{ - φ_1 Φ} + δ^⊥ c_j/r{φ_2 Φ} + x_jx^c/r^2{Φ' }). We will first compute the corresponding magnetic field ℬ_k^(2) = 1/2Φ^3ε_jikℱ_ij^(2) = -1/2 Φ^3ε_kijε^abcΦ^a D_i Φ^b D_j Φ^c. One finds 1/2ε_kijε^abc D_i Φ^b D_j Φ^c = ε_k^ adx_d/r^2{φ_1 ΦΦ' } + δ_k^⊥ a/r{φ_2 ΦΦ' } + x_k x^a/r^4{ (φ_1^2 + φ_2^2) Φ^2 }, ⟹1/2ε_kijε^abcΦ^a D_i Φ^b D_j Φ^c = x_k/r^2Φ^3 (φ_1^2+φ_2^2), so the magnetic field reads ℬ_k^(2) = -x_k/r^2 (φ_1^2 + φ_2^2). Now, observe that ε_klmℬ_k^(2) = 1/2ε_klmε_kijℱ_ij^(2) = 1/2 (δ_ilδ_jm - δ_imδ_jl) ℱ_ij^(2) = ℱ_lm^(2) , therefore, the abelian field strength is given by ℱ_ij^(2) = ε_ijkℬ_k^(2) = -ε_ijkx_k/r^2 (φ_1^2+φ_2^2). plain
http://arxiv.org/abs/2405.09956v1
20240516100301
Data-Assimilated Crystal Growth Simulation for Multiple Crystalline Phases
[ "Yuuki Kubo", "Ryuhei Sato", "Yuansheng Zhao", "Takahiro Ishikawa", "Shinji Tsuneyuki" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
yuuki.kubo@phys.s.u-tokyo.ac.jp Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan Department of Materials Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan Institute of Materials and Systems for Sustainability, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi 464-8601, Japan Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan To determine crystal structures from an X-ray diffraction (XRD) pattern containing multiple unknown phases, a data-assimilated crystal growth (DACG) simulation method has been developed. The XRD penalty function selectively stabilizes the structures in the experimental data, promoting their grain growth during simulated annealing. Since the XRD pattern is calculated as the Fourier transform of the pair distribution function, the DACG simulation can be performed without prior determination of the lattice parameters. We applied it to C (graphite and diamond) and SiO2 (low-quartz and low-cristobalite) systems, demonstrating that the DACG simulation successfully reproduced multiple crystal structures. Data-Assimilated Crystal Growth Simulation for Multiple Crystalline Phases Shinji Tsuneyuki May 20, 2024 ========================================================================== Crystal structure determination is an essential process in materials science to predict and understand the properties of materials. Experimental researchers mainly rely on X-ray diffraction (XRD) measurements or neutron diffraction (ND) spectroscopy to determine crystal structures. To analyze new structures from powder XRD data, we first determine their lattice parameters. Recent developments in crystallographic methodology have made it possible to systematically determine lattice parameters from single-phase XRD patterns <cit.>. On the other hand, because the peak assignment of XRD data significantly increases the number of wrong lattice parameter candidates, it is still challenging to find the correct lattice parameters from XRD data reflecting multiple unknown phases. Computational science has also contributed to crystal structure determination, especially when it is difficult to determine the structure through experimental techniques <cit.>. In computational science, crystal structure has been predicted by finding the global minimum of the physical interatomic potential energy. To overcome the potential energy barrier between local minima and the global minimum, metaheuristics methods based on random sampling <cit.>, particle-swarm optimization <cit.>, genetic algorithms <cit.>, etc., have been employed with first-principles calculations in recent years. Indeed, these methods successfully determined the new structures such as high-temperature superconducting hydrides under ultra-high pressure <cit.>. In addition, combined with machine learning potentials, they can correspond to the variable composition analysis <cit.>. Nevertheless, since these methods still require high computational costs, there is also a demand for computational approaches directly referring to the experimental data. Under such circumstances, one possible way to accomplish rapid and accurate optimization is to supplement theoretical simulations with experimental data <cit.>, and we have developed an experimental data-assimilated molecular dynamics (DAMD) simulation using XRD and ND data. 
Our previous study confirmed that the DAMD simulation can efficiently determine the target crystalline <cit.> and amorphous <cit.> structures. However, these simulations require the lattice parameters to be determined in advance <cit.> and do not work well when XRD data contain peaks from multiple unknown crystalline phases. In this study, we propose a data-assimilated crystal growth (DACG) simulation method, which is applicable to the structure determination even in those cases. Figure <ref> shows a schematic image of the DACG simulation proposed in this study. Here, we assume that the experimental XRD data contain peaks from multiple unknown phases, and the lattice parameters cannot be readily determined. Therefore, in the DACG simulation, we employ large simulation cells including a few thoudsand atoms to obtain grains of the target crystal structures rather than the perfect crystals. To obtain crystal grains, MD simulations of crystal growth are performed and accelerated by employing the cost function F <cit.>: F(R) = E(R) + α N D[I_ref(Q), I_calc(Q; R)] , where R is the atomic coordinates and E is the interatomic potential energy. I_ref and I_calc are the XRD intensity referred from experimental data and that calculated from a structure in the simulation, respectively. D is the XRD penalty function that represents the dissimilarity between I_ref and I_calc. α is the weight parameter, N is the number of atoms, and Q is the magnitude of the scattering vector. Most of the structures inconsistent with the experimental data are destabilized by the cost function F as shown in Fig. <ref>. Therefore, DA can increase the probability of reaching the target structures through optimization. Following our previous study <cit.>, we used a penalty function D based on the correlation coefficient: D = 1 - ∫Q(I_ref - I_ref)(I_calc - I_calc)/√(∫Q(I_ref - I_ref)^2)√(∫Q(I_calc - I_calc)^2), where I is the averaged XRD intensity over the integral range of Q. By defining D in this way, we can simultaneously compare the peak positions and intensities of I_ref and I_calc. In addition, we adopted the following formula to calculate I_calc: I_calc/L(2θ) ∝∑_μc_μf_μ^2(Q) + ∑_μ, νc_μc_νf_μ(Q)f_ν(Q) ×∫_0^r_cutoffr4π r^2ρ_0[g_μν(r) - 1]sin Qr/Qr, where 2θ is the angle between the incident and scattered X-rays, L(2θ) is the polarization factor, c_μ is the ratio of atomic species μ, f_μ(Q) is the atomic form factor, ρ_0 is the number of atoms per volume, and g_μν(r) is the partial pair distribution function (PDF) between atomic species μ and ν. r_cutoff is the cutoff radius for calculating PDF. Since Eq. (<ref>) does not include any lattice parameters and Laue indices directly, we can perform the DACG simulation without any prior determination of the lattice parameters. Furthermore, this method reduces the computational cost by making the cutoff distance r_cutoff smaller. On the other hand, the drawback of this method is that the resolution of the calculated XRD peak position becomes inversely proportional to r_cutoff. When the DACG simulations are performed using multi-phase XRD data as the reference, a polycrystalline structure containing multiple different crystal structures may be obtained in a single simulation cell. 
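To make the data-assimilation term concrete, the NumPy sketch below implements the correlation-based penalty D and the PDF-route intensity I_calc defined above. The trapezoidal Q- and r-integration, the function names, and the calling convention are illustrative assumptions, not the implementation actually coupled to the MD code.

```python
import numpy as np

def xrd_penalty(I_ref, I_calc, Q):
    """Penalty D = 1 - correlation coefficient of I_ref and I_calc over the Q range."""
    span = Q[-1] - Q[0]
    dref = I_ref - np.trapz(I_ref, Q) / span       # subtract Q-averaged intensities
    dcal = I_calc - np.trapz(I_calc, Q) / span
    num = np.trapz(dref * dcal, Q)
    den = np.sqrt(np.trapz(dref**2, Q) * np.trapz(dcal**2, Q))
    return 1.0 - num / den

def intensity_from_pdf(Q, r, g, c, f, rho0):
    """I_calc / L(2theta) from partial PDFs g[mu][nu] on the radial grid r (truncated
    at r_cutoff), species fractions c[mu], form factors f[mu](Q), number density rho0."""
    I = sum(c[m] * f[m](Q) ** 2 for m in range(len(c)))
    kern = np.sinc(np.outer(Q, r) / np.pi)         # sin(Qr)/(Qr)
    for m in range(len(c)):
        for n in range(len(c)):
            integrand = 4.0 * np.pi * r**2 * rho0 * (g[m][n] - 1.0) * kern
            I += c[m] * c[n] * f[m](Q) * f[n](Q) * np.trapz(integrand, r, axis=1)
    return I
```

The cost function F is then the interatomic energy plus alpha * N * xrd_penalty(I_ref, I_calc, Q), evaluated on the instantaneous structure during the annealing run.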
To extract the grains of each phase in those cases, we perform K-means clustering <cit.> based on the site-specific radial distribution function (SSRDF) around each atom i defined as g_i(r) = 1/4π r^2ρ_0∑_j≠ iδ(r-r_ij)f_cut(r), where ρ_0 is the number of atoms per volume, and r_ij is the distance between atoms i and j. Here, f_cut(r) is a smooth cutoff function defined as f_cut(r) = { 1/2[cos(πr/R)+1] (r < R), 0 (r ≥ R), . using cutoff radius R. To test the effectiveness of the DACG simulation, we applied it to the structure determination of C (graphite and diamond) and SiO2 (low-quartz and low-cristobalite) polymorphs. We used the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) <cit.> package and the external code for calculating the penalty function implemented in our previous study <cit.>. The long-range carbon bond order potential (LCBOP) <cit.> for C and the Tsuneyuki potential <cit.> for SiO2 have been employed for the physical interatomic potential energy. The temperature was controlled by the velocity scaling method <cit.>, and the time step was set to 0.5 and 1 for C and SiO2 simulations, respectively. XRD patterns from 0 to 4^-1 were used to calculate the penalty function D. For calculating I_calc from structures during the DACG simulation, we set r_cutoff to 15 and 21-25 for C and SiO2 cases, respectively. As initial atomic configurations, 6000 and 10200 atoms were randomly placed in cubic boxes for C and SiO2 cases, respectively. Here, the simulation cell size was determined so that the atomic density was consistent with the density of each desired crystal structure <cit.>. We performed simulated annealing with the cost function F (DA, data assimilation) and that with the interatomic potential E (SA, normal simulated annealing). First, since the initial configurations were energetically too unstable, we performed short MD simulations with small time steps without the XRD penalty function to relax the structure. Next, the temperatures were set at 14000 and 10000 for C and SiO2, respectively, and then decreased linearly to 0 in 0.5 and 5. The weight parameter α of the XRD penalty function in Eq. (<ref>) was set to 10 for both C and SiO2 cases in this study. First, we performed simulated annealing with the cost function F (Eq. (<ref>)) using single-phase XRD data of graphite (C), diamond (C), low-quartz (SiO2) and low-cristobalite (SiO2). Figures <ref>(a)-(d) show the structure factors S(Q), corresponding to the reference XRD pattern used for the DACG simulations (black dashed lines), and those calculated from the structures obtained by DA (red lines) and SA (blue lines). As shown in Figs. <ref>(a)-(d), no Bragg peaks appear in S(Q) for the structures obtained by SA, suggesting that no ordered structures were obtained. On the other hand, peak positions of S(Q) obtained by DA are comparable with those in the reference S(Q). This suggests that the target structures were successfully obtained by DA owing to the XRD penalty function D. Figures <ref>(e)-(h) show the snapshots of the structures obtained after the DACG simulations. As shown in the snapshots, we successfully obtained the target structures: a layered structure like graphite in Fig. <ref>(e), a three-dimensional network originated from sp^3 hybrid orbitals formed over the whole simulation cell in Fig. <ref>(f), a triple-helical structure with some disordered parts in Fig. <ref>(g), and a network consisting only of six-membered rings in Fig. <ref>(h). 
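To illustrate the grain-extraction step in code, the sketch below histograms the SSRDF of each atom, applies the cosine cutoff f_cut, and clusters the resulting fingerprints with scikit-learn's KMeans. The bin width, the orthorhombic minimum-image treatment, and the KMeans settings are illustrative assumptions; only the functional form follows the definitions above.

```python
import numpy as np
from sklearn.cluster import KMeans

def ssrdf_fingerprints(pos, box, R=5.0, nbins=50):
    """Per-atom SSRDF g_i(r) on a radial histogram, smoothed by f_cut."""
    n = len(pos)
    rho0 = n / np.prod(box)                          # atoms per volume
    edges = np.linspace(0.0, R, nbins + 1)
    centers = 0.5 * (edges[1:] + edges[:-1])
    shell = 4.0 * np.pi * centers**2 * np.diff(edges) * rho0
    fcut = 0.5 * (np.cos(np.pi * centers / R) + 1.0)
    fp = np.zeros((n, nbins))
    for i in range(n):
        d = pos - pos[i]
        d -= box * np.round(d / box)                 # minimum-image convention
        rij = np.linalg.norm(d, axis=1)
        rij = rij[(rij > 1e-8) & (rij < R)]
        fp[i] = np.histogram(rij, bins=edges)[0] * fcut / shell
    return fp

# two clusters, as in the graphite/diamond separation discussed below:
# labels = KMeans(n_clusters=2, n_init=10).fit_predict(ssrdf_fingerprints(pos, box))
```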
These results suggest that the DACG simulations can determine the target structures correctly from single-phase XRD patterns without prior determination of the lattice parameters. Note that the small oscillations appearing in S(Q) in the DA cases are caused by the Fourier transformation of PDF with the finite cutoff distance r_cutoff in Eq. (<ref>). Next, we conducted DACG simulations to determine multiple crystal structures from multi-phase reference XRD data. We used 1:1 mixture of the XRD patterns of ideal graphite and diamond structures as the reference XRD data (black dashed line) as shown in Fig. <ref>(a). The atomic density was set to the average of those of graphite and diamond. Figure <ref>(a) also shows the structure factors S(Q) of the structures obtained by DA and SA. As shown in Fig. <ref>(a), peak positions of S(Q) obtained after the DACG simulation (red solid line) agrees well with those of the ideal graphite and diamond peaks in the reference S(Q), suggesting that the structure obtained by DA contains ordered structures related to the target ones. On the other hand, any peaks in the reference S(Q) were not reproduced by SA, suggesting that SA without the penalty function D did not even yield any ordered structures. We can also see that the structure obtained by the DACG simulation indeed has some ordered structures shown in Fig. <ref>(c). However, it is difficult to conclude from only the snapshot whether this structure contains both graphite and diamond structures simultaneously. Therefore, we conducted K-means clustering to extract crystal grains from the obtained structure. Here, the number of groups for clustering was set to two so that the atoms could be classified into graphite and diamond phases, and the cutoff distance for the SSRDF calculation was set to 5. As the result of the clustering, the atoms were classified into two phases as shown in Figs. <ref>(d) and (e) which form graphite-like and diamond-like structures, respectively. Furthermore, the SSRDFs averaged over the center atoms in Figs. <ref>(d) (red line) and (e) (blue line) are shown in Fig. <ref>(b). As shown in this figure, it is confirmed that there are groups of atoms with different distances to the first and second nearest neighboring atoms and that the clustering successfully distinguished them. Thus, we concluded that, in the case of multi-phase C, graphite and diamond were obtained simultaneously by a single DACG simulation, and the grain of each phase was successfully extracted by K-means clustering using the SSRDF as an atomic fingerprint. In the case of multi-phase SiO2, all the target structures (low-quartz and low-cristobalite) were successfully reproduced by repeating the DACG simulation multiple times. Figure <ref>(a) shows the XRD patterns of ideal low-quartz and low-cristobalite, which were mixed in a 2.5:1 ratio and used as the reference XRD data in the DACG simulations. The mixing ratio of the XRD patterns was determined so that the maximum intensity of low-quartz and low-cristobalite XRD patterns take the same value. The atomic density was set to the average of those of low-quartz and low-cristobalite. Figure <ref>(a) also shows the XRD pattern calculated from the structure obtained after the first DACG simulation. This XRD pattern was calculated using the lattice parameters of the simulation cell and Laue indices to analyze the obtained structure in detail. Note that during the DACG simulation, XRD patterns were still calculated using Eq. (<ref>). 
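Two small pre-processing steps used in these multi-phase tests can also be sketched: building the mixed reference pattern by rescaling one single-phase pattern so that the two maxima take the same value, and excluding Q windows already explained by a previously found phase from the penalty of a follow-up run (the masking applied to the second SiO2 simulation described next). The single-phase arrays, the window limits, and the simple grid exclusion are placeholders, and xrd_penalty refers to the earlier sketch.

```python
# multi-phase reference: rescale so that both single-phase maxima coincide
w = I_cristobalite.max() / I_quartz.max()      # fixes the mixing ratio used in the text
I_ref = w * I_quartz + I_cristobalite

# sequential run: drop Q windows already assigned to the phase found first
def masked_penalty(I_ref, I_calc, Q, windows):
    keep = np.ones_like(Q, dtype=bool)
    for qlo, qhi in windows:                   # e.g. narrow ranges around known peaks
        keep &= ~((Q >= qlo) & (Q <= qhi))
    # a more careful version would integrate each kept segment separately
    return xrd_penalty(I_ref[keep], I_calc[keep], Q[keep])
```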
As shown in the figure, the first and second peaks of quartz were reproduced by the first DACG simulation. On the other hand, none of the peaks of cristobalite were reproduced by the first DACG simulation. This suggests that the first DACG simulation yielded only quartz-like ordered structures. Figure <ref>(c) shows the snapshot of the structure obtained by the first DACG simulation, in which a quartz-like triple-helical structure exists. To obtain the crystal structures of the remaining phases, we conducted the DACG simulation again using the reference XRD data. However, this time, the powder XRD pattern in the range highlighted in gray in Fig. <ref>(b) was not used for calculating the XRD penalty function D during the simulation, since the structure corresponding to the peaks in those ranges has been already obtained by the previous DACG simulation. Figure <ref>(b) shows the XRD pattern calculated using the lattice parameters of the simulation cell and Laue indices from the structure obtained after the second DACG simulation. As shown in this figure, unlike the first DACG simulation, the second one reproduced the first peak of cristobalite. Furthermore, Fig. <ref>(d) shows a snapshot of the structure obtained by the second DACG simulation, in which a cristobalite-like structure can be seen. Thus, we conclude that in the case of multi-phase SiO2, all the target structures were successfully obtained by the sequential DACG simulations, in which the XRD peaks related to the structures obtained at each DACG simulation are masked one by one. In summary, we proposed a computer simulation method for efficiently growing and finding crystal structures by assimilating powder diffraction data from unknown crystalline phases. This method will help determine the crystal structures of new materials in combinatorial synthesis or under extreme conditions. This work was supported by JSPS KAKENHI Grant numbers JP18H05519, JP24K00544, and JP20H05644. The computation in this work was partly performed by using the facilities of the Supercomputer Center, the Institute for Solid State Physics, The University of Tokyo. Y.K. is supported by MEXT - Quantum Leap Flagship Program (MEXT Q-LEAP). apsrev4-2 43 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Visser(1969)]Visser:a07076 author author J. W. Visser, https://doi.org/10.1107/S0021889869006649 journal journal Journal of Applied Crystallography volume 2, pages 89 (year 1969)NoStop [Werner et al.(1985)Werner, Eriksson, and Westdahl]Werner:a25264 author author P.-E. Werner, author L. Eriksson, and author M. Westdahl, https://doi.org/10.1107/S0021889885010512 journal journal Journal of Applied Crystallography volume 18, pages 367 (year 1985)NoStop [Altomare et al.(2009)Altomare, Campi, Cuocci, Eriksson, Giacovazzo, Moliterni, Rizzi, and Werner]Altomare:ce5059 author author A. Altomare, author G. Campi, author C. Cuocci, author L. Eriksson, author C. Giacovazzo, author A. Moliterni, author R. Rizzi, and author P.-E. Werner, https://doi.org/10.1107/S0021889809025503 journal journal Journal of Applied Crystallography volume 42, pages 768 (year 2009)NoStop [Boultif and Louër(2004)]Boultif:ks0218 author author A. Boultif and author D. 
Louër, https://doi.org/10.1107/S0021889804014876 journal journal Journal of Applied Crystallography volume 37, pages 724 (year 2004)NoStop [Le Bail(2004)]lebail_2004 author author A. Le Bail, https://doi.org/10.1154/1.1763152 journal journal Powder Diffraction volume 19, pages 249–254 (year 2004)NoStop [Oishi-Tomiyasu(2014)]Oishi-Tomiyasu:fs5064 author author R. Oishi-Tomiyasu, https://doi.org/10.1107/S1600576714000922 journal journal Journal of Applied Crystallography volume 47, pages 593 (year 2014)NoStop [Woodley and Catlow(2008)]Woodley2008 author author S. M. Woodley and author R. Catlow, https://doi.org/10.1038/nmat2321 journal journal Nature Materials volume 7, pages 937 (year 2008)NoStop [Pickard and Needs(2011)]Pickard_2011 author author C. J. Pickard and author R. J. Needs, https://doi.org/10.1088/0953-8984/23/5/053201 journal journal Journal of Physics: Condensed Matter volume 23, pages 053201 (year 2011)NoStop [Wang et al.(2010)Wang, Lv, Zhu, and Ma]PhysRevB.82.094116 author author Y. Wang, author J. Lv, author L. Zhu, and author Y. Ma, https://doi.org/10.1103/PhysRevB.82.094116 journal journal Phys. Rev. B volume 82, pages 094116 (year 2010)NoStop [Wang et al.(2012)Wang, Lv, Zhu, and Ma]WANG20122063 author author Y. Wang, author J. Lv, author L. Zhu, and author Y. Ma, https://doi.org/https://doi.org/10.1016/j.cpc.2012.05.008 journal journal Computer Physics Communications volume 183, pages 2063 (year 2012)NoStop [Oganov and Glass(2006)]10.1063/1.2210932 author author A. R. Oganov and author C. W. Glass, https://doi.org/10.1063/1.2210932 journal journal The Journal of Chemical Physics volume 124, pages 244704 (year 2006)NoStop [Glass et al.(2006)Glass, Oganov, and Hansen]glass_uspexevolutionary_2006 author author C. W. Glass, author A. R. Oganov, and author N. Hansen, https://doi.org/https://doi.org/10.1016/j.cpc.2006.07.020 journal journal Computer Physics Communications volume 175, pages 713 (year 2006)NoStop [Ishikawa and Miyake(2020)]PhysRevB.101.214106 author author T. Ishikawa and author T. Miyake, https://doi.org/10.1103/PhysRevB.101.214106 journal journal Phys. Rev. B volume 101, pages 214106 (year 2020)NoStop [Li et al.(2014)Li, Hao, Liu, Li, and Ma]10.1063/1.4874158 author author Y. Li, author J. Hao, author H. Liu, author Y. Li, and author Y. Ma, https://doi.org/10.1063/1.4874158 journal journal The Journal of Chemical Physics volume 140, pages 174712 (year 2014)NoStop [Duan et al.(2014)Duan, Liu, Tian, Li, Huang, Zhao, Yu, Liu, Tian, and Cui]Duan2014 author author D. Duan, author Y. Liu, author F. Tian, author D. Li, author X. Huang, author Z. Zhao, author H. Yu, author B. Liu, author W. Tian, and author T. Cui, https://doi.org/10.1038/srep06968 journal journal Scientific Reports volume 4, pages 6968 (year 2014)NoStop [Drozdov et al.(2015)Drozdov, Eremets, Troyan, Ksenofontov, and Shylin]Drozdov2015 author author A. P. Drozdov, author M. I. Eremets, author I. A. Troyan, author V. Ksenofontov, and author S. I. Shylin, https://doi.org/10.1038/nature14964 journal journal Nature volume 525, pages 73 (year 2015)NoStop [Einaga et al.(2016)Einaga, Sakata, Ishikawa, Shimizu, Eremets, Drozdov, Troyan, Hirao, and Ohishi]Einaga2016 author author M. Einaga, author M. Sakata, author T. Ishikawa, author K. Shimizu, author M. I. Eremets, author A. P. Drozdov, author I. A. Troyan, author N. Hirao, and author Y. 
Ohishi, https://doi.org/10.1038/nphys3760 journal journal Nature Physics volume 12, pages 835 (year 2016)NoStop [Peng et al.(2017)Peng, Sun, Pickard, Needs, Wu, and Ma]PhysRevLett.119.107001 author author F. Peng, author Y. Sun, author C. J. Pickard, author R. J. Needs, author Q. Wu, and author Y. Ma, https://doi.org/10.1103/PhysRevLett.119.107001 journal journal Phys. Rev. Lett. volume 119, pages 107001 (year 2017)NoStop [Liu et al.(2017)Liu, Naumov, Hoffmann, Ashcroft, and Hemley]10.1073/pnas.1704505114 author author H. Liu, author I. I. Naumov, author R. Hoffmann, author N. W. Ashcroft, and author R. J. Hemley, https://doi.org/10.1073/pnas.1704505114 journal journal Proceedings of the National Academy of Sciences volume 114, pages 6990 (year 2017)NoStop [Geballe et al.(2018)Geballe, Liu, Mishra, Ahart, Somayazulu, Meng, Baldini, and Hemley]10.1002/anie.201709970 author author Z. M. Geballe, author H. Liu, author A. K. Mishra, author M. Ahart, author M. Somayazulu, author Y. Meng, author M. Baldini, and author R. J. Hemley, https://doi.org/https://doi.org/10.1002/anie.201709970 journal journal Angewandte Chemie International Edition volume 57, pages 688 (year 2018)NoStop [Somayazulu et al.(2019)Somayazulu, Ahart, Mishra, Geballe, Baldini, Meng, Struzhkin, and Hemley]PhysRevLett.122.027001 author author M. Somayazulu, author M. Ahart, author A. K. Mishra, author Z. M. Geballe, author M. Baldini, author Y. Meng, author V. V. Struzhkin, and author R. J. Hemley, https://doi.org/10.1103/PhysRevLett.122.027001 journal journal Phys. Rev. Lett. volume 122, pages 027001 (year 2019)NoStop [Drozdov et al.(2019)Drozdov, Kong, Minkov, Besedin, Kuzovnikov, Mozaffari, Balicas, Balakirev, Graf, Prakapenka, Greenberg, Knyazev, Tkacz, and Eremets]Drozdov2019 author author A. P. Drozdov, author P. P. Kong, author V. S. Minkov, author S. P. Besedin, author M. A. Kuzovnikov, author S. Mozaffari, author L. Balicas, author F. F. Balakirev, author D. E. Graf, author V. B. Prakapenka, author E. Greenberg, author D. A. Knyazev, author M. Tkacz, and author M. I. Eremets, https://doi.org/10.1038/s41586-019-1201-8 journal journal Nature volume 569, pages 528 (year 2019)NoStop [Podryabinkin et al.(2019)Podryabinkin, Tikhonov, Shapeev, and Oganov]PhysRevB.99.064114 author author E. V. Podryabinkin, author E. V. Tikhonov, author A. V. Shapeev, and author A. R. Oganov, https://doi.org/10.1103/PhysRevB.99.064114 journal journal Phys. Rev. B volume 99, pages 064114 (year 2019)NoStop [Pickard(2022)]PhysRevB.106.014102 author author C. J. Pickard, https://doi.org/10.1103/PhysRevB.106.014102 journal journal Phys. Rev. B volume 106, pages 014102 (year 2022)NoStop [Ishikawa et al.(2024)Ishikawa, Tanaka, and Tsuneyuki]PhysRevB.109.094106 author author T. Ishikawa, author Y. Tanaka, and author S. Tsuneyuki, https://doi.org/10.1103/PhysRevB.109.094106 journal journal Phys. Rev. B volume 109, pages 094106 (year 2024)NoStop [Deem and Newsam(1992)]Deem1992 author author M. W. Deem and author J. M. Newsam, https://doi.org/10.1021/ja00044a035 journal journal Journal of the American Chemical Society volume 114, pages 7189 (year 1992)NoStop [Falcioni and Deem(1999)]10.1063/1.477812 author author M. Falcioni and author M. W. Deem, https://doi.org/10.1063/1.477812 journal journal The Journal of Chemical Physics volume 110, pages 1754 (year 1999)NoStop [Putz et al.(1999)Putz, Schön, and Jansen]Putz:zm0055 author author H. Putz, author J. C. Schön, and author M. 
Jansen, https://doi.org/10.1107/S0021889899006615 journal journal Journal of Applied Crystallography volume 32, pages 864 (year 1999)NoStop [Lanning et al.(2000)Lanning, Habershon, Harris, Johnston, Kariuki, Tedesco, and Turner]LANNING2000296 author author O. J. Lanning, author S. Habershon, author K. D. Harris, author R. L. Johnston, author B. M. Kariuki, author E. Tedesco, and author G. W. Turner, https://doi.org/https://doi.org/10.1016/S0009-2614(99)01366-4 journal journal Chemical Physics Letters volume 317, pages 296 (year 2000)NoStop [Coelho(2000)]Coelho:ks0007 author author A. A. Coelho, https://doi.org/10.1107/S002188980000248X journal journal Journal of Applied Crystallography volume 33, pages 899 (year 2000)NoStop [Tsujimoto et al.(2018)Tsujimoto, Adachi, Akashi, Todo, and Tsuneyuki]PhysRevMaterials.2.053801 author author N. Tsujimoto, author D. Adachi, author R. Akashi, author S. Todo, and author S. Tsuneyuki, https://doi.org/10.1103/PhysRevMaterials.2.053801 journal journal Phys. Rev. Mater. volume 2, pages 053801 (year 2018)NoStop [Yoshikawa et al.(2022)Yoshikawa, Sato, Akashi, Todo, and Tsuneyuki]10.1063/5.0125553 author author S. Yoshikawa, author R. Sato, author R. Akashi, author S. Todo, and author S. Tsuneyuki, https://doi.org/10.1063/5.0125553 journal journal The Journal of Chemical Physics volume 157, pages 224112 (year 2022)NoStop [Zhao et al.(2023)Zhao, Sato, and Tsuneyuki]ZHAO2023122028 author author Y. Zhao, author R. Sato, and author S. Tsuneyuki, https://doi.org/https://doi.org/10.1016/j.jnoncrysol.2022.122028 journal journal Journal of Non-Crystalline Solids volume 600, pages 122028 (year 2023)NoStop [Pedregosa et al.(2011)Pedregosa, Varoquaux, Gramfort, Michel, Thirion, Grisel, Blondel, Prettenhofer, Weiss, Dubourg, Vanderplas, Passos, Cournapeau, Brucher, Perrot, and Duchesnay]scikit-learn author author F. Pedregosa, author G. Varoquaux, author A. Gramfort, author V. Michel, author B. Thirion, author O. Grisel, author M. Blondel, author P. Prettenhofer, author R. Weiss, author V. Dubourg, author J. Vanderplas, author A. Passos, author D. Cournapeau, author M. Brucher, author M. Perrot, and author E. Duchesnay, @noop journal journal Journal of Machine Learning Research volume 12, pages 2825 (year 2011)NoStop [Plimpton(1995)]PLIMPTON19951 author author S. Plimpton, https://doi.org/https://doi.org/10.1006/jcph.1995.1039 journal journal Journal of Computational Physics volume 117, pages 1 (year 1995)NoStop [Thompson et al.(2022)Thompson, Aktulga, Berger, Bolintineanu, Brown, Crozier, in 't Veld, Kohlmeyer, Moore, Nguyen, Shan, Stevens, Tranchida, Trott, and Plimpton]THOMPSON2022108171 author author A. P. Thompson, author H. M. Aktulga, author R. Berger, author D. S. Bolintineanu, author W. M. Brown, author P. S. Crozier, author P. J. in 't Veld, author A. Kohlmeyer, author S. G. Moore, author T. D. Nguyen, author R. Shan, author M. J. Stevens, author J. Tranchida, author C. Trott, and author S. J. Plimpton, https://doi.org/https://doi.org/10.1016/j.cpc.2021.108171 journal journal Computer Physics Communications volume 271, pages 108171 (year 2022)NoStop [Los and Fasolino(2003)]PhysRevB.68.024107 author author J. H. Los and author A. Fasolino, https://doi.org/10.1103/PhysRevB.68.024107 journal journal Phys. Rev. B volume 68, pages 024107 (year 2003)NoStop [Tsuneyuki et al.(1988)Tsuneyuki, Tsukada, Aoki, and Matsui]PhysRevLett.61.869 author author S. Tsuneyuki, author M. Tsukada, author H. Aoki, and author Y. 
Matsui, https://doi.org/10.1103/PhysRevLett.61.869 journal journal Phys. Rev. Lett. volume 61, pages 869 (year 1988)NoStop [Woodcock(1971)]WOODCOCK1971257 author author L. Woodcock, https://doi.org/https://doi.org/10.1016/0009-2614(71)80281-6 journal journal Chemical Physics Letters volume 10, pages 257 (year 1971)NoStop [Fayos(1999)]FAYOS1999278 author author J. Fayos, https://doi.org/https://doi.org/10.1006/jssc.1999.8448 journal journal Journal of Solid State Chemistry volume 148, pages 278 (year 1999)NoStop [Levien et al.(1980)Levien, Prewitt, and Weidner]Levien author author L. Levien, author C. T. Prewitt, and author D. J. Weidner, @noop journal journal American Mineralogist volume 65, pages 920 (year 1980)NoStop [Peacor(1973)]+1973+274+298 author author D. R. Peacor, https://doi.org/doi:10.1524/zkri.1973.138.1-4.274 journal journal Zeitschrift für Kristallographie - Crystalline Materials volume 138, pages 274 (year 1973)NoStop [Stukowski(2009)]Stukowski_2010 author author A. Stukowski, https://doi.org/10.1088/0965-0393/18/1/015012 journal journal Modelling and Simulation in Materials Science and Engineering volume 18, pages 015012 (year 2009)NoStop
http://arxiv.org/abs/2405.09902v1
20240516084905
Unveiling the Potential: Harnessing Deep Metric Learning to Circumvent Video Streaming Encryption
[ "Arwin Gansekoele", "Tycho Bot", "Rob van der Mei", "Sandjai Bhulai", "Mark Hoogendoorn" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.CR" ]
Unveiling the Potential: Harnessing Deep Metric Learning to Circumvent Video Streaming Encryption Arwin Gansekoele12, Tycho Bot3, Rob van der Mei12, Sandjai Bhulai2 and Mark Hoogendoorn2 1Centrum Wiskunde & Informatica, Amsterdam, the Netherlands 2Vrije Universiteit, Amsterdam, the Netherlands 3Ministry of Defence, The Hague, the Netherlands Email: arwin.gansekoele@cwi.nl May 20, 2024 ========================================================================================================================================================================================================================================================================================= Encryption on the internet with the shift to HTTPS has been an important step to improve the privacy of internet users. However, there is an increasing body of work about extracting information from encrypted internet traffic without having to decrypt it. Such attacks bypass security guarantees assumed to be given by HTTPS and thus need to be understood. Prior works showed that the variable bitrates of video streams are sufficient to identify which video someone is watching. These works generally have to make trade-offs in aspects such as accuracy, scalability, robustness, etc. These trade-offs complicate the practical use of these attacks. To that end, we propose a deep metric learning framework based on the triplet loss method. Through this framework, we achieve robust, generalizable, scalable and transferable encrypted video stream detection. First, the triplet loss is better able to deal with video streams not seen during training. Second, our approach can accurately classify videos not seen during training. Third, we show that our method scales well to a dataset of over 1000 videos. Finally, we show that a model trained on video streams over Chrome can also classify streams over Firefox. Our results suggest that this side-channel attack is more broadly applicable than originally thought. We provide our code alongside a diverse and up-to-date dataset for future research. deep learning, one-shot learning, encryption, video streaming § INTRODUCTION The Internet has become an indispensable part of everyday life in modern society. Being able to interact with family and friends in the blink of an eye, irrespective of their location is among the greatest accomplishments of the past century. A vital component in the pipeline to sharing information over the internet is the Hypertext Transfer Protocol (HTTP). HTTP was not designed initially with any security features, however. If a third party wanted to see the content of someone’s actions on the internet, it would be reasonably simple to do so. Such a third party could even pretend to be one of the communicating parties, a man-in-the-middle (MITM) attack. With that risk in mind, a secure version of HTTP called HyperText Transfer Protocol Secure (HTTPS) was proposed. This version added encryption and verification to the existing HTTP protocol. This transition had major implications for automated packet inspection methods. Without encryption, connections could be inspected automatically using connection parameters to identify the traffic, e.g., video, audio, chat etc.<cit.>. This transition is thus great news for the end user as encryption gives a strong guarantee of privacy. However, it also complicates the work of internet service providers (ISPs), as well as giving cyber criminals more leeway to go undetected. Clearly, there are parties that would benefit from bypassing HTTPS. 
It is thus important to understand the weaknesses in the HTTPS protocol to be able to patch them. To this end, a new form of side-channel attacks has been researched recently <cit.>. While it is nearly impossible to decrypt the content of an interaction, the interaction itself still contains information. One instance of such connections is video streaming. The majority of video streams are transmitted using either the HTTP live streaming (HLS) or dynamic adaptive streaming over HTTP (MPEG-DASH) protocol. These enable HTTP(S) servers to deliver video content adaptively based on users’ preferences and internet availability. As such, they can identify the network conditions of users on the fly and send segments of appropriate quality. However, multiple works <cit.> have shown that these protocols leak information such that a third party could identify which video is being watched when matched against a set of known videos. These range from approaches based on network statistics <cit.>, manual feature extraction with traditional machine learning (ML) methods<cit.> and deep neural networks (DNNs)<cit.>. We find the latter two of these approaches to have their own strengths and weaknesses. A traditional ML approach such as the k-nearest neighbor (kNN) algorithm is cheap to extend with new classes but is not as accurate as DNNs. In turn, DNNs are expensive to retrain. Furthermore, no work has yet proposed a method to deal with out-of-distribution (OOD) data. This is data not seen during training but encountered at test time. Finally, no learning-based method is currently capable of transferring across different settings. Browsers can have different implementations of the MPEG-DASH controller, for example, which means that the same video watched on different browsers can result in entirely different streams. To address these issues, we propose a triplet loss approach with an extension we call outlier leveraging (OL). Our approach combines the strengths of both classes of methods while also being better able to deal with OOD data. More specifically, we make the following contributions. * Robust. We show that triplet loss models are significantly more robust in the presence of OOD video streams than current state-of-the-art models. The addition of OL further improves robustness when OOD streams are dissimilar from the streams trained on. * Generalizable. We find that the triplet loss model allows the inclusion of new classes without retraining the model. We empirically show that our method achieves this with high accuracy. * Scalable. We demonstrate that our method scales well when the number of videos to detect is large and the number of streams available is small. * Transferable. We show that our method transfers well across settings. A model trained on video streams from Chrome is also useful for classifying video streams from Firefox. In practice, our work suggests the possibility of bypassing encryption on a large scale for video streaming. § RELATED WORK Through the challenges defined above, we identify three related areas of research. The main area of research is encrypted streaming content detection, the identification of which videos correspond to which encrypted video streams. §.§ Encrypted Video Stream Detection Many works have already shown that it is possible to identify the nature of encrypted network traffic <cit.>. While it was known that it is possible to determine the nature of traffic, Dubin et al. 
<cit.> were among the first to show that the connection is sufficiently distinct to determine which video is being watched. Gu et al. <cit.> created a novel fingerprinting algorithm to identify variable bit-rate video streams. Wu et al. <cit.> proposed a similar method but aligned the sequences instead of treating them as sets and applying ML. Reed and Klimkowski <cit.> demonstrated that it is possible to construct fingerprints of Netflix shows statistically purely by looking at the segment sizes and bitrates. In follow-up work, Reed and Kranch <cit.> showed that an update by Netflix did little to patch this vulnerability. Dias et al. <cit.> proposed a video streaming classifier based on the Naive Bayes method. Schuster et al. <cit.> were among the first to use DNNs for this problem, specifically convolution neural networks (CNNs). Bae et al. <cit.> demonstrated the effectiveness of this attack in LTE networks. Wu et al. <cit.> showed that it is also possible to identify the resolution of a video stream. We build on this body of work through our deep metric learning approach. §.§ Out-of-Distribution Detection and Open Set Recognition Our work also touches on the field of out-of-distribution detection (or open-set recognition). This ML field addresses the issues of models face in the presence of data they were not initially trained on. Hendrycks and Gimpel <cit.> introduced a baseline model with an evaluation suite for out-of-distribution detection. Lee et al. <cit.> opted to use additional synthetic out-of-distribution samples as training input. This method worked well for synthetic out-of-distribution samples, but natural images proved problematic. Hendrycks et al. <cit.> proposed the outlier exposure method, where they fine-tuned a classifier using a large dataset and showed it improves OOD detection. These works have not been applied to encrypted video streams and form the basis for the new outlier leveraging method we propose. §.§ Deep Metric Learning A weakness of the kNN approach is that applying a distance metric to compare features is not necessarily informative. Deep metric learning is an approach where neural networks are used to learn a feature transformation such that the transformed features have an informative distance metric. The most common deep metric learning approach is the triplet loss<cit.>. While originally thought not to be as effective as classification losses, Hermans et al. <cit.> set guidelines on how to train triplet loss models and showed that they can be more effective than classification losses. Sirinam et al. <cit.> approached the problem of web fingerprinting using a triplet loss-based network. They achieved promising results in adapting to new websites. Wang et al. <cit.> expanded on this by including adversarial domain adaptation to better transfer knowledge from a source dataset to a target dataset. We use these works to devise our own triplet loss-based method. § METHODOLOGY We now explain our method to address the challenges described earlier. Recall that, in our case, we wish to determine which video was watched based on the stream we observed. A stream in this case is a time series of how many bits were received at every point in time. Every time a video is streamed, the bits received and the times at which they are received differ based on factors such as the browser or network conditions. As the stream is encrypted, we assume it is not possible to determine the video based on the content. 
In fact, it is important to note that the patterns captured in this side-channel attack are not based on the video content but on its MPEG-DASH segmentation. The same video hosted on different platforms would result in dissimilar streams if different parameters are used. The variance between different streams of the same video also makes it difficult to establish exact matches. That is why we use multiple different streams as samples and their corresponding videos as class labels to create a classification model. Our approach uses the triplet loss, which fulfils the same purpose but operates differently. §.§ Triplet Loss The triplet loss is a metric learning loss approach used to teach a model to predict the similarity between samples. A neural network learns to map samples onto a metric space where samples from the same class are closer together than samples from different classes. In our case, it means that it positions streams from the same video closer together. The triplet loss is computed over a set of triplets 𝕋 where each triplet is a combination of three unique streams: an anchor *a, a positive *p and a negative *n. The anchor is a stream that belongs to the same video as the positive stream, whereas the negative stream is a stream of a different video. Intuitively, the negative stream should be further away from the anchor stream than the positive stream. A margin parameter m is normally included to define how much further away a negative stream should be compared to the positive stream. The goal of the triplet loss is thus to teach a neural network f_θ parameterized by parameters θ to transform some stream x^(i) into an embedding f_θ(x^(i)) such that any embedding f_θ(x^(j)) of the same video lies closer to this embedding than the embeddings of streams of different videos. The 'closeness' can be defined through any differentiable distance metric. We have opted for the Euclidean distance as it is often the most effective <cit.>. We define the formula for the triplet loss in Equation <ref>. This metric learning approach differs fundamentally from classification approaches <cit.>. Classification approaches teach a model of the similarity of a stream to a pre-defined video. They are directly optimised to determine which video stream belongs to which videos. Classification approaches are thus often more accurate, as they are optimised for that purpose. A metric learning approach teaches a model which streams are similar but not explicitly which videos they belong to. A major advantage of such an approach is that the resulting model is not dependent on which videos it sees during training. It can classify any arbitrary video, which has made this approach popular in other fields for n-shot learning. ℒ_triplet(f_θ ; 𝕋; m) = 1/|𝕋|∑_(a,p,n)∈𝕋max{0, m + ||f_θ(*x^(a)) - f_θ(*x^(p))||_2 - ||f_θ(*x^(a)) - f_θ(*x^(n))||_2}. We perform our triplet selection in an online fashion similar to <cit.>. This means that we first generate the embeddings from a randomly sampled balanced batch. A balanced batch entails having a subset of all classes and sampling an equal amount of samples for each of those classes. After generating the embeddings, we perform triplet selection and compute the loss within this batch. To select the triplets, we use two strategies. In the first ten epochs, we use a semi-hard negative strategy. In this strategy, we randomly select for each positive-anchor pair a random negative such that the triplet is semi-hard. 
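For concreteness, the batch form of the triplet loss above takes only a few lines of PyTorch; the margin value and the convention that triplets arrive as rows of (anchor, positive, negative) indices into the batch embeddings are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def triplet_loss(emb, triplets, margin=0.2):
    """Mean hinge over triplets (a, p, n) given as row indices into emb."""
    a, p, n = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    d_ap = F.pairwise_distance(emb[a], emb[p], p=2)   # anchor-positive distance
    d_an = F.pairwise_distance(emb[a], emb[n], p=2)   # anchor-negative distance
    return torch.clamp(margin + d_ap - d_an, min=0.0).mean()
```

PyTorch's built-in nn.TripletMarginLoss computes the same hinge when handed the three embedding tensors directly.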
A semi-hard triplet is a triplet where the distance of the anchor to the negative is larger than to the positive, but the difference is not larger than the margin parameter yet. This strategy is easy to optimize and helps avoid model collapse. After the tenth epoch, we switch to the hardest negative strategy. In this strategy, a set of triplets is formed by selecting the negative closest to the anchor for each positive-anchor pair in a batch. We found this strategy the most effective during tuning. §.§ Inference While the triplet loss defines a training procedure, we still need to define how to perform classification with the trained model. There are multiple approaches possible here, but we have opted for an approach similar to <cit.>. We take the mean embedding for every class, dubbed the centroid, and treat the distance of the centroid to new data points as the class-conditional score. We define the set of embeddings ℤ_k belonging to class k given some (training) dataset 𝕏 in Equation <ref>. ℤ_k = {f_θ(*x)|(*x, y) ∈𝕏 , y = k }. We can then use these embeddings to compute the centroid *c_k for class k using Equation <ref> below. *c_k = 1/|ℤ_k|∑_f_θ(*x) ∈ℤ_k f_θ(*x). Using centroids is an intuitive solution, as the centroid is the point that minimizes the distance to every point in ℤ_k. Given the definition in Equation <ref>, we define the video-conditional score of some stream *x given model f_θ with respect to video k in Equation <ref> below. s(*x; f_θ, k) = exp(-||*c_k - f_θ(*x)||_2)/∑_j=1^Kexp(-||*c_j - f_θ(*x)||_2). This corresponds to the softmax over the negative distances of f_θ(x) to all centroids. We perform prediction by taking the argmax of this score with respect to video k as defined in Equation <ref> below. prediction(*x; f_θ) = argmax_j ∈{1,…,K} s(*x; f_θ, j). We can also use the score function as a measure of confidence for the model’s prediction. A confidence measure is important when considering the OOD component of our problem. By applying a threshold to such a measure, we can determine whether a prediction is correct or not. We have defined this confidence measure in Equation <ref> below. confidence(*x; f_θ) = max_j∈{1, …, K} s(*x; f_θ, j). §.§ Outlier Leveraging Like the standard cross-entropy loss, the triplet loss is not generally trained for an open-world setting. A third party may only be interested in a handful of videos and network conditions but needs to handle them regardless. That is why we also introduce a generalisation (OL) of the triplet loss function that allows incorporating streams of OOD videos during training. The loss function ℒ_OL is similar to Equation <ref> but instead of sampling negatives from other classes, they are sampled from an extra tuning dataset. Intuitively, we now also maximize the distance between the OOD streams and the anchors by using triplet loss. We combine the losses in Equation <ref>. ℒ = ℒ_triplet + λℒ_OL. The idea behind using two separate loss functions is to keep the sampling process of video streams we wish to distinguish from each other (ℒ_triplet) separate from the sampling process of video streams we want to reject (ℒ_OL). This ensures that in every training step, the model is guaranteed to concurrently learn to be discriminative and robust. § DATA COLLECTION To test our method, we first gather data for our experiments. We opted to gather our own data as opposed to using datasets from prior work. We mainly do so as web technology moves quickly. 
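The centroid-based inference just described reduces to a few tensor operations; a minimal PyTorch sketch, assuming video labels are encoded as 0, ..., K-1, is given below. Thresholding the returned confidence implements the open-world rejection, and the outlier-leveraging term can reuse the triplet loss from the previous sketch with negatives drawn from the tuning streams and weight λ.

```python
import torch

def class_centroids(emb, labels):
    """Mean embedding per video, i.e. the centroids c_k."""
    K = int(labels.max()) + 1
    return torch.stack([emb[labels == k].mean(dim=0) for k in range(K)])

def scores(emb, C):
    """Softmax over negative Euclidean distances to all K centroids."""
    return torch.softmax(-torch.cdist(emb, C, p=2), dim=1)

def predict(emb, C):
    s = scores(emb, C)
    conf, pred = s.max(dim=1)     # confidence and predicted video index
    return pred, conf             # low conf flags an out-of-distribution stream
```

Adding a video at test time then amounts to appending one row to the centroid matrix, computed from a handful of its streams, which is the n-shot mechanism evaluated later.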
Results from datasets from years ago may no longer represent the current state of affairs. For example, Google has been increasingly pushing the QUIC standard, so the dataset we gathered using Google Chrome only contains QUIC streams. An added benefit is that it gives us full control of our experimental setup and thus allowed us to construct a large, and diverse video streaming dataset. For our data collection, we consider one of the simplest scenarios: a third party has access to the line and can observe all incoming and outgoing traffic. This scenario applies to ISPs and government agencies. This scenario is likely generalizable to other scenarios, e.g., when considering a side-channel attack <cit.> or when performing over-the-air captures <cit.>. To collect the data, we first gathered the video IDs we would like to observe using two approaches. The first approach consisted of scraping 20 videos from the YouTube trending page like <cit.>. The second approach entailed sampling videos pseudo-randomly from YouTube. We did so by taking a random string of size 4 and querying 50 videos using the YouTube API. For each of the collected videos, we gathered samples as follows. We opened a Selenium browser instance that opens the link to the video. We kept the browser open for 60 seconds and recorded the network traffic using tcpdump. All browser instances used AdBlock Plus to handle advertisements. We filtered out the video component by looking at the DNS requests made and then filtering on the IP address that corresponds to the video component. This is possible, as most DNS requests are not yet encrypted. In case DNS requests are not available, picking connections that transmit a lot of data over a large window of time is an effective alternative. Given a connection, we created our features as follows. We initialized a vector of size 240 where each element refers to a bucket with a timespan of 1/4 second in the total recording of 60 seconds. For every 1/4 second time window, we recorded the number of bytes that were transmitted during that time. After completion, we standardized each sample individually. Standardization can be helpful when dealing with different-quality video streams. We also found it to perform better than normalization. We repeated this procedure for both the incoming and outgoing packets, resulting in the final 2× 240 bivariate feature vector. The gathered datasets are as follows. * D_small: A dataset consisting of 20 different videos with 100 streams per video. This dataset was collected in a manner similar to <cit.>; we took 20 videos from the trending page and streamed all of them using a Chrome browser. We use this dataset to evaluate whether our method is performant in a setting known to be 'solved.' For every video, we captured 80 streams for training and 20 streams for evaluation. * D_large and D_firefox: Two larger datasets which comprise of 1087 different videos with 10 streams and 4 streams each. We gathered them using the pseudo-random approach described earlier. We recorded them using a Chrome and Firefox browser instance respectively. We intended to capture as many different videos as we could with a small number of samples per video to evaluate the scalability of our methods. We split the data in a balanced manner with the proportions 8/2 and 3/1 respectively. * D_tune and D_out: Two datasets comprising 1000 and 8000 different videos with 1 stream each. This dataset is comprised of different videos than in D_large and D_firefox. 
The former is used for the OL tuning and the latter for the OOD evaluation. The streaming process was done using a Chrome browser. § EXPERIMENTAL SETUP After defining the methods and data we use for our experiments, we discuss what experiments we have performed to evaluate these as well as the settings we have used to run our experiments. §.§ Architecture We have opted for a CNN with three blocks and a fully connected hidden layer. Each block consists of two 1D convolutional filters of size 7 followed by a ReLU activation function and batch normalization<cit.>. At the end of the block, max pooling is used to reduce the size of the feature map for the first two blocks with a global average pooling block used for the last block. The number of channels for the three blocks is 128, 256, and 512 respectively. A one-layer feed-forward neural network (FFNN) with a hidden layer size of 1024 is used as the classification head. A helpful feature of a CNN is its translation invariance. This helps our network with aligning sequences, which means that our method is not limited to classifying only the start of a video. §.§ Hyperparameter Selection To optimize our network, we used the Adam <cit.> optimizer with decoupled weight decay regularization <cit.>. We used this optimizer alongside a learning rate of 3× 10^-4 and a weight decay of 0.01. We found that the models are sufficiently regularised and that tuning does not result in drastically better models. A benefit of using default hyperparameters is that our results should be straightforward to reproduce. We note the hyperparameters for the models used to classify D_small and D_large in Table <ref>. §.§ Benchmark Models We evaluated our method empirically alongside two benchmark models. The first benchmark is a kNN approach applied to the input features. While similar to the approach in <cit.>, we found that their approach is not as effective as their original method when unable to filter out TLS retransmissions. We found that simply applying kNN to the input features yields surprisingly good results and thus used that approach. The kNN approach provides a lower bound for our method, as it has a trivial training time and is thus able to include new videos on the fly. We select the number of neighbors through 5-fold cross-validation. The optimal number of neighbors for the kNN approaches for all our experiments was 1. A second benchmark we include is the convolutional neural network approach proposed in <cit.>. As this is the most accurate approach we are aware of, it is an obvious benchmark. We maintain a similar number of parameters for this benchmark compared to our method to ensure a fair comparison. For the sampling strategy, we do not have to perform triplet mining but instead sample batches of size 128 at random. We refer to the cross-entropy model as CNN in later sections. We can compute the scores and prediction for the benchmark model by taking the max and argmax respectively of the final scores outputted by the CNN. §.§ Experiments To evaluate our experiments, we use three metrics. First, we use accuracy computed over the videos we wish to classify. This metric tells how well the model can distinguish videos of interest. Second, we compute the mAP (mean average precision) between correctly classified videos and videos from D_out. We pose this as a binary classification problem where we wish to detect the correctly classified videos. We use the confidence score function in equation <ref> to determine the ranking. 
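A minimal PyTorch sketch of the convolutional backbone described in the Architecture subsection is given below. Where the text leaves details open (padding, the exact placement of ReLU and batch normalization inside a block, and the output dimension of the head), the choices are assumptions; for the cross-entropy benchmark, the final layer would map to the number of videos instead of an embedding.

```python
import torch.nn as nn

def block(c_in, c_out, pool):
    layers = [nn.Conv1d(c_in, c_out, kernel_size=7, padding=3), nn.ReLU(), nn.BatchNorm1d(c_out),
              nn.Conv1d(c_out, c_out, kernel_size=7, padding=3), nn.ReLU(), nn.BatchNorm1d(c_out)]
    layers.append(nn.MaxPool1d(2) if pool else nn.AdaptiveAvgPool1d(1))
    return nn.Sequential(*layers)

backbone = nn.Sequential(            # input: (batch, 2, 240) byte-count series
    block(2, 128, pool=True),
    block(128, 256, pool=True),
    block(256, 512, pool=False),     # global average pooling in the last block
    nn.Flatten(),                    # -> (batch, 512)
    nn.Linear(512, 1024), nn.ReLU(), # one hidden layer of size 1024
    nn.Linear(1024, 64),             # 64-dimensional embedding (assumed size)
)
```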
Third, we compute the recall at 100% precision alongside the mAP. Both metrics together give an overview of how well the method performs in the presence of out-of-distribution data. We then performed four experiments to evaluate our four challenges. * Robustness. In these experiments, we evaluate how robust our models are when evaluated on D_small and D_out. We train all models and compare them using the metrics as described in this section. We also investigate the effect of OL on performance. * Generalisability. In these experiments, we evaluate how well our triplet-loss approach performs if we add videos that were not seen during training. * Scalability. In these experiments, we include the dataset D_large to evaluate whether the conclusions we found in experiments 1 and 2 apply in more challenging settings. * Transferability. In these experiments, we evaluate whether our models trained on Chrome can be used for Firefox data as well. § RESULTS AND DISCUSSION Having defined the setup we use to evaluate our methods, we discuss the results we gathered to answer our research question. §.§ Robustness We thus first evaluated the robustness of our methods compared to the kNN and CNN baselines described earlier. To do so, we evaluated our method on how well it classifies the streams in D_small, as well as whether it can distinguish D_small from D_out. Otherwise, we used the setup as described in the Experimental Setup and report the result of this evaluation in Table <ref>. First, we found that using kNN alone is sufficient to achieve an accuracy above 99%. This indicates, first of all, that kNN is sufficient for a small and clean dataset of encrypted video streams to achieve a high test accuracy. However, an mAP of 5% indicates that the kNN algorithm is unable to cope with new video streams; it is not robust. Based on this finding, we opted to experiment with the number of neighbors beyond the results of cross-validation. We found that increasing the number of neighbors can improve the robustness quite drastically. However, it also reduces the accuracy slightly. Consequently, to get an accurate and robust kNN algorithm, it would be necessary to tune using out-of-distribution data. However, both the CNN and triplet loss models do not require such tuning to achieve much better open-world scores. Both the CNN and triplet loss models respectively represent big steps in terms of open-world detection performance. The CNN model already achieved an mAP of 76%. Intuitively, the mAP is a form of summary statistic to measure the average precision over a set of thresholds based on the recall. Using a neural network seems to benefit the robustness of the model amidst new video streams, especially as it does not require out-of-distribution data for tuning. It was known that the CNN model is more effective than the kNN approach. Interestingly, the triplet loss model is quite a bit more robust than the CNN. The mAP of 98% indicates that it can retrieve most of the videos we want to classify without making errors. Concretely, the recall at 100% precision metric indicates that it can identify 59% correctly without making any errors. The triplet loss model has access to the same data as the CNN and, unlike the kNN model, has a similar number of parameters as the CNN. The triplet loss function seems to force the backbone to extract more information from the training data. It seems to get penalized much more for being uncertain as opposed to the cross-entropy loss used by the CNN benchmark. 
As a consequence of this stronger training signal, the triplet model is better able to recognize which video streams belong to one of the known videos and which do not. Using the triplet loss function thus results in more robust models in an open-world setting. §.§ Generalisability We have shown that, on the small dataset, our method is at least as accurate as previously known methods while being more robust. In theory, it also provides the same flexibility as kNN for including new videos on the fly. To test this, we randomly selected and left out 2 of the videos in D_small, and trained on the remaining 18 videos. Figure <ref> shows the impact of incorporating n shots of the left-out videos for varying values of n. Interestingly, adding more shots strictly improves the performance of the model despite not using any retraining steps. With access to the full 80 shots, the classification of the newly added videos is almost as accurate as that of the fully trained triplet loss model, and more accurate than kNN. This suggests that our model may be able to not only detect OOD data but classify it as well. §.§ Scalability The dataset D_small is fairly small, so we cannot yet say much about whether our results generalize to larger datasets. It can also be argued that the dataset is fairly simple, given that kNN is sufficient for a test accuracy of over 99%. That is why we gathered the larger dataset D_large described earlier. This dataset consists of many different videos but few streams per video, as opposed to a few videos with many streams per video as in D_small. It gives us a better idea of what an attacker would be able to achieve at scale. We used the larger neural network architectures described in the experimental setup for these experiments, as we quickly found that the small architectures were insufficient to properly learn the patterns in the data. We first evaluate the robustness again in Table <ref>. We found a few interesting things here. First, a significant gap opens up between the kNN approach and the neural network approaches. The best kNN model is unable to find a good class separation. On the other hand, while the neural network-based methods required more parameters to learn, they all achieved respectable accuracy scores of around 86% and were much more robust than the kNN approach. Again, we found that the triplet loss-based approach is more robust than the CNN. However, adding outlier leveraging to the triplet loss function did not make the model more robust and even made it slightly worse. The collection strategy of D_large was the same as that of D_tune, so outlier leveraging likely only helps when the extra data brings additional information about the out-of-distribution data to expect at inference. Another explanation for the gap might be that OL is primarily useful when the original dataset contains only a small number of distinct videos; in that situation, OL provides the model with the diversity it lacks in the original dataset. Given the disparity in accuracy between kNN and the triplet loss model on D_large, we performed the same experiment as in Section 6.2 and report the results in Figure <ref>. Again, we left out 10% of the videos and varied the number of shots we use. Interestingly, adding more data strictly improved the accuracy on the n-shot videos. If we use 8 shots to construct our centroids, we can achieve an accuracy as high as 81%.
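The few-shot inclusion of new videos described above amounts to averaging the embeddings of the available shots into a class centroid and then classifying (and scoring) streams by their distance to the nearest centroid. A minimal sketch of that inference step follows; the embedding function is the trained backbone, and the Euclidean distance and negative-distance confidence score used here are assumptions standing in for the confidence score function of equation <ref>.

import numpy as np

def build_centroids(embeddings_per_video):
    # embeddings_per_video: dict mapping video id -> array of shape (n_shots, dim).
    # Adding a new video only requires embedding a few shots and averaging them;
    # no retraining of the backbone is involved.
    return {vid: e.mean(axis=0) for vid, e in embeddings_per_video.items()}

def classify(stream_embedding, centroids):
    # Nearest-centroid prediction with a simple negative-distance confidence score
    # (a stand-in for the paper's confidence score function).
    ids = list(centroids)
    dists = np.array([np.linalg.norm(stream_embedding - centroids[v]) for v in ids])
    best = int(np.argmin(dists))
    return ids[best], -dists[best]

# Toy example: 18 training videos plus 2 videos added afterwards with 8 shots each.
rng = np.random.default_rng(0)
shots = {f"video_{i}": rng.normal(size=(8, 512)) for i in range(20)}
centroids = build_centroids(shots)
label, score = classify(rng.normal(size=512), centroids)
# A very negative score flags a stream as likely out-of-distribution.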
This 81% accuracy is significantly higher than that of the kNN approach, while requiring almost no compute time since we do not retrain the model. Our approach is thus an interesting option for quickly including new videos. An interesting question is how far this scales in minimal-data setups. In Figure <ref>, we vary how much data we leave out and use 1 shot for each video added at inference. What is interesting here is that the accuracy remains significantly higher than that of the kNN approach. When training with a leave-out fraction of 0.5 (a 544/543 training/inference split for video classes), the accuracy is only 2-3% lower than when training with a fraction of 0.1 (a 978/109 training/inference split for video classes). This implies that our model can potentially incorporate arbitrarily many new videos if we have a sufficiently strong base model, which makes our approach highly scalable. §.§ Transferability Having shown that our model can generalize to videos not seen during training, we wondered whether this extends to different settings as well. To investigate, we gathered the dataset D_firefox. This dataset consists of the same videos as D_large, but all streams were collected using the Firefox browser, which has a different implementation of the streaming controller. Thus, a model trained on Chrome is likely not directly transferable to Firefox. However, as the classification is done using centroids, we can swap the centroids of a model trained on Chrome for centroids generated over Firefox data. To evaluate this hypothesis, we started from the original models trained on D_large. For the 0-shot setting, we use the original eight Chrome streams from D_large to form the centroids at inference time; we dub this setting 0-shot as it uses no streams from the Firefox setting, and it serves as a baseline for how well a model trained on Chrome transfers to Firefox. For the 1-shot and 3-shot settings, we took our model trained on Chrome streams but evaluated it using Firefox centroids built from one or three streams per video, respectively. These settings demonstrate how well a model trained on Chrome streams transfers to classifying Firefox streams. Finally, we trained and evaluated our models directly on the Firefox data to serve as a benchmark. We report our results in Table <ref>. As expected, there is a transfer gap between Chrome and Firefox, and hence the setting without any shots performs poorly. What is surprising is the performance of the 1-shot and 3-shot transfer settings for the triplet loss model: they are both more accurate and more robust than a kNN approach trained entirely on Firefox data. This demonstrates that the model has learned patterns from the Chrome traffic that also generalize to Firefox streams. Furthermore, while the accuracy of 29% might be too low for a successful attack, the model trained directly on Firefox shows that this setting is itself challenging. Nonetheless, this indicates that it is possible to train a model on streams from Chrome and use it for Firefox data as well. Chrome to Firefox is just one transfer setting, however; our method is likely transferable to other settings as well. The implication of this is powerful: a sufficiently complex base model might be able to perform this side-channel attack under any arbitrary network setting. § CONCLUSION We set out to devise a side-channel attack for encrypted video streams that satisfies four properties: the attack should be robust, generalisable, scalable, and transferable.
Using a triplet loss-based approach alongside an extension we call outlier leveraging, we were able to address each of these issues. Our attack either beats the state of the art or introduces new capabilities. First, the triplet loss approach makes the fewest mistakes when dealing with out-of-distribution data. Second, it generalises well to new videos without requiring retraining. Third, these conclusions hold when the number of videos becomes large. Finally, a single model is sufficient to classify both Chrome and Firefox streams and is thus likely sufficient for arbitrary network conditions. Interesting future work would be to look further into the transferability our method permits and how to improve on it; defences against this attack would also be a valuable angle. The practical implication of our work is that large-scale monitoring of video streaming content over HTTPS is likely possible. Quick solutions to alleviate the vulnerability include adjusting MPEG-DASH encoding settings to make burst patterns less identifiable, maintaining different encodings on different servers, and regularly regenerating encodings for existing videos. In the longer term, we hope that our work helps provide urgency for embedding preventative measures in the HTTPS and/or MPEG-DASH protocols to guarantee the privacy of internet users.
http://arxiv.org/abs/2405.08855v1
20240514180000
Extreme Nuclear Transients Resulting from the Tidal Disruption of Intermediate Mass Stars
[ "Jason T. Hinkle", "Benjamin J. Shappee", "Katie Auchettl", "Christopher S. Kochanek", "Jack M. M. Neustadt", "Abigail Polin", "Jay Strader", "Thomas W. -S. Holoien", "Mark E. Huber", "Michael A. Tucker", "Christopher Ashall", "Thomas de Jaeger", "Dhvanil D. Desai", "Aaron Do", "Willem B. Hoogendam", "Anna V. Payne" ]
astro-ph.HE
[ "astro-ph.HE" ]
Extreme Nuclear Transients Resulting from the Tidal Disruption of Intermediate Mass Stars Jason T. Hinkle,^1∗ Benjamin J. Shappee,^1 Katie Auchettl,^2,3 Christopher S. Kochanek,^4,5 Jack M. M. Neustadt,^4 Abigail Polin,^6,7,8 Jay Strader,^9 Thomas W.-S. Holoien,^6 Mark E. Huber,^1 Michael A. Tucker,^4,5 Christopher Ashall,^10 Thomas de Jaeger,^11 Dhvanil D. Desai,^1 Aaron Do,^12 Willem B. Hoogendam,^1 Anna V. Payne^13 ^1Institute for Astronomy, University of Hawai`i, 2680 Woodlawn Drive, Honolulu, HI 96822, USA ^2School of Physics, The University of Melbourne, Parkville, VIC 3010, Australia ^3Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064, USA ^4Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH 43210, USA ^5Center for Cosmology and Astroparticle Physics, The Ohio State University, 191 W. Woodruff Avenue, Columbus, OH 43210, USA ^6The Observatories of the Carnegie Institution for Science, 813 Santa Barbara St., Pasadena, CA 91101, USA ^7TAPIR, Walter Burke Institute for Theoretical Physics, 350-17, Caltech, Pasadena, CA 91125, USA ^8Department of Physics and Astronomy, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907, USA ^9Center for Data Intensive and Time Domain Astronomy, Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA ^10Department of Physics, Virginia Tech, Blacksburg, VA 24061, USA ^11CNRS/IN2P3 (Sorbonne Université, Université Paris Cité), Laboratoire de Physique Nucléaire et de Hautes Énergies, 75005 Paris, France ^12Institute of Astronomy and Kavli Institute for Cosmology, Madingley Road, Cambridge, CB3 0HA, UK ^13Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA ^∗To whom correspondence should be addressed; E-mail: jhinkle6@hawaii.edu.
Modern transient surveys now routinely discover flares resulting from tidal disruption events (TDEs), which occur when stars, typically ∼0.5-2 M_⊙, are ripped apart after passing too close to a supermassive black hole. We present three examples of a new class of extreme nuclear transients (ENTs) that we interpret as the tidal disruption of intermediate mass (∼3-10 M_⊙) stars. Each is coincident with its host-galaxy nucleus and exhibits a smooth (<10% excess variability), luminous (2-7×10^45 erg s^-1), and long-lived (>150 days) flare. ENTs are extremely rare (≥1×10^-3 Gpc^-3 yr^-1) compared to any other known class of transients. They are at least twice as energetic (0.5-2.5× 10^53 erg) as any other known transient and these extreme energetics rule out stellar origins. Accretion onto supermassive black holes (SMBHs) powers many of the most luminous events in the universe. At a redshift of z ≈ 1, roughly 10% of SMBHs are actively accreting mass (e.g., <cit.>) and are observed as active galactic nuclei (AGNs). AGN light curves commonly show stochastic variability over a broad range of timescales from minutes to years (e.g., <cit.>), with some AGNs showing long-term photometric trends often accompanied by dramatic changes in their spectra (e.g., <cit.>). AGNs can also, albeit rarely, exhibit large, coherent flares <cit.>, although the physical mechanisms for powering them are unclear. With the recent growth of optical transient surveys, several classes of flares coincident with the nuclei of their host galaxies have been detected. These include tidal disruption events (TDEs; <cit.>), rapid turn-on AGNs <cit.>, and ambiguous nuclear transients (ANTs; <cit.>).
Accretion-powered transients share several key observational properties including bright ultraviolet (UV) emission <cit.>, strong emission lines <cit.>, and often X-ray emission <cit.>. The smooth flares of nuclear transients on several-month timescales <cit.> are distinct from the stochastic variability typical of AGNs. A TDE results from the disruption of a star as it passes too close to a SMBH (e.g., <cit.>). Most observed TDEs appear consistent with the disruption of a main-sequence star with a mass of ∼0.5 2 M_⊙ <cit.>, although there is significant scatter in these estimates <cit.>. Nevertheless, characteristics like enhanced N/C ratios <cit.> suggest a population of TDEs resulting from more massive stars. The host galaxies of TDEs typically do not host a strong AGN, although this is largely a result of selection effects. Recently, an increasing number of TDE candidates have been discovered for which their host galaxies exhibit signs of weak AGN activity (e.g., <cit.>). Owing to the establishment of long-baseline all-sky surveys <cit.>, we are now sensitive to rare and unexpected classes of transients. One such survey, Gaia Alerts, uses the Gaia spacecraft to monitor the transient sky at approximately monthly cadence with high precision. From the Gaia Alerts <cit.> transient stream we selected a sample of flares with three primary characteristics: (1) large amplitudes of ≥ 1 mag, (2) smooth light curves with <10% excess variability about the flare evolution, and (3) a long timescale of ≥ 1 year. Gaia is ideal for such a search as it has observed the full sky since late 2014 and, as a space-based mission, it typically has shorter seasonal breaks than ground-based surveys. Our search yielded two transients, Gaia16aaw (AT2016dbs) and Gaia18cdj (AT2018fbb). We combine these events with the recently published object ZTF20abrbeie (AT2021lwx; <cit.>) as a sample of events we will refer to as extreme nuclear transients (ENTs). The observed properties of the ENTs are reminiscent of extreme versions of ANTs, which are transients occurring in an AGN host galaxy. The light curves of the ENTs, shown in Figure <ref>, each exhibit a long (≥ 100 day) rise to a high peak luminosity. The ENTs decline slowly after peak, taking more than 150 days to fade to half of their peak luminosity. The ENTs detected prior to the flare show tentative signs of pre-flare variability, suggesting weak AGN activity within their host galaxies. After the UV/optical emission peaks, the ENTs show an IR excess, indicative of transient heating of circumnuclear dust and re-emission at longer wavelengths. Much like the ENT hosts, the host galaxies of AGNs typically have significant nuclear dust. The ENTs Gaia16aaw and Gaia18cdj are located within 0.8 kpc of their host-galaxy centers, confirming that they are nuclear transients. Their nuclear origin, long timescales, and high peak luminosities are immediately suggestive of a transient resulting from accretion onto a SMBH. We cannot measure the host offset of AT2021lwx as it has no detected host galaxy prior to the flare, at levels of >-21 absolute mag in the rest-frame blue bands. Nevertheless, the similar timescales and peak luminosities of AT2021lwx indicate that it is also powered by accretion, consistent with previous studies. The ENTs are located at a relatively high redshift of z ≈ 1, as measured from optical and near-IR follow-up spectra. 
Using the stellar population synthesis and AGN model of the Code Investigating GALaxy Emission (CIGALE; <cit.>), we find that the host galaxies of Gaia16aaw and Gaia18cdj each have a stellar mass of ≈ 9 × 10^10 M_⊙ and star-formation rates (SFRs) of ≈ 75 - 110 M_⊙ yr^-1. While undetected, the luminosity limits for the host galaxy of AT2021lwx combined with a conservative mass-to-light ratio of 3 yields a mass upper-limit of M ≤ 1 × 10^11 M_⊙. The upper limit on the [O II] emission means that the SFR for AT2021lwx is <4 M_⊙ yr^-1. From typical galaxy-SMBH scaling relations <cit.>, the stellar masses imply SMBH masses of 10^8.4 M_⊙ for Gaia16aaw and Gaia18cdj and an upper limit on the mass of < 10^8.5 M_⊙ for AT2021lwx, which are more massive SMBHs than in the majority of known nuclear transient hosts. The detected ENT hosts are more massive and display higher SFRs than the host galaxies of local nuclear transients such as TDEs and ANTs. Furthermore, the detected ENT host-galaxy masses are within the top few percent of stellar masses at z = 1, when the universe was half its current age. Thus, the inferred SMBH masses are similarly extreme at this redshift. In contrast, the prodigious SFRs of the hosts of Gaia16aaw and Gaia18cdj are only moderately high at z = 1 when the star formation density in the universe was a factor of 6 higher than today <cit.>. The rest-frame spectra of the ENTs, shown in Figure <ref>, exhibit blue spectra with broad lines from the Balmer series of hydrogen and singly-ionized magnesium (Mg II). These are similar to the comparison spectra of the luminous nuclear transients ASASSN-15lh <cit.>, AT2019brs <cit.>, ASASSN-17jz <cit.>, PS1-10adi <cit.>, ASASSN-18jd <cit.>, PS16dtm <cit.>, and AT2019dsg <cit.> also shown in Fig. <ref>. The persistent blue continua and broad lines are inconsistent with known classes of supernovae, but fully consistent with SMBH accretion. The Mg II and Hα emission of the ENTs is broad (∼5000 10000 km s^-1) and luminous ((0.5 5) × 10^43 erg s^-1), very similar to AGNs. While Mg II emission has not been seen for TDEs, the Hα emission is consistent with TDEs if the lines are equally as overluminous as the broadband emission. As the ENTs are likely powered by accretion, it is important to consider the presence of previous AGN activity. Through a combination of the CIGALE fits, WISE IR colors, narrow emission lines, and X-ray emission we find evidence of a strong AGN in Gaia16aaw. Gaia18cdj likely hosts a weak AGN, but does not host a strong quasar. The limits derived from the pre-flare properties of AT2021lwx are consistent with a weak AGN, and rule out a strong quasar. The fact that some ENT host galaxies do not host a strong AGN suggests that prior strong AGN activity is not a requirement to power an ENT. The ENT spectral energy distributions (SEDs) are well-fit by a blackbody model, which is consistent with the super-Eddington accretion expected from TDEs. In contrast, AGNs with broad lines typically exhibit power-law-like SEDs as a result of viewing the accretion disk directly. The resulting bolometric light curves are shown in Figure <ref>, along with the comparison objects from Fig. <ref>. The ENTs are extremely luminous, with peak luminosities from (2 7) × 10^45 erg s^-1. This is ∼1000 times more luminous than typical core-collapse supernovae (SNe), ∼100 times the average Type Ia SN peak luminosity, and ∼30 times more luminous than typical SLSNe-I. 
Only ASASSN-15lh, suggested to either be a superluminous supernovae (SLSNe; <cit.>) or TDE <cit.>, rivals these high peak luminosities. The blackbody properties of the ENTs are also consistent with some form of accretion onto a SMBH rather than an exotic class of SN. The temperatures of the ENTs are hot at ∼ 1.5 × 10^4 K and show little or very slow evolution during the flare. This is inconsistent with SLSNe, which quickly cool as the ejecta expands. However, this behavior is similar to TDEs and ANTs, with a remarkable agreement between the ANT and ENT blackbody temperatures. The large effective radii of the ENTs is consistent with SLSNe and some ANTs, although the decreasing blackbody radii in time is more typical of TDEs and ANTs. The light curve decay timescale of the ENTs is also far longer than most transients, which is consistent with TDEs occurring on massive SMBHs. The rest-frame durations for the flares to fade by half are (171 ± 15) days for Gaia16aaw, (155 ± 10) days for Gaia18cdj, and (210 ± 20) days for AT2021lwx, respectively. Figure <ref> shows the position of these ENTs in the parameter space of peak absolute magnitude and characteristic timescale. The ENTs stand out in this parameter space for being very luminous and long-lived. ASASSN-15lh, which has a similar peak luminosity to the ENTs, decays more quickly, with timescale of (57 ± 10) days. The position of the ENTs in the upper right corner of this space indicates a high total radiated energy. The radiated energies of the ENTs are (5.2 ± 0.2) × 10^52 erg for Gaia16aaw, (2.5 ± 0.5) × 10^53 erg for Gaia18cdj, and (2.2 ± 0.2) × 10^53 erg for AT2021lwx. These extreme radiated energies correspond to high accreted masses of 0.3 1.4 M_⊙ for a typical 10% accretion efficiency, far greater than typical TDEs and ANTs. The ENT flares are at least twice as energetic as the next most energetic known flares, PS1-10adi <cit.> and ASASSN-15lh <cit.>, and up to an order of magnitude higher in the cases of Gaia18cdj and AT2021lwx. While the estimates for accreted mass are extreme compared to local transients, they are fully consistent with the tidal disruption of an intermediate-mass (∼3-10 M_⊙) star. The dust properties of ENTs are very similar to other SMBH accretion-powered transients. To study the environment directly surrounding the SMBHs, we used NEOWISE data to probe the emission from hot dust as it reprocesses the intense UV/optical emission from the flare. By fitting the WISE IR SEDs as blackbodies, we find peak luminosities of ∼ (0.3 3) × 10^45 erg s^-1, temperatures of ∼ 1500 3000 K and radii of ∼ 0.05 0.15 pc, all consistent with hot dust in nuclear environments <cit.>. From the ratio of the peak IR luminosity to the peak UV/optical luminosity we estimate dust covering fractions of ∼ 0.2 0.4, which we confirm with models of the optical and IR light curves (see the supplementary text). These covering fractions are consistent with AGN dust covering fractions at similar SMBH masses as well as dust-obscured TDE candidates <cit.>. This indicates the presence of significant dense gas and dust near the SMBH, which likely supports the existence of AGN activity, whether weak or in the past, in each of the ENT hosts. Accretion-powered transients often show X-ray emission, as do two of the three ENTs in our sample. Gaia16aaw and AT2021lwx both show X-rays at levels of (0.3 1) × 10^45 erg s^-1 in the 0.3–10 keV band throughout the flare, similar to luminous AGNs <cit.>. Gaia18cdj is undetected in the X-rays at <4 × 10^44 erg s^-1. 
While significantly more luminous, the X-ray to UV/optical ratios of the ENTs are broadly similar to TDEs and ANTs as well as within the typical range of AGNs (see the supplementary text), again supporting an accretion-based origin for these events. The rate of luminous, long-lived, accretion-powered events like these ENTs can also be used to understand their potential physical origins. The high redshift and low observed number of ENTs is immediately a sign of a low intrinsic rate. From the 3 detected ENTs, their peak absolute magnitudes, and the survey parameters for Gaia and ZTF, we estimate a lower limit on the rate of ≳ 1 × 10^-3 Gpc^-3 yr^-1. This implies that ENTs are roughly ten thousand times less common than SLSNe and TDEs at z ≈ 1 <cit.>. With the observed properties of the ENT flares and their estimated rates, we consider potential physical models and plausible origins. First, we examine strong gravitational lensing, which can significantly magnify transient events (e.g., <cit.>). The strong constraints on the lack of a foreground lens galaxy, even when considering the effects of magnification bias <cit.>, and the high required magnifications (>10 for normal SNe) make this a remote possibility. Typical radioactively-powered SNe are ruled out based on the unphysically high (>300 M_⊙) ^56Ni masses that they would require. There are several luminous classes of SNe <cit.>, including those powered by magnetar spin-down and interactions with circumstellar material (CSM). A magnetar-powered event is ruled out as the most energetic ENT would require a neutron star spinning at breakup to be 5.5 M_⊙, even assuming 100% efficiency. This mass is well above the Tolman–Oppenheimer–Volkoff upper limit on the mass of a neutron star (e.g., <cit.>). CSM interactions are also ruled out as they predominantly produce narrow emission lines, which are not seen for all of the ENTs, and the CSM masses required are of order 1000 M_⊙. Thus, no stellar transient can be responsible for the ENTs. As there is evidence for AGN activity, albeit typically weak, in most of the ENT host galaxies, an AGN origin for the flares must be considered. From studies on quasar variability, a flare resulting from an extreme stochastic variability event is ruled out <cit.>. Another class of transients requiring interaction between an AGN disk wind and the broad line region clouds <cit.> is unlikely given the high required masses and the sub-Eddington pre-flare accretion rates. Finally, we find it unlikely that instabilities within an AGN disk cause the ENTs due to the similar accreted masses and timescales of the events despite a large range in pre-flare Eddington ratios of the AGNs within the ENT host galaxies. The most plausible physical scenario for these ENTs is the tidal disruption of an intermediate-mass star and the subsequent return of material onto the SMBH. The high masses of the SMBHs naturally provide long-duration flares consistent with the ENT timescales, as the flare timescale scales as M_BH^1/2. As roughly half of the disrupted stellar mass in a TDE leaves the system, the total radiated energies provide a lower limit on the stellar masses of ≳3 M_⊙. The timescales and luminosities for the disruption of ∼3 10 M_⊙ stars match the ENT observables well. Scaling from the known TDE rate and assuming that the TDE rate is proportional to the number of stars and their stellar lifetimes, we estimate the rate of 3 10 M_⊙ TDEs at z ≃ 1 to be ≈ 1.5 × 10^-2 Gpc^-3 yr^-1, in good agreement with the estimated ENT rate. 
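Two of the quantitative steps above can be checked with back-of-the-envelope numbers: the conversion of radiated energy into accreted (and hence minimum stellar) mass for a 10% radiative efficiency, and the scaling of the TDE rate by an IMF- and lifetime-weighted fraction of 3-10 M_⊙ stars. The short Python sketch below does both; the Salpeter slope, the M^-2.5 lifetime scaling capped at the age of the universe, and the fiducial rate normalization of 10^2 Gpc^-3 yr^-1 are illustrative assumptions, not the values adopted in the text.

import numpy as np
from scipy.integrate import quad

M_SUN_G, C_CM_S = 1.989e33, 2.998e10

# (1) Accreted mass from the radiated energy, E_rad = eta * M_acc * c^2 with eta = 0.1;
#     roughly half of the disrupted star escapes, so M_star >~ 2 * M_acc.
for name, e_rad in [("Gaia16aaw", 5.2e52), ("Gaia18cdj", 2.5e53), ("AT2021lwx", 2.2e53)]:
    m_acc = e_rad / (0.1 * C_CM_S**2) / M_SUN_G
    print(f"{name}: M_acc ~ {m_acc:.2f} Msun, M_star >~ {2.0 * m_acc:.1f} Msun")

# (2) Fraction of disruptions involving 3-10 Msun stars, weighting a Salpeter IMF
#     (dN/dm ~ m^-2.35) by main-sequence lifetime, capped at the age of the universe.
T_UNIVERSE_GYR = 13.8
lifetime_gyr = lambda m: min(10.0 * m**-2.5, T_UNIVERSE_GYR)  # ~10 Gyr at 1 Msun (assumed)
weight = lambda m: m**-2.35 * lifetime_gyr(m)

frac_3_10 = quad(weight, 3.0, 10.0)[0] / quad(weight, 0.1, 100.0)[0]
fiducial_tde_rate = 1e2                                       # Gpc^-3 yr^-1 (assumed)
print(f"3-10 Msun fraction of disruptions ~ {frac_3_10:.1e}")
print(f"implied 3-10 Msun TDE rate ~ {frac_3_10 * fiducial_tde_rate:.1e} Gpc^-3 yr^-1")

With these assumptions the first step recovers accreted masses of roughly 0.3-1.4 M_⊙ and the second a rate of order 10^-2 Gpc^-3 yr^-1, of the same order as the estimates quoted above.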
As the ENT rate is formally a lower limit, we note that top-heavy stellar initial mass functions <cit.> and lower disrupted stellar masses can both increase the expected rates. Given the natural explanation of smooth flares, the compatible timescale and luminosities, and several multi-wavelength similarities we propose that the ENTs are the product of the tidal disruption of intermediate-mass stars. These events represent the upper bound of accretion-powered transients to date. For analogs at higher redshift, these ENTs will be an unparalleled window into transient accretion in the early universe given their extreme luminosities. High redshift ENTs will simultaneously probe both the high-mass end of the stellar IMF and SMBH mass distribution in the early universe. Similar events will be visible to the Large Survey of Space and Time (LSST; <cit.>) on the Vera Rubin Observatory out to a redshift of z ∼ 2 - 3, although the rates may drop given the declining SMBH number density <cit.>. Future IR monitoring of the sky from surveys such as the Roman Space Telescope <cit.> will capture the redshifted rest-frame UV light from these events out to even higher redshifts of z ∼ 4 - 6. With already 3 well-studied examples from comparatively shallower surveys, ENTs are poised as an ideal beacon to guide our way towards a more complete understanding of the extremes of transient events in the universe. Science § ACKNOWLEDGEMENT We thank Christopher Storfer and John Tonry for their helpful discussions. We thank Phillip Wiseman for sharing the published spectra of AT2021lwx. Funding: J.T.H. and B.J.S. are supported by NASA grant 80NSSC23K1431. K.A. is supported by the Australian Research Council Discovery Early Career Researcher Award (DECRA) through project number DE230101069. C.S.K. is supported by NSF grants AST-1907570 and 2307385. C.A. acknowledges support by NASA grants JWSTGO-02114, JWST-GO-02122, JWST-GO-03726, JWSTGO-04436, and JWST-GO-04522. A.D. is supported by National Science Foundation grant AST-1911074. W.B.H. is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 2236415. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Author contributions: J.T.H. led the analysis and drafting the manuscript. B.J.S., C.S.K., and C.A. helped prepare the manuscript. K.A. reduced and analyzed the X-ray data. C.S.K. and J.M.M.N. conducted modeling of the IR lags. A.P., J.S., and T.W.-S.H. obtained follow-up spectroscopy of the sources. J.T.H, B.J.S., C.S.K., and J.S. contributed to the physical interpretation. J.T.H, M.E.H., M.A.T., C.A., T.dJ., D.D.D., A.D., W.B.H., and A.V.P. contributed to the SCAT vetting of sources. Competing interests: The authors declare no competing interests. This work presents results from the European Space Agency (ESA) space mission Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement (MLA). The Gaia mission website is https://www.cosmos.esa.int/gaia. The Gaia archive website is https://archives.esac.esa.int/gaia. This work is also based in part on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. 
ZTF is supported by the National Science Foundation under Grant No. AST-2034437 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, Deutsches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Livermore National Laboratories, and IN2P3, France. Operations are conducted by COO, IPAC, and UW. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. The CSS survey is funded by the National Aeronautics and Space Administration under Grant No. NNG05GF22G issued through the Science Mission Directorate Near-Earth Objects Observations Program. The CRTS survey is supported by the U.S. National Science Foundation under grants AST-0909182. This project used public archival data from the Dark Energy Survey (DES). Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana–Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Científico e Tecnológico and the Ministério da Ciência, Tecnologia e Inovação, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Enérgeticas, Medioambientales y Tecnológicas–Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenössische Technische Hochschule (ETH) Zürich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciències de l’Espai (IEEC/CSIC), the Institut de Física d’Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universität München and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, The Ohio State University, the OzDES Membership Consortium, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University. Based in part on observations at Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. This work uses data from Pan-STARRS. 
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. This work uses data from the University of Hawaii's ATLAS project, funded through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575, with contributions from the Queen's University Belfast, STScI, the South African Astronomical Observatory, and the Millennium Institute of Astrophysics, Chile. Based in part on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia e Inovações do Brasil (MCTI/LNA), the US National Science Foundation’s NOIRLab, the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU). This paper includes data gathered with the 6.5-meter Magellan Telescopes located at Las Campanas Observatory, Chile. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. We acknowledge the use of public data from the Swift data archive. This research has made use of data obtained from the Chandra Data Archive and the Chandra Source Catalog, and software provided by the Chandra X-ray Center (CXC) in the application packages CIAO and Sherpa. Based in part on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. This work is based in part on observations made by ASAS-SN, ATLAS, Pan-STARRS, and Keck. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summits of Haleakalā and Maunakea have always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from these mountains. This research was supported in part by grant NSF PHY-2309135 to the Kavli Institute for Theoretical Physics (KITP). Supplementary Materials This PDF file includes: Supplementary Text Figures S1 through S9 Table S1 through S3 References 55 through 198 § OBSERVATIONAL DATA *Sample Selection. We select our initial sample of extreme nuclear transients (ENTs) from the Gaia Alerts <cit.> transient stream. Our criteria for inclusion were designed to select smooth, luminous, and long-lived events. 
We had five selection criteria: (1) a flare of ≥ 1 mag over the pre-flare baseline, (2) an observed timescale of that flare of ≥ 1 year, (3) a flare that was smooth as defined by having a monotonic flare profile and no strong short-term variability (<10% excess variability) during the flare, (4) a flare with a peak luminosity of >10^45 erg s^-1, and (5) a source without radio and/or gamma-ray detections that would suggest a jetted AGN. This selection resulted in two sources: Gaia16aaw and Gaia18cdj. When implementing our search criteria we supplemented the Gaia photometry with archival photometry from the All-Sky Automated Survey for Supernovae (ASAS-SN; <cit.>), Asteroid Terrestrial-impact Last Alert System (ATLAS; <cit.>), the Catalina Real-Time Transient Survey (CRTS; <cit.>), and the Zwicky Transient Facility (ZTF; <cit.>). We also obtained optical spectra with the UH 2.2-m telescope and the Spectral Classification of Astronomical Transients (SCAT; <cit.>) to vet several other smooth nuclear transients which were ultimately rejected because their redshifts indicated lower peak luminosities not sufficient to meet criterion (4). We finally used the Fermi Large Area Telescope Catalog <cit.> to search for gamma-ray detections and Vizier <cit.> to search several radio catalogs across the sky. We additionally find that the known source ZTF20abrbeie (AT2021lwx) <cit.> meets the above criteria to be considered an ENT, although it was not triggered on by the Gaia Alerts team. We suspect that this may be due to a bright nearby star 16.0" away which is ≈5.2 mag brighter than ZTF20abrbeie at peak. Since ZTF20abrbeie passes our selection criteria, we include it in our sample of ENTs. Thus, the three transients we study in detail are Gaia16aaw, Gaia18cdj, and ZTF20abrbeie. Gaia16aaw, (α,δ) = (04:11:57.000, -42:05:30.80), was discovered on 2016 January 23.5 by the Gaia Alerts team <cit.>. The discovery was announced publicly on the Transient Name Server (TNS) and given the identification AT2016dbs[<https://www.wis-tns.org/object/2016dbs>] <cit.>. Gaia18cdj, (α,δ) = (02:09:48.140, -42:04:37.02), was discovered on 2018 August 12.1 by the Gaia Alerts team <cit.> and announced on TNS with the identification AT2018fbb[<https://www.wis-tns.org/object/2018fbb>] <cit.>. ZTF20abrbeie, (α,δ) = (21:13:48.408, +27:25:50.48), was discovered on 2021 April 13.5 by ZTF. It was announced to TNS with the identification AT2021lwx[<https://www.wis-tns.org/object/2021lwx>] <cit.>. Color images created using aplpy <cit.> from gri images taken by either the Dark Energy Survey (DES; for Gaia16aaw and Gaia18cdj) <cit.> or the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; for AT2021lwx) <cit.> are shown in Figure <ref>. As ZTF20abrbeie has had previous papers published on its evolution under the name AT2021lwx <cit.>, we will continue to use that name in this manuscript to avoid confusion. For the two Gaia sources, we elect to use their survey names in this manuscript. *Archival and Transient Photometry. We first searched for available archival photometry to constrain the evolution of these objects prior to the ENT flare. For Gaia16aaw and Gaia18cdj, we obtained V-band photometry from the CRTS. As the typical signal-to-noise ratio (S/N) per CRTS epoch was low, we binned these data in monthly bins to search for pre-flare variability. These data are shown in Figure <ref> as tan squares. The sources are weakly detected and show no significant evidence for strong variability prior to the flares. 
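The monthly binning of the low-S/N CRTS epochs (and the analogous monthly or multi-day stacking applied below to the ATLAS, Gaia, and ZTF photometry) amounts to a weighted mean per time bin. A minimal sketch is given below; the inverse-variance weighting is a standard choice and an assumption here, not necessarily the exact scheme used for the published light curves.

import numpy as np

def bin_light_curve(mjd, flux, flux_err, bin_days=30.0):
    # Inverse-variance-weighted mean flux in fixed-width time bins.
    edges = np.arange(mjd.min(), mjd.max() + bin_days, bin_days)
    idx = np.digitize(mjd, edges)
    rows = []
    for i in np.unique(idx):
        sel = idx == i
        w = 1.0 / flux_err[sel] ** 2
        rows.append((np.average(mjd[sel], weights=w),
                     np.average(flux[sel], weights=w),
                     1.0 / np.sqrt(w.sum())))        # uncertainty of the weighted mean
    return np.array(rows)                            # columns: MJD, flux, flux uncertainty

# Toy usage: a noisy, constant-flux light curve binned into ~monthly points.
rng = np.random.default_rng(1)
mjd = np.sort(rng.uniform(53500.0, 57000.0, size=400))
flux_err = np.full(mjd.size, 0.5)
flux = 1.0 + rng.normal(0.0, flux_err)
monthly = bin_light_curve(mjd, flux, flux_err)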
To constrain both the pre-flare characteristics of the objects and their behavior during the flare, we also obtained photometry from the Wide-field Infrared Survey Explorer (WISE; <cit.>). This includes data taken during both the AllWISE <cit.> and NEOWISE <cit.> portions of the WISE mission. We have focused on the W1 (3.4 μ m) and W2 (4.6 μ m) bands as they span the full lifetime of the WISE mission. Gaia16aaw and Gaia18cdj are well-detected in the NEOWISE single exposure catalog and we therefore construct their W1 and W2 light curves by binning the individual exposures within a given epoch. For AT2021lwx, there is no persistent source detected in the NEOWISE single exposure catalog, consistent with the apparent lack of a host galaxy in Pan-STARRS imaging <cit.>. We therefore performed aperture photometry on the NEOWISE images to obtain our WISE light curves. To avoid contamination by nearby, bright stars, we used a 4" radius aperture with an aperture correction estimated from the computed WISE curve of growth[https://wise2.ipac.caltech.edu/docs/release/allsky/expsup/sec4_4c.html]. The local background was estimated using a sigma-clipped median within an annulus around the target. Despite the small source aperture, the W1 band retained a low level of residual flux from the nearby, bright star. To mitigate this, we subtracted the mean pre-flare flux from the light curves and added the pre-flare scatter in quadrature with the estimated flux uncertainties for each epoch. The WISE light curves are shown in Figure <ref> as olive and brown diamonds for W1 and W2 respectively. Given the lack of a pre-flare detection, we only show NEOWISE data for AT2021lwx. DES <cit.> imaging covered the locations of Gaia16aaw and Gaia18cdj. These images span from late 2013 to late 2018 and cover these locations both prior to and during the ENT. We created DES light curves in the griz bands by performing aperture photometry with a 3" radius aperture, subtracting the local background estimated through a sigma-clipped median in an annulus centered on the source, and calibrating to nearby stars with catalog photometry from ATLAS Refcat2 <cit.>. We elected to use a relatively large aperture to ensure that the entire galaxy was contained within the aperture. The DES data are shown in Figure <ref> as circles, with the color corresponding to the filter. For each ENT we also obtained ATLAS light curves from their forced point-spread function photometry service. This yielded light curves in the `cyan' (c, 4200–6500 Å) and `orange' (o, 5600–8200 Å) filters <cit.>. To ensure more robust detections, we stacked the ATLAS data in monthly bins throughout the flares. The ATLAS photometry is plotted in Figure <ref> as cyan and orange pentagons. Each of the sources was observed by the Neil Gehrels Swift Gamma-ray Burst Mission (Swift; <cit.>) at least once during its evolution. These observations utilized both the UltraViolet and Optical Telescope (UVOT; <cit.>) and X-ray Telescope (XRT; <cit.>). We combined all UVOT images taken in a given filter per epoch using the HEASoft uvotimsum package and used the uvotsource package to extract source and background fluxes from these coadded images. For Gaia16aaw (PI: Hinkle) and Gaia18cdj (PI: Hinkle) we used the default UVOT aperture with a radius of 5" and background regions with radii of 50" and 40", respectively. For AT2021lwx (PI: Wang), there is a star of comparable brightness within the default 5" aperture and so we used a smaller 3" aperture to capture only the source flux. 
As AT2021lwx is relatively close to the Galactic plane and is therefore in a moderately crowded field, we used a background ellipse with an effective area of ≈27" chosen to avoid contamination from nearby stars. Finally, we acquired light curves from the discovering survey for each source. For Gaia16aaw and Gaia18cdj, this was G-band photometry from Gaia, obtained through the Gaia Alerts service. We estimated the uncertainties on this photometry following <cit.> and binned visits within 5 days for subsequent analyses. The Gaia light curves are shown in Figure <ref> as green hexagons. For AT2021lwx, we procured ZTF photometry in the g and r bands from the Alerce broker <cit.>. During the rise and through peak, we stacked the data 5-day bins, switching to monthly bins after the last seasonal break to better sample the decline. These data are shown in Figure <ref> as teal and red octagons. *Offset from Host-Galaxy Center. To constrain the offset of the ENTs from the center of their host galaxies, we aligned the images and then measured the offset between the positions before and during the transient. We elected to measure the relative position before and during transient emission rather than absolute positions to avoid uncertainty in the distortion terms that going to a full world coordinate solution would induce. We first used a modified version of the ISIS image subtraction package <cit.> interpolation function which matches sources identified by sextractor <cit.> to align the DES images at the location of Gaia16aaw and Gaia18cdj. We can not measure the offset for AT2021lwx as the host galaxy was not detected in any archival images. For the Gaia sources, we first retrieved the DES images from the NOIRlab image servers. Because the image cutouts are not aligned, we first did a rough alignment of all retrieved cutouts including all DES filters. We then removed image cutouts with too small an area to have a sufficient number of alignment stars, images where the source was near the edge, and images where the interpolation did not converge. We then trimmed the images to the intersecting area and re-interpolated the images. This left 20 images for Gaia16aaw and 8 images for Gaia18cdj. Next, we used photutils <cit.> to determine the centroid of the host galaxy before the transient and then again when each transient was near peak. For Gaia16aaw, the DES images that pass our cuts are from Dec 2013, Jan 2014, Jan 2016, Feb 2016, Nov 2016, Dec 2016, Dec 2017, and Jan 2018. The available Gaia light curve begins in Oct 2014 and it appears that the host+transient emission may already be brighter than the host alone because it was ∼ 0.3 mag brighter in 2014 than in 2023, indicating that the source was already on the rise. Thus we use the 8 DES images that passed our cuts from Dec 2013 and Jan 2014 for our host galaxy image without the transient flux. The light curve of Gaia16aaw peaks on 2016 March 28 and fades by only ∼ 0.3 mag by Dec 2016. We use the 9 DES images from 2016 to centroid the transient as these are dominated by the transient emission. We average the centroids from the host galaxy and transient epochs and use the standard deviation of these measurements as an estimate of the statistical error. Finally, we use 10 stars of comparable brightness to Gaia16aaw and centroid those stars in all our interpolated DES images. We then take the median of the standard deviation of each star's positions as an estimate of our systematic uncertainty. 
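The heart of this measurement is a centroid comparison between the pre-flare (host-only) and near-peak (transient-dominated) stacks of the aligned images. A schematic version using photutils is sketched below; the cutout size, the 2D-Gaussian centroid choice, and the synthetic test images are simplifications, and converting the pixel offset to physical units additionally requires the instrument plate scale and the angular scale at the source redshift. The actual measurement described above was performed on the full sets of aligned DES images.

import numpy as np
from photutils.centroids import centroid_2dg

def source_centroid(image, x0, y0, half=15):
    # Fit a 2D Gaussian centroid inside a small cutout around an initial guess.
    cut = image[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    cx, cy = centroid_2dg(cut - np.median(cut))
    return x0 - half + cx, y0 - half + cy

def offset_pixels(host_images, flare_images, x0, y0):
    # Mean centroid of the pre-flare stack versus the near-peak stack; the scatter of
    # the individual centroids gives a rough statistical uncertainty.
    host = np.array([source_centroid(im, x0, y0) for im in host_images])
    flare = np.array([source_centroid(im, x0, y0) for im in flare_images])
    delta = flare.mean(axis=0) - host.mean(axis=0)
    scatter = np.hypot(*host.std(axis=0)) + np.hypot(*flare.std(axis=0))
    return np.hypot(*delta), scatter                 # offset and rough error, in pixels

# Toy demonstration with synthetic Gaussian sources.
rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:64, 0:64]
def fake_image(xc, yc):
    return np.exp(-((xx - xc) ** 2 + (yy - yc) ** 2) / 8.0) + 0.01 * rng.normal(size=xx.shape)

host_stack = [fake_image(32.0, 32.0) for _ in range(5)]
flare_stack = [fake_image(32.3, 32.1) for _ in range(5)]
print(offset_pixels(host_stack, flare_stack, 32, 32))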
This yielded an offset from the host galaxy's center of 0.083" ± 0.065"_stat± 0.072"_sys which corresponds to a physical offset of 0.68 ± 0.80 kpc at the distance of Gaia16aaw. We then repeat this process for Gaia18cdj. Gaia18cdj has DES epochs from Oct 2013, Jan 2014, Oct 2016, Nov 2016, Nov 2018, and Dec 2018. The light curve of Gaia18cdj peaks on 2018 October 24. There is no detectable additional transient flux from Nov 2016 and earlier so we use the 5 DES images that pass our cuts during these times for the host galaxy and the 3 DES images in 2018 to centroid the transient position. We then follow a similar procedure to Gaia16aaw including 10 comparison stars. This yielded an offset from the host galaxy's center of 0.032" ± 0.048"_stat± 0.058"_sys which corresponds to a physical distance offset of 0.25 ± 0.60 kpc at the distance of Gaia18cdj. Thus the positions of both Gaia16aaw and Gaia18cdj are both consistent with the center of their host galaxies. We verified this result with full ISIS image subtraction using the DES g-band images before the transient to construct a transient-free reference image and then subtract it from images including the transient emission. We centroided on the reference image and the subtracted image and find that they agree to a similar precision. *Follow-up Spectroscopy. After identification of the two Gaia sources as ENTs, we obtained follow-up spectra. For Gaia16aaw, we obtained an optical spectrum on MJD 59210.1 (854 rest-frame days after peak) using the Inamori-Magellan Areal Camera and Spectrograph (IMACS; <cit.>) on the 6.5-m Magellan Baade telescope. For Gaia18cdj, we obtained an optical spectrum on MJD 59168.2 (388 rest-frame days after peak) using the Goodman High Throughput Spectrograph <cit.> on the Southern Astrophysical Research telescope. These spectra were reduced and calibrated with standard IRAF <cit.> procedures such as bias subtraction, flat-fielding, 1-D spectroscopic extraction, and wavelength calibration. For AT2021lwx, we obtained data for the first observer-frame optical spectra presented in <cit.> from the Keck Observatory Archive and reduced it using PypeIt <cit.>. The flux calibrations were initially performed using standard star spectra and then scaled to match concurrent Gaia G photometry for Gaia16aaw and Gaia18cdj and ZTF r-band photometry for AT2021lwx. These spectra are shown in Figure <ref>. We additionally obtained the follow-up optical spectra from <cit.> and <cit.> for AT2021lwx, and again used ZTF r-band photometry to refine the flux calibration. From these spectra, we can estimate the redshifts of the sources. For AT2021lwx, we confirm the redshift of z = 0.995 <cit.>. For Gaia18cdj there is a clear Mg II absorption doublet that places the source at z = 0.93747. The fact that there is Mg II emission at the same redshift confirms that this absorption feature is not a foreground absorber. The spectrum of Gaia16aaw does not exhibit any strong narrow emission or absorption features, and the redder portions of the spectrum are noisy due to night sky lines and the faint magnitude of the source at the time the spectrum was taken (G = 20.7 AB mag). However, there is a strong broad feature that we interpret as Mg II, placing the source at z = 1.03. Our identification of this feature as Mg II is supported by the broad Fe feature just blueward of the Mg II feature. Such a feature is seen in the spectra of both Gaia18cdj and Gaia16aaw. 
In addition to our optical spectra, we obtained a near-infrared (NIR) spectrum of Gaia18cdj with the Folded port InfraRed Echellette (FIRE; <cit.>) in Prism Mode. We reduced this spectrum using PypeIt <cit.>, wavelength calibrating our data using an arc lamp, flux calibrating the extracted spectrum with a nearby A0V star, and doing telluric calibration with the poly model within PypeIt. For AT2021lwx, we obtained the NIR spectrum presented in <cit.>. § HOST-GALAXY PROPERTIES *Stellar Mass and Star Formation Rate. The host galaxies of Gaia16aaw and Gaia18cdj are readily detected in archival surveys, so we model their properties. We used the Code Investigating GALaxy Emission (CIGALE; <cit.>) because it allows for simultaneous contributions from AGN activity and stellar emission. We fit the optical (griz) data from co-adding the pre-flare epochs of DES imaging, near-infrared (JK_s) data from the VISTA Hemisphere Survey <cit.>, and mid-infrared (W1W2) data from the AllWISE catalog <cit.>. We used a delayed star formation history, the <cit.> stellar population models, a Salpeter initial mass function <cit.>, a <cit.> extinction law with R_V = 3.1, and the SKIRTOR AGN model <cit.>. From our CIGALE fits, we find that the host galaxy of Gaia16aaw has a stellar mass of M_* = (9.5 ± 4.7) × 10^10 M_⊙, an age of (3.4 ± 1.2) Gyr, and a star formation rate of SFR = (110 ± 42) M_⊙ yr^-1. For the host of Gaia18cdj the values are M_* = (9.4 ± 1.8) × 10^10 M_⊙, an age of (2.0 ± 0.8) Gyr, and a SFR = (74 ± 23) M_⊙ yr^-1. We used synthetic photometry computed from the best-fitting host-galaxy models to subtract the host-galaxy fluxes from photometry as needed to isolate the transient flux. We further examine the SFRs of these ENT host galaxies using narrow emission lines and standard scaling relations. Gaia16aaw shows an emission feature near [O II], although there is significant contamination from night sky lines. Assuming that this line is indeed [O II], we derive a flux of 6.1 × 10^-17 erg s^-1 cm^-2. At the distance of Gaia16aaw and correcting for the A_V = 0.92 mag extinction from the CIGALE fits, this corresponds to a luminosity of 1.3 × 10^42 erg s^-1. From typical scaling relations <cit.>, such a luminosity implies a SFR of ∼ 10 20 M_⊙ yr^-1, roughly 2σ below the estimate from the CIGALE fits. Gaia18cdj does not show an [O II] emission line, so we calculated an upper-limit on the [O II] luminosity following the procedure of <cit.>. We assumed a line width of 300 km s^-1 and obtained 3σ flux limits as F(3σ) = 3C_λΔ I √(W_lineΔ X) where C_λ is the continuum flux at wavelength λ, Δ I is the RMS scatter about the normalized continuum, W_line is the width of the line profile, and Δ X is the pixel scale of the spectrum. Correcting for the extinction from the CIGALE fits, this yields a luminosity upper-limit of <4 × 10^41 erg s^-1, implying an SFR of <3 6 M_⊙ yr^-1 <cit.>. While the SFRs estimated from spectroscopic scaling relations are low relative to the estimates from CIGALE, we note (1) that these lines are in regions of the spectra with lower S/N and (2) that the SFRs estimated from the rest-frame UV absolute magnitudes of M ≈ -22.5 mag <cit.> are consistent with the CIGALE estimates. The host galaxy of AT2021lwx is undetected even in deep pre-outburst optical and IR imaging, precluding any detailed analysis of its properties. However, previous works have estimated upper limits on the stellar mass and star formation rate (SFR). 
From the Pan-STARRS upper limits and assuming a mass-to-light ratio of 2, <cit.> compute a mass limit of <7 × 10^10 M_⊙. Since there is considerable scatter in the mass-to-light ratio for the bluer rest-frame bands being probed by the Pan-STARRS data, up to a M/L ∼ 3 <cit.>, we will instead adopt a slightly more permissive limit of M_* <1.1 × 10^11 M_⊙ here. From limits on the [O II] luminosity in their follow-up spectra, <cit.> calculate an upper limit on the SFR of <3.7 M_⊙ yr^-1, which we confirm using deep follow-up spectra. The stellar masses and SFRs from CIGALE for the hosts of Gaia16aaw and Gaia18cdj and the estimated limits for AT2021lwx are shown in Figure <ref>. Along with these three ENTs, we show host properties for a sample of tidal disruption events <cit.> in blue and a sample of ambiguous nuclear transients <cit.> in gold. We additionally show two broader galaxy samples in gray, one local sample from SDSS <cit.> and the MPA-JHU catalog <cit.> and a sample of galaxies at z ≈ 1 from Cosmic Evolution Survey 2020 data release (COSMOS; <cit.>). We have also indicated the SFRs expected for the star-forming main sequence at z = 1 <cit.> with the dashed line. In addition to the individual source host properties, we show kernel density estimates (KDE) computed using scipy.stats.gaussian_kde and Scott's Rule to model the underlying distributions. The stellar masses and SFRs of the ENT hosts are generally higher than any of the hosts in comparison samples. As compared to the TDE sample, the stellar masses are more than an order of magnitude larger than the typical TDE host and the SFRs are several orders of magnitude higher. When compared to the ANTs, the difference is less stark in mass, although the SFRs are still much higher. Finally, as compared to the local galaxy population, the host masses are consistent with the massive end of the blue sequence, but with SFRs nearly two orders of magnitude higher than the expected SFR for this mass. However, this is expected as these galaxies reside at z ≈ 1. When compared to the location of the star-forming main sequence at this redshift as probed by the COSMOS data, the difference is less dramatic. In particular, when compared to the expected SFR <cit.> given the star-forming main sequence at the redshift and stellar mass of Gaia16aaw and Gaia18cdj, they lie 0.45 and 0.36 dex above the relation respectively. As compared to the typical dispersion about the star-forming main sequence of 0.2 - 0.5 dex <cit.>, these hosts appear high, but consistent with, a typical massive star-forming galaxy at z ≈ 1. The masses of the hosts for Gaia16aaw and Gaia18cdj are quite high for z = 1, far more massive than the typical galaxy at that redshift. Using the scaling relationship between stellar mass and SMBH mass <cit.>, with a typical scatter of ∼ 0.4 dex, we can estimate the central black hole masses. For the hosts of Gaia16aaw and Gaia18cdj, which have similar stellar masses, we find a M_BH∼ 10^8.4 M_⊙. Taking the limit on stellar mass for AT2021lwx, we find M_BH < 10^8.5 M_⊙, which is consistent with previous estimates <cit.>. *Presence of AGN Activity. Our CIGALE fits also allow us to examine the contribution of AGN activity to the pre-flare emission of the Gaia sources. For Gaia16aaw, the fits prefer an AGN luminosity of 4.6 × 10^45 erg s^-1 as compared to a stellar output of 3.3 × 10^45 erg s^-1. This suggests that Gaia16aaw hosts a relatively strong AGN, comparable to the combined stellar luminosity. 
For Gaia18cdj the AGN luminosity is 2.4 × 10^45 erg s^-1 and the stellar luminosity is 2.6 × 10^45 erg s^-1, consistent with a slightly weaker AGN relative to the stellar output. The fractional errors on the AGN luminosity estimates are much higher (≈ 50-100%) than the ≈ 20-25% fractional uncertainties for the stellar luminosity. We can also assess the presence of AGN activity through the WISE W1 - W2 color and associated selection criteria <cit.>. Gaia16aaw has a color of W1 - W2 = 1.03 ± 0.05 Vega mag. This is redder than the ∼ 0.8 mag threshold for local galaxies and the 0.33 mag color expected for star-forming galaxies at this redshift. However, the position of AGNs within this color space changes with redshift. For the AGN template of <cit.> at z = 1.03, the W1 - W2 color is 1.41 Vega mag, suggesting that while the host of Gaia16aaw likely hosts an AGN, it may not be dominating the MIR emission. The W1 - W2 color of Gaia18cdj is 0.49 ± 0.10 Vega mag, bluer than both the local threshold and the expected AGN color of 1.39 mag at the redshift of Gaia18cdj. It is redder than the 0.26 mag color for a typical star-forming galaxy at this redshift, supporting the likelihood that the host of Gaia18cdj hosts a relatively weak AGN. From the observed NIR (rest-frame optical) spectrum of Gaia18cdj, we measured the flux of the narrow [O III] λ5007 line by fitting it as a Gaussian and estimating errors through Monte Carlo resampling. We find a flux of (1.8 ± 0.3) × 10^-17 erg s^-1 cm^-2, with a full-width at half-maximum (FWHM) of 800 ± 150 km s^-1, and an equivalent width (EW) of (4.2 ± 0.8) Å. At the distance of Gaia18cdj, this flux corresponds to a luminosity of 8.1 × 10^40 erg s^-1. The luminosity and EW of the [O III] line are modest compared to quasars but consistent with Seyferts (e.g., <cit.>). Additionally, the high line width supports an AGN origin, although the line is only marginally resolved in the R ∼ 500 FIRE spectrum. While the host of AT2021lwx is undetected and therefore we cannot use CIGALE or typical color selections to constrain pre-flare AGN activity, the archival photometry and follow-up spectra can be used to place constraints. AT2021lwx is undetected in Pan-STARRS with g > 23.3 mag <cit.>. If we assume that the corresponding rest-frame UV emission (λ_rest≈ 2400Å) is solely tracing light from an AGN, this places a limit on the AGN luminosity of λ L_λ < 10^44 erg s^-1, ruling out a quasar. Additionally, the spectra in <cit.> and <cit.> show no signs of strong narrow emission lines from AGN activity. Using Eq. (1), we computed an upper limit on the [O III] λ5007 luminosity assuming a line width of 1000 km s^-1. This yields a 3σ upper-limit of <3.4 × 10^41 erg s^-1 which also rules out a luminous quasar but is consistent with a Seyfert (e.g., <cit.>). Finally, we can use archival X-ray data to probe pre-flare AGN activity in these galaxies. From ROSAT All-Sky Survey (RASS) data we calculate count rates for the host galaxies of our ENTs and convert them to 0.3-10 keV fluxes assuming a photon index of Γ = 2 and the Galactic foreground column density <cit.>. Gaia16aaw is detected in ROSAT, with a 0.3-10 keV luminosity of (2.3 ± 0.6) × 10^45 erg s^-1, significantly higher than the ∼ 5 × 10^41 erg s^-1 expected from the high SFR <cit.>, thus confirming the presence of an AGN. Gaia18cdj is undetected, with a 3σ upper-limit of <2.6 × 10^45 erg s^-1. Gaia18cdj is also undetected in the later Swift XRT coverage, providing a deeper limit of <3.6 × 10^44 erg s^-1. 
This further rules out a strong quasar, but this limit is consistent with a weaker AGN. For AT2021lwx, the limit from RASS (using Γ = 0.6; see the X-ray emission section of the supplementary text) is <1.1 × 10^46 erg s^-1, which does not constrain the pre-flare AGN activity. § CONTEXT WITHIN THE TRANSIENT LANDSCAPE With our optical and IR light curves of Gaia16aaw, Gaia18cdj, and AT2021lwx, we can compare these ENTs to other classes of transient events. In particular, the parameter space of peak absolute magnitude and the characteristic timescale of a transient often separates distinct source classes as shown in Figure <ref>. To place the ENTs on this diagram, we first computed the peak magnitudes by fitting a smooth spline to the best-sampled optical light curve. For Gaia16aaw and Gaia18cdj this was the Gaia G light curve and for AT2021lwx this was the ZTF r light curve. These filters have similar effective wavelengths and the sources lie at comparable redshifts, making this a reasonable comparison. Our uncertainties were estimated by taking the standard deviation of the peak magnitudes from 10,000 Monte Carlo iterations. Gaia16aaw peaked at an apparent magnitude of m_G = 19.37 ± 0.01 on MJD = 57476 ± 13. Gaia18cdj peaked at an apparent magnitude of m_G = 18.21 ± 0.01 on MJD = 58416 ± 9. AT2021lwx peaked at an apparent magnitude of m_r = 18.05 ± 0.03 on MJD = 59321 ± 15. The corresponding peak absolute magnitudes, accounting for foreground extinction and applying a flat K-correction of -2.5log_10(1+z), are M_G = -24.11 ± 0.01, M_G = -25.06 ± 0.01, and M_r = -25.66 ± 0.03 for Gaia16aaw, Gaia18cdj, and AT2021lwx respectively. The uncertainties are only statistical errors from the Monte Carlo procedure. We also show the source ASASSN-15lh as a comparison object given its status as a similarly overluminous transient <cit.>. From the same procedure, applied to the ASAS-SN V-band light curve, we estimate a peak absolute magnitude of M_V = -23.18 ± 0.06. Next, we calculated the characteristic timescale of the ENTs. We again fit a smooth spline to the optical light curves and define the characteristic timescale as the rest-frame time difference between the time of peak and when the source had faded to half of the peak flux. We estimated the uncertainty through Monte Carlo resampling of the flare and added this in quadrature with the uncertainty on the peak time to compute the total uncertainty. The characteristic timescales are (171 ± 15) days, (155 ± 10) days, and (210 ± 20) days for Gaia16aaw, Gaia18cdj, and AT2021lwx, respectively. For reference, the next most luminous transient, ASASSN-15lh, has a characteristic timescale estimated in the same manner of (57 ± 10) days. Figure <ref> compares the ENTs to a number of different types of transients. In blue we show various classes of supernovae, ranging from the fast and faint Calcium-Rich transients (e.g., <cit.>) to the superluminous supernovae (SLSNe; <cit.>). The green boxes show transients related to stellar mergers and/or mass transfer (e.g., <cit.>). In red shades, we show transients powered by accretion onto SMBHs, including tidal disruption events <cit.>. We have also estimated the absolute magnitude and characteristic timescale range for the growing class of ambiguous nuclear transients <cit.>. While their connection to SMBH accretion is likely, we have elected to present them in a light shade of red to indicate the uncertainty regarding their physical origin. 
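A minimal version of the peak-magnitude and characteristic-timescale measurement described above might look like the following; the spline class, smoothing settings, and grid spacing are illustrative choices rather than the exact ones used in the analysis.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline


def peak_and_halving_time(mjd, mag, mag_err, z, n_mc=10000, seed=0):
    """Peak epoch/magnitude and rest-frame time to fade to half the peak flux,
    from a smoothing spline with Monte Carlo resampling of the photometry."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(mjd.min(), mjd.max(), 2000)
    t_pk, m_pk, tau = [], [], []
    for _ in range(n_mc):
        spl = UnivariateSpline(mjd, mag + rng.normal(0.0, mag_err),
                               w=1.0 / mag_err, k=3)
        model = spl(grid)
        i = np.argmin(model)                       # brightest point in mag
        half = model[i] + 2.5 * np.log10(2.0)      # 0.753 mag fainter = half flux
        later = grid[i:][model[i:] >= half]
        t_pk.append(grid[i])
        m_pk.append(model[i])
        if later.size:
            tau.append((later[0] - grid[i]) / (1.0 + z))   # rest-frame days
    return (np.mean(t_pk), np.std(t_pk), np.mean(m_pk), np.std(m_pk),
            np.mean(tau), np.std(tau))


def peak_absolute_magnitude(m_peak, dist_mod, A_foreground, z):
    """M = m - DM - A_fg - K, with the flat K-correction K = -2.5 log10(1+z)
    adopted above."""
    k_corr = -2.5 * np.log10(1.0 + z)
    return m_peak - dist_mod - A_foreground - k_corr
```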
It is clear that the ENTs studied here lie at higher peak luminosities than any other source class and are among the longest-lived. The only other transients that rival the long characteristic timescales are the core-collapse supernovae, which are ∼ 7 mag fainter, and some ANTs, which are still several magnitudes fainter at peak. The ranges shown for the various classes of transients are broadly based on <cit.>. Given the large increase in discoveries since that work, we have augmented the regions for certain classes, such as TDEs and SLSNe, to reflect the full range of properties better. Nevertheless, it is clear that these ENTs reside in a region currently unoccupied by other transient classes. There is also a class of AGN flares discovered by CRTS <cit.> that bears some resemblance to the ENTs. Many of the CRTS AGN flares are luminous and long-lived, but they typically are not as smooth, with many showing short-term variability or long-timescale re-brightening episodes. Even for the apparently smooth flares, the low S/N per CRTS epoch can easily mask variability, unlike the comparatively higher S/N Gaia and ZTF data available for the ENTs in our sample. Since we cannot confirm their smooth nature, which is one of our ENT selection criteria, we do not consider them in the remainder of the manuscript. As our sample of ENTs clusters in this parameter space rather tightly, we draw a red box in Figure <ref> as a proposed selection method for similar events. To cleanly separate the ENTs from other classes of luminous transients, we propose a threshold in peak absolute magnitude of M ≤ -24. While a separation in terms of the characteristic timescales for these events is less clear, we propose a threshold of τ≥ 125 days, which is 20% lower than for our shortest timescale ENT. Any transient meeting these criteria would emit an energy comparable to our ENTs and far more than any normal class of supernova or accretion-powered transient. § PROPERTIES OF THE FLARES *Spectral Energy Distribution. Using the well-sampled multi-wavelength light curves from DES (griz; for Gaia16aaw and Gaia18cdj) and ZTF (gr; for AT2021lwx), we can study the time evolution of the spectral energy distributions (SEDs) of these ENTs. As each of these ENTs is at a redshift of z ≈ 1, the observed griz bands correspond to the rest-frame near-UV and blue, with wavelengths of ∼ 2400Å, ∼ 3210Å, ∼ 3910Å, and ∼ 4580Å respectively. In Figure <ref>, we show the light curves from these surveys along with the corresponding color evolution. As the cadence in each band is not identical, we have interpolated the time series and calculated colors at the times corresponding to the bluer filter of the pair. In this figure, we have not removed any host-galaxy flux, so that we can directly compare the source color both prior to and during the flare. For Gaia16aaw, the source is moderately red in the first DES epoch and becomes significantly bluer by peak emission. After peak, Gaia16aaw becomes redder again, approaching the colors of the earliest DES epoch. In the case of Gaia18cdj, the pre-flare emission from the host is red, both in comparison to the first epoch of Gaia16aaw and to the near-peak DES epoch of Gaia18cdj. The redder color in quiescence is yet again suggestive of a weaker AGN component than Gaia16aaw. Near peak, the color of Gaia18cdj is similar to the other two sources, if not slightly bluer. 
For AT2021lwx, the ZTF photometry spans the whole flare, showing a largely monotonic change from blue to red colors as the flare progresses. The g -r color of AT2021lwx at peak is redder than either of the other two sources near their peak. We can go beyond just the colors of the sources and estimate their properties through blackbody fits to their time-evolving SEDs. Using Markov Chain Monte Carlo (MCMC) methods and a forward-modeling approach, we fit the available epochs of host-subtracted and foreground extinction-corrected multi-band photometry as a blackbody to obtain the bolometric luminosity, temperature, and effective radius for the ENTs. For Gaia16aaw, we fit three epochs of DES data, for Gaia18cdj we fit two epochs of DES data and one epoch of Swift photometry, and for AT2021lwx we fit six epochs of Swift data. For the DES data, we added a 2% uncertainty in quadrature with the photometric uncertainties before fitting. These fits are given in Table <ref>. We find that a blackbody model adequately describes the observed emission, with a median reduced χ^2 of 2.3, 0.8, and 0.7 for Gaia16aaw, Gaia18cdj, and AT2021lwx respectively. In Figure <ref> we show the results for the ENT luminosity, radius, and temperature compared to a sample of well-observed TDEs <cit.>, SLSNe-I from <cit.>, and ANTs <cit.>. In terms of luminosity, the ENTs are clearly more luminous and longer-lived, as expected given their position in Fig. <ref>. Their decline slopes are flatter than all of the TDEs and SLSNe, but similar to some of the ANTs. Two transients, ASASSN-15lh (either a SLSN or TDE; <cit.>) and AT2019brs (an ANT; <cit.>) have similar peak luminosities, although the initial decline rate for ASASSN-15lh is significantly steeper and thus the integrated energy is lower. The effective radii of our ENTs are significantly larger than the TDEs, but consistent with the SLSNe and ANT comparison samples. In terms of the radius evolution, the ENTs appear to show a slow decline in radius post-peak. This is consistent with the TDE and ANT evolution, but not with SLSNe whose effective radii increase with time. The effective temperatures of the ENTs lie between those of the SLSNe and TDE samples and are strikingly similar to the ANTs. The temperature of Gaia16aaw slowly cools after peak whereas Gaia18cdj slightly increases during the flare. Such behavior is more consistent with either TDEs or ANTs than the cool temperatures of SLSNe that drop dramatically after peak as the ejecta expand and cool (e.g., <cit.>). For each of our sources, we also created a bolometric light curve by scaling the host-subtracted Gaia G (for Gaia16aaw and Gaia18cdj) and ZTF r (for AT2021lwx) light curves to match the bolometric luminosity estimated from the blackbody fits. For epochs between blackbody fits we linearly interpolated the bolometric correction. For data outside of this range we used a flat bolometric correction corresponding to either the first or last fitted blackbody epoch. This is similar to what we have done for previous events <cit.> and gives a peak luminosity for AT2021lwx consistent with previous estimates <cit.>. Figure <ref> compares the ENT bolometric light curves to luminous nuclear transients of various classes. As in Fig. <ref> the ENTs are generally the most luminous transients, although ASASSN-15lh is more luminous than Gaia16aaw at peak. It is clear that the ENTs decay more slowly than the majority of other transients. 
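The published SED fits use MCMC forward modeling through the filter bandpasses; the sketch below shows the basic idea with a simple least-squares blackbody fit to effective-wavelength flux densities, with redshift factors treated only schematically.

```python
import numpy as np
from scipy.optimize import curve_fit

h, k_B, c, sigma_sb = 6.626e-27, 1.381e-16, 2.998e10, 5.670e-5   # cgs


def bb_flux_nu(nu_rest, T, R, d_L, z):
    """Observed flux density (erg/s/cm^2/Hz) from a blackbody photosphere of
    temperature T (K) and radius R (cm) at luminosity distance d_L (cm)."""
    B_nu = 2.0 * h * nu_rest**3 / c**2 / np.expm1(h * nu_rest / (k_B * T))
    return (1.0 + z) * np.pi * B_nu * (R / d_L) ** 2


def fit_blackbody(nu_rest, f_nu, f_err, d_L, z, p0=(1.5e4, 1e15)):
    """Least-squares blackbody fit returning T, R and L = 4 pi R^2 sigma T^4."""
    model = lambda nu, T, R: bb_flux_nu(nu, T, R, d_L, z)
    (T, R), pcov = curve_fit(model, nu_rest, f_nu, p0=p0, sigma=f_err,
                             absolute_sigma=True, maxfev=20000)
    L_bol = 4.0 * np.pi * R**2 * sigma_sb * T**4
    return T, R, L_bol, pcov
```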
Finally, we used the bolometric light curves to quantify the smoothness of the ENT flares. We used a Savitzky–Golay filter <cit.> with a window of 100 rest-frame days and a cubic polynomial to compute the long-term trend. We then normalized the bolometric light curves by this long-term trend and computed the RMS scatter as the characteristic fractional variability of the flare about the overall flare profile. This was 12% for Gaia16aaw, 9% for Gaia18cdj, and 6% for AT2021lwx. After subtracting the median fractional uncertainty of the bolometric light curve in quadrature (e.g., <cit.>), Gaia16aaw has an excess variability of 6% and both Gaia18cdj and AT2021lwx are consistent with noise. These confirm that the ENT flares are smooth compared to typical quasars (e.g., <cit.>). *Radiated Energy. From the bolometric light curves, we can calculate a lower limit on the radiated energy through a trapezoidal integral in time. Using numpy.trapz, we estimate radiated energies of >4.9 × 10^52 erg for Gaia16aaw, >1.5 × 10^53 erg for Gaia18cdj, and >1.9 × 10^53 erg for AT2021lwx. This estimate for AT2021lwx is consistent with earlier estimates <cit.>, especially when considering the longer temporal baseline in this study. These lower limits are higher than those for any other known optical transients. We also estimated the total radiated energy by fitting a Gaussian to the early-time rise and an exponential decline to the late-time decline. This allowed us to smoothly extrapolate to times without observational constraints using rise and decay slopes motivated by the existing data. We integrated the Gaussian fit before the first epoch of data and the exponential decline after the last epoch and added these energies to the energy computed by directly integrating the bolometric light curve. Given the high S/N data for these ENTs, the statistical uncertainties on the emitted energy from the bolometric light curve are small. Therefore, we conservatively estimate the uncertainty on the energies as half of the difference between the directly-integrated energy and the total energy from adding the fits to unobserved portions of the transient evolution. The total radiated energy for Gaia16aaw was (5.2 ± 0.2) × 10^52 erg, for Gaia18cdj it was (2.5 ± 0.5) × 10^53 erg, and for AT2021lwx the energy was (2.2 ± 0.2) × 10^53 erg. Unsurprisingly, the estimated total energies are not significantly higher than the directly-integrated energies since each of the ENTs has been observed late into their evolution. Assuming these flares are powered by accretion with an efficiency of 10%, the energies correspond to accreted masses of ≈ 0.3 1.4 M_⊙. These energies are significantly higher than other known transients. Following the same procedure, we estimate a total emitted energy of (2.4 ± 0.1) × 10^52 erg for ASASSN-15lh, less than half that of Gaia16aaw, already the least energetic of the three flares studied here. The energetic transient PS1-10adi <cit.>, with a total energy of (2.3 ± 0.5) × 10^52 erg also lies below these flares. These flares are roughly an order of magnitude more energetic than typical well-observed SLSNe (e.g., <cit.>), TDEs (e.g., <cit.>), and ANTs (e.g., <cit.>) and at least several times more energetic than the most energetic examples of each class. *Dust Covering Fraction. Each of the ENTs in our sample has good coverage of a flare in WISE data, allowing us to estimate the properties of dust in the nuclei of the host galaxies. 
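Before turning to the dust properties, here is a compact sketch of the two light-curve diagnostics just described, the trapezoidal radiated-energy lower limit and the Savitzky-Golay smoothness metric; the window handling is simplified relative to the actual analysis.

```python
import numpy as np
from scipy.signal import savgol_filter


def radiated_energy_lower_limit(t_rest_days, L_bol):
    """Trapezoidal integral of the bolometric light curve (days -> seconds)."""
    return np.trapz(L_bol, t_rest_days * 86400.0)


def fractional_variability(t_rest_days, L_bol, window_days=100.0, polyorder=3):
    """RMS scatter about a Savitzky-Golay long-term trend (smoothness metric)."""
    dt = np.median(np.diff(t_rest_days))
    window = max(int(round(window_days / dt)) | 1, polyorder + 2)  # odd length
    trend = savgol_filter(L_bol, window_length=window, polyorder=polyorder)
    return np.std(L_bol / trend - 1.0)
```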
At the redshift of these sources, the WISE W1 and W2 bandpasses observed during NEOWISE roughly correspond to the rest-frame H and K bandpasses. These data probe hot dust emission and can constrain the dust covering fraction for these host galaxies. We follow a similar procedure as <cit.>. Briefly, we subtract the pre-flare emission from the W1 and W2 light curves and fit the host-subtracted and extinction-corrected WISE photometry as a blackbody. From the luminosity evolution in the IR and UV/optical, we fit for the peak luminosity in each band. The dust covering fraction (f_c) is estimated as the ratio of the peak IR luminosity to the peak UV/optical luminosity (e.g., <cit.>). Our blackbody fits to the NEOWISE data yield dust temperatures of ∼ 1500 3000 K and effective radii of ∼ 0.05 0.15 pc with the temperatures decreasing and the radii increasing in time, consistent with previous studies of hot dust in galactic nuclei <cit.>. From the luminosity ratios, we find only a lower-limit of f_c ≥ 0.2 for Gaia16aaw, as the peak of the IR luminosity is not observed. For Gaia18cdj we find f_c = 0.22 ± 0.07 and for AT2021lwx we find f_c = 0.42 ± 0.12, where the uncertainties are estimated from Monte Carlo resampling. These covering fractions are shown in Figure <ref>, where they are compared to previously estimated covering fractions for TDEs <cit.> and ANTs <cit.>. The ENT dust covering fractions are much higher than for optically-selected TDEs, but very similar to both ANTs and the minimum covering fraction estimates for the IR-selected TDEs <cit.>. The ENT covering fractions are also remarkably consistent with the typical covering fraction of AGNs with M_BH∼ 10^8-8.5 (e.g., <cit.>). The nuclei of the ENT hosts are clearly dusty and likely harbor an AGN-like “dusty torus”, unsurprising given the existing evidence for AGN activity in the host galaxies of Gaia16aaw and Gaia18cdj. Anisotropies in the dust distribution can cause the ratio of IR to UV/optical luminosity to underestimate low covering factors and overestimate high covering factors. While the emitting geometry of these ENTs is unknown, we apply the corrections of <cit.> to examine their effects. Using Table 1 of <cit.> and assuming an aligned disk and torus with τ_9.7 μ m = 3, their minimum computed optical depth, we find covering fractions of f_c ≥ 0.42, f_c = 0.44, and f_c = 0.58 for Gaia16aaw, Gaia18cdj, and AT2021lwx respectively. While these corrections increase the covering fraction estimates for our ENTs, their position relative to the TDE and ANT samples as well as the typical AGN trend in Fig. <ref> is not dramatically different. Finally, we used JAVELIN <cit.> to compute temporal lags between the optical and IR flares, employing a top hat smoothing function. For AT2021lwx, as the source is undetected pre-flare, we added artificial optical measurements at the time of the NEOWISE observations with zero flux and uncertainties equal to the first optical flux uncertainty. This was necessary to avoid significantly negative fluxes in the model optical light curve at early times. These fits are shown in Figure <ref>. We find rest-frame lags of ≈265 days for Gaia16aaw, ≈150 days for Gaia18cdj, and ≈70 days for AT2021lwx, where these are the weighted average of the lags between the W1 and W2 bands and the optical band. The W2 lags are generally longer than the W1 lags as we would expect since the W2 emission will include contributions from cooler, more distant dust. 
The width of the smoothing top hats were roughly twice the mean lag, which is the lag distribution of a spherical shell. This suggests that the dust subtends a large solid angle around the transient. We can alternatively estimate the dust covering fractions as f_c ≈ (L_IRΔ t)/(τ_dust E_flare) where L_IR is the peak IR luminosity of the dust echo, Δ t is the temporal lag, τ is the dust optical depth, and E_flare is the emitted energy of the transient. Assuming τ_dust = 0.25, we find dust covering fractions of ∼0.7 for Gaia16aaw, ∼0.2 for Gaia18cdj, and ∼0.3 for AT2021lwx, each in good agreement with the estimates from the luminosity ratios. While it is not exact and depends on the dust composition in detail, the fraction of the observed optical light that consists of photons scattered by the dust should be comparable to the absorbed fraction. Hence, another argument that the overall fraction of the absorbed light must be modest is that if the mean optical depth of the dust producing the IR echoes is significant, scattered photons from the peak UV/optical emission would overproduce the tail of the light curve. As it is, it is likely that some fraction of the tail is from photons scattered off the dust. This should be a general consideration for models of mid-IR dust echoes from nuclear transients (e.g., <cit.>), although the detailed modeling of the dust radiative transfer is beyond the scope of this paper. *Emission Lines. All of the ENTs in our sample show emission lines. In our spectra of Gaia16aaw and Gaia18cdj, the most prominent line is the broad Mg II line. AT2021lwx exhibits a number of strong and relatively narrow emission lines, but also broader lines including Mg II <cit.>. As this line is common among the ENTs, we fit it as a single component Gaussian to determine the line width and luminosity. The Mg II line and the fits are shown in Figure <ref>. It is noteworthy that all three ENTs have Mg II emission, as this line is conspicuously absent from all near-UV spectra of TDEs to date (e.g., <cit.>) but ubiquitous among AGNs (e.g., <cit.>), although this may just be evidence of a pre-existing AGN-like gas reservoir. In Fig. <ref>, the ANT AT2019brs also shows weak Mg II emission but ASASSN-15lh, the other object with the requisite wavelength coverage, does not. Each of the nuclear transients in Fig. <ref> except ASASSN-15lh show broad Hα emission. The fitted FHWMs for Gaia16aaw, Gaia18cdj, and AT2021lwx are 1.0 × 10^4 km s^-1, 5.8 × 10^3 km s^-1, and 1.2 × 10^4 km s^-1. These are high, but consistent with the broadest end of the line-width distribution for AGNs across a range of luminosities (e.g., <cit.>). Our fits give integrated fluxes of 3.2 × 10^-15 erg s^-1 cm ^-2, 1.1 × 10^-15 erg s^-1 cm ^-2, and 1.4 × 10^-15 erg s^-1 cm ^-2 respectively for Gaia16aaw, Gaia18cdj, and AT2021lwx. Compared to the continuum, these fluxes correspond to EWs of 97 Å, 11 Å, and 13Å. The combination of FWHM and EW for Gaia16aaw is typical of an AGN, but the EWs for Gaia18cdj and AT2021lwx are smaller than normal given their broad FWHMs <cit.>. The line luminosities of 1.8 × 10^43 erg s^-1, 5.2 × 10^42 erg s^-1, and 7.3 × 10^42 erg s^-1 are typical of luminous AGNs <cit.>. Among our sample of ENTs, Gaia16aaw is the most spectroscopically similar to typical AGNs, while Gaia18cdj and AT2021lwx exhibit behavior unusual for a normal AGN. This may be a result of the later phase at which the spectrum of Gaia16aaw was taken. 
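A schematic version of the single-component Gaussian line measurements used here (for Mg II and the narrow lines) is sketched below; the linear continuum model and the resampling details are illustrative assumptions rather than the exact fitting setup.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5


def line_model(lam, amp, mu, sigma, c0, c1):
    """Single Gaussian emission line on a linear continuum."""
    return amp * np.exp(-0.5 * ((lam - mu) / sigma) ** 2) + c0 + c1 * (lam - mu)


def measure_line(lam, flux, err, p0, z=0.0, n_mc=1000, seed=1):
    """Line flux, FWHM (km/s) and rest-frame EW from a Gaussian fit, with
    uncertainties from Monte Carlo resampling of the spectrum."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_mc):
        popt, _ = curve_fit(line_model, lam, flux + rng.normal(0.0, err),
                            p0=p0, maxfev=10000)
        amp, mu, sigma, c0, _ = popt
        f_line = amp * np.abs(sigma) * np.sqrt(2.0 * np.pi)
        fwhm_kms = 2.3548 * np.abs(sigma) / mu * C_KMS
        ew_rest = f_line / c0 / (1.0 + z)   # EW relative to the local continuum
        samples.append((f_line, fwhm_kms, ew_rest))
    samples = np.array(samples)
    return samples.mean(axis=0), samples.std(axis=0)
```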
Gaia18cdj and AT2021lwx also exhibit broad Hα emission lines in their follow-up spectra. We similarly fit these broad features as a single component Gaussian. Both Gaia18cdj and AT2021lwx have an Hα FWHM of ∼ 5000-6000 km s^-1, typical of broad-line AGNs (e.g., <cit.>) and consistent with the lower end of the FWHM distribution for TDEs <cit.>. The Hα luminosities are ∼ 5 × 10^42 erg s^-1 for Gaia18cdj and ∼ 5 × 10^43 erg s^-1 for AT2021lwx, consistent with typical AGNs <cit.>. These Hα luminosities are 1 to 2 orders of magnitude more luminous than typical TDEs <cit.>, but the ENTs are similarly bolometrically overluminous, so this does not preclude a TDE origin. *X-ray Emission. In addition to the pre-flare X-ray measurements from ROSAT presented previously, each of the ENTs in our sample has X-ray data from Swift XRT. Additionally, AT2021lwx has follow-up X-ray observations from XMM-Newton <cit.> and Chandra <cit.>. Gaia16aaw is detected in Swift XRT at a 0.3-10 keV luminosity of (1.4 ± 0.3) × 10^45 erg s^-1, slightly fainter but ultimately consistent with the pre-flare X-ray luminosity measured by ROSAT. Gaia18cdj is undetected in the XRT data, with a 3σ upper-limit of <3.6 × 10^44 erg s^-1. The luminosity estimates for Gaia16aaw and Gaia18cdj assume an absorbed power-law model with Γ = 2 and the Galactic N_H column density to convert the XRT count rates to fluxes. As the XMM-Newton and Chandra spectra for AT2021lwx had low count rates, we fit them simultaneously with an absorbed power-law model. This fit yields a column density of N_H = 1.3 × 10^21 cm^-2 and a hard photon index of Γ = 0.6, indicating the formation of a corona. We use this spectral shape to convert the count rates for AT2021lwx into fluxes. Throughout the flare, AT2021lwx shows X-ray emission at levels of a few times 10^44 erg s^-1. The X-ray emission of AT2021lwx becomes harder as the source fades, consistent with AGN-like X-ray emission (e.g., <cit.>). The source X-ray luminosities are provided in Table <ref>. We can also use the ratio of optical to X-ray emission to place these ENTs in context with other accretion-powered transients. We interpolated the UV/optical bolometric light curve to the time of the X-ray observations and compare the three ENTs to a sample of TDEs and ANTs well-observed in both the X-ray and UV/optical. The comparison TDEs are ASASSN-14li <cit.>, ASASSN-15oi <cit.>, ASASSN-18ul <cit.>, ASASSN-19dj <cit.>, and AT2019dsg (ZTF19aapreis; <cit.>). The ANTs are ASASSN-17jz <cit.>, ASASSN-18jd <cit.>, ASASSN-18el <cit.>, AT2019pev (ZTF19abvgxrq; <cit.>), and ASASSN-20hx <cit.>. We interpolate these events in the same manner and remove upper limits. The results are shown in Figure <ref>. Each of the ENTs is overluminous in both X-ray and UV/optical emission as compared to the TDE and ANT samples. This is especially true when considering that the first X-ray constraints for the ENTs come at a rest-frame phase of ∼ 850, ∼ 400, and ∼ 300 days after peak for Gaia16aaw, Gaia18cdj, and AT2021lwx, respectively, when the emission has likely faded. Gaia16aaw in particular exhibits an X-ray to UV/optical luminosity ratio of ≈ 20 during the epoch of its X-ray observation, significantly higher than the other ENTs and any of the well-observed TDEs or ANTs in the comparison sample. 
However, the X-ray luminosity of Gaia16aaw as measured by Swift/XRT is consistent given the uncertainties with the pre-flare X-ray luminosity measured by ROSAT, so the high X-ray to UV/optical ratio may be the result of consistent X-ray fluxes and a declining UV/optical light curve. Assuming the X-ray emission is constant in time, the ratio at the UV/optical peak would be 0.7, more consistent with the other transients in Fig. <ref>. Gaia18cdj is fully consistent with the range of ratios seen for other transients. This remains true for any X-ray luminosity within roughly 2 orders of magnitude of the upper limit. AT2021lwx exhibits a ratio of ≈ 0.3, similar to the majority of the TDE population and consistent with the distribution of the ANTs. All of the ENT ratios are broadly similar to the ratios seen for typical AGNs (e.g., <cit.>). Combining the X-ray and UV/optical measurements, we can also estimate the peak Eddington ratios of the ENTs as ≈ 0.11 for Gaia16aaw, ≈ 0.16 for Gaia18cdj, and ≥ 0.2 for AT2021lwx. These Eddington ratios are similar to TDEs (e.g., <cit.>), strong AGNs (e.g., <cit.>), and many ANTs (e.g,, <cit.>). § RATE ESTIMATE We can attempt to constrain the physical mechanism powering these ENTs through an estimate of their rates as compared to the rates of other classes of events. We calculate the rates following <cit.> as R = ∑_i=1^N R_i = ∑_i=1^N1/t_survey,i / (1 + z_i)1/V_max, i f_loss where t_survey,i is the time span of the survey detecting the ith transient, corrected into the rest-frame by (1 + z_i). V_max, i is the maximum co-moving volume in which that transient would be detected by the survey. The completeness factor f_loss is taken to be the fraction of the sky observed by the survey since these transients exist for much longer than a single observing season. Finally, N is the total number of observed transients. We list the assumed t_survey, limiting magnitude, and f_loss for each source of data in Table <ref>. Given a typical decay rate of ≈ 0.1 mag month^-1, we require that the source reaches a peak of 0.3 mag brighter than the limiting magnitude to count as a detection. This gives at least 3 months over which the source is above the limiting magnitude, sufficient for detection even in lower cadence surveys such as Gaia. We estimated a rate uncertainty by dividing the estimated rate by the number of sources contributing and re-scaling using the 1σ Poisson confidence intervals computed by <cit.> for that number of events. This yields a rate of 1.0^+1.0_-0.6× 10^-3 Gpc^-1 yr^-1 based on the three observed ENTs Gaia16aaw, Gaia18cdj, and AT2021lwx. Gaia did not trigger on AT2021lwx which means that our search of Gaia for ENTs is not complete and that f_loss should be smaller. Since we are unable to quantify the completeness based on the internal survey operations, we will treat this rate estimate as a lower limit. Regardless of the specific methodology, the estimated rate of these events is far below the local rates of luminous transients like TDEs (≈100-500 Gpc^-1 yr^-1; <cit.>) and SLSNe-I (≈10-100 Gpc^-1 yr^-1; <cit.>) and are significantly lower than SNe Ia (≈2 × 10^4 Gpc^-1 yr^-1; e.g., <cit.>). However, a more fair comparison to make is between the rates of these events at z ∼ 1. The rate of SLSNe at this redshift grows to ≈ 50-170 Gpc^-1 yr^-1 <cit.> and the TDE rate is expected to drop by a factor of roughly 6 <cit.> due to the declining SMBH number density at higher redshifts. 
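A schematic implementation of the rate estimate above is sketched below. The Planck18 cosmology and the root-finding bounds are assumptions; the 0.3 mag detection margin and the flat K-correction follow the criteria described earlier, and the completeness factor f_loss is applied as written in the rate expression.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18
from scipy.optimize import brentq


def z_max(M_peak, m_limit, margin=0.3):
    """Largest redshift at which a source of peak absolute magnitude M_peak
    stays `margin` mag brighter than the survey limiting magnitude."""
    def excess(z):
        mu = Planck18.distmod(z).value
        return (M_peak + mu - 2.5 * np.log10(1.0 + z)) - (m_limit - margin)
    return brentq(excess, 1e-3, 8.0)


def volumetric_rate(events, f_loss):
    """Sum over events of 1 / [t_survey/(1+z)] / V_max, scaled by f_loss.
    `events` holds dicts with keys z, M_peak, m_limit, t_survey_yr."""
    total = 0.0
    for ev in events:
        zmax = z_max(ev["M_peak"], ev["m_limit"])
        v_max = Planck18.comoving_volume(zmax).to(u.Gpc**3).value
        t_rest = ev["t_survey_yr"] / (1.0 + ev["z"])
        total += 1.0 / (t_rest * v_max)
    return total * f_loss
```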
Thus, even at z ∼ 1, the ENTs are at least a factor of several thousand rarer than typical SLSNe or TDEs. We additionally re-estimate the rate for ASASSN-15lh-like events given the longer ASAS-SN baseline without any additional detections. This yields a rate roughly 40 times higher than the estimated ENT rate. § EXAMINING POTENTIAL PHYSICAL ORIGINS FOR THE FLARES *Gravitational Lensing. First, we consider if the extreme luminosities inferred for the ENTs may in fact be due to gravitational lensing. While high redshift supernova have been observed to be magnified by factors of 10 or more (e.g., <cit.>), these high magnifications require an intervening massive galaxy or galaxy cluster. The photometry of the ENT host-galaxies presented in Fig. <ref> shows no evidence for foreground lens galaxies or clusters. Similarly, the spectra for Gaia16aaw and Gaia18cdj show no signs of foreground galaxies and none of the apparent lower-redshift absorption doublets seen in the spectrum of AT2021lwx are well-matched to Mg II or other typical absorption lines. The lensing cross-section scales as σ_v^4, where σ_v is the velocity dispersion of the lens galaxy (e.g., <cit.>). Since we find no evidence for a massive foreground galaxy, any potential lens galaxy must be low mass, which greatly decreases the likelihood of lensing. Nevertheless, we need to consider the so-called magnification bias <cit.>, which increases the probability of observing a lensed intrinsically-faint source. As Gaia16aaw is detected at a flux of 4 × 10^-13 erg s^-1 cm^-2 in pre-flare ROSAT coverage, we will assume that the number counts of the ENT host galaxies can be reasonably well-described by power-law distribution of dN/dF ∼ F^-2.5, as for similarly bright AGNs <cit.>. Such a slope corresponds to a factor of ≈ 10 in terms of magnification bias <cit.>. This bias allows a galaxy with ∼60% of the velocity dispersion, or 15% of the mass <cit.>, to produce the same lensing probability as a calculation without accounting for magnification bias. Regardless, as even the most luminous normal supernovae require a magnification of >10 and given our strong constraints on the lack of massive foreground systems, we find a lensing explanation unlikely. *Luminous Supernovae. Most supernovae are powered by the radioactive decay of ^56Ni into ^56Co and later the decay of ^56Co into stable ^56Fe. If we assume that the ENTs in our sample are powered by ^56Ni decay, we can use the scaling between nickel mass and energy production <cit.> to estimate the initial mass of ^56Ni. Even for our least energetic event, Gaia16aaw, the nickel mass required is 280 M_⊙. This quickly increases to more than 1000 M_⊙ for Gaia18cdj and AT2021lwx. Such an explanation is clearly unphysical. There are classes of supernovae that are not solely powered by radioactive decay. In particular, the Type I and Type II superluminous supernovae <cit.> have peak luminosities higher than can be explained by radioactive decay. In the case of SLSNe I, a plausible explanation is the spin-down of a magnetar <cit.>. The rotational energy of a typical magnetar, with a mass of 1.4 M_⊙ and a spin period of 1 ms is 3 × 10^52 erg, which is insufficient to power any of the ENTs in our sample. Even if we assume a magnetar with the mass of the most massive known neutron star, PSR J0952-0607 at 2.35 _⊙ <cit.>, and the maximum neutron star spin period of 0.9 ms <cit.>, the maximum rotational energy is ≈ 7 × 10^52 erg. 
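As a quick order-of-magnitude check on the magnetar energy budget quoted above, the following snippet evaluates E = (1/2) I Ω² for fiducial neutron-star parameters; the radius and moment-of-inertia prefactor are assumptions, so the result only reproduces the quoted numbers to within tens of percent.

```python
import numpy as np

M_SUN = 1.989e33   # g


def magnetar_spin_energy(M_ns=1.4 * M_SUN, R_ns=1.2e6, P=1e-3, i_factor=0.35):
    """Rotational energy E = (1/2) I (2 pi / P)^2 with I ~ i_factor * M R^2."""
    I = i_factor * M_ns * R_ns**2
    return 0.5 * I * (2.0 * np.pi / P) ** 2


E_typical = magnetar_spin_energy()                             # ~3e52 erg
E_extreme = magnetar_spin_energy(M_ns=2.35 * M_SUN, P=0.9e-3)  # ~6e52 erg
# Both fall short of the >1e53 erg radiated by Gaia18cdj and AT2021lwx.
```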
This is well below the energies of Gaia18cdj and AT2021lwx and would require an unrealistically high conversion efficiency to UV/optical emission for Gaia16aaw. Thus, magnetars cannot power the ENTs. For SLSNe II, a large amount of kinetic energy in the supernova ejecta is converted to light when it shocks with the surrounding circumstellar medium (CSM). Following <cit.>, we can estimate the mass required to power the observed luminosity of these ENTs. The relevant physical parameters are the peak luminosity, rise time, and photospheric velocity. Given the range of observed luminosities and rise times and assuming a typical velocity of 6000 km s^-1 we require over 1000 M_⊙ of CSM to power the ENTs. The CSM mass can be lowered to ∼ 100 M_⊙ for a velocity of ∼ 20000 km s^-1, but this velocity is significantly higher than typical SLSNe II <cit.> and the ENT spectra show no signs of substantial velocity offsets in their spectra. Thus, we also rule out an individual stellar origin for ENTs. *AGN Flares. With strong gravitational lensing and various classes of SNe ruled out, we arrive at accretion onto a black hole as the most likely explanation for these ENTs. This is not particularly surprising as the ENTs are located in their host nuclei. More importantly, accretion is the most efficient known method of converting large amounts of mass into energy in an astrophysical system. Each of the ENT hosts shows some evidence for hosting an AGN or AGN-like behavior in the flare evolution itself. Therefore, in addition to a TDE origin for the flares, we must consider whether or not AGN activity can produce these transients. We first consider if these flares can be the extreme end of typical AGN stochastic variability. Extrapolating from Figure 3 of <cit.>, we estimate the fraction of AGNs undergoing variability at levels similar to the ENT flares on similarly long timescales. For the case of Gaia16aaw, with a Δ m = -2 mag flare, the fraction of AGNs with a similarly large brightening on long timescales is 0.01%. For Gaia18cdj and AT2021lwx, with Δ m = -3 mag, the fraction is much lower, at 9 × 10^-5 %. While these estimates are for continuous rather than impulsive variability, they still provide a useful constraint on the likelihood of normal AGNs to exhibit such large flares. These fractions combined with the number density of AGNs at z = 1 with L ≈ 10^45 erg s^-1 (roughly 300 Gpc^-3; <cit.>), suggest that stochastic AGN variability is unlikely to produce these ENTs. There are other exotic mechanisms that have the potential to produce luminous flares in AGNs. One model is a transient powered by the interaction of an AGN disk wind with the surrounding broad line region (BLR) clouds <cit.>. Qualitatively this is similar to the scenario for Type IIn SNe or SLSNe-II, where shocks between outflowing mass and a dense medium produce a strong radiative transient. Under the assumption that the kinetic energy is fully converted into radiated luminosity and assuming a typical disk wind velocity of 0.1c <cit.>, our measured energies imply ∼ 6 - 28 solar masses of ejected material. These are lower limits, as the required ejecta masses increase with the inverse of the covering fraction of the BLR clouds <cit.> or relaxing the assumption that the conversion of kinetic to radiative energy is 100% efficient. Even a modest 50% efficiency of energy conversion (higher than estimated for SNe IIn; <cit.>) and a typical BLR covering fraction of 0.4 <cit.> increases these mass estimates by a factor of 5. 
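The ejecta masses quoted for the disk-wind scenario follow from a simple kinetic-energy budget, E_rad ≤ (1/2) M v², as in this sketch using the total radiated energies estimated above.

```python
M_SUN, C_LIGHT = 1.989e33, 2.998e10   # cgs

v_wind = 0.1 * C_LIGHT
for name, E_rad in [("Gaia16aaw", 5.2e52), ("Gaia18cdj", 2.5e53),
                    ("AT2021lwx", 2.2e53)]:
    # Fully thermalized kinetic energy: E_rad = (1/2) M_ej v^2
    M_ej = 2.0 * E_rad / v_wind**2 / M_SUN
    print(f"{name}: M_ej ~ {M_ej:.0f} Msun")   # ~6, ~28, ~25 Msun
```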
It is possible to avoid such high ejecta masses if the disk wind velocity is increased, but we find no evidence for higher velocities in the ENT spectra and only the fastest winds have significantly larger velocities <cit.>. Furthermore, while an increase in wind velocity decreases the ejecta mass required, it also decreases the emission timescales. Another potential problem with this model is that <cit.> suggest that the most likely mechanism for launching a disk wind is the limit-cycle oscillation (e.g., <cit.>). This requires an AGN accreting near Eddington, which is ruled out for Gaia18cdj and highly unlikely for AT2021lwx. Although the duty cycle for limit-cycle oscillations in AGNs is not well constrained, it can be of order 10% <cit.>, further exacerbating the situation. Given these issues, we find this model to be an unlikely explanation for the ENTs. Another potential mechanism is a smooth flare powered by a disk instability which allows rapid accretion onto the SMBH (e.g., <cit.>). These flares are proposed to occur in systems where the disk is truncated and the inner portions of the disk have a high temperature and low density <cit.>. This is very unlikely to be the case for Gaia16aaw given the high X-ray luminosity prior to the flare. In this model, both the peak accretion rate and timescale of the flare depend on the truncation radius of the disk. With the wide range of implied physical states for the pre-flare AGNs in the ENT host galaxies, it is unlikely that such a model would naturally produce our observed sample of ENTs, each with a similar peak luminosity and timescale. *Tidal Disruption Events. Next, we examine the possibility that the ENTs are extreme examples of tidal disruption events. Typical TDEs have peak luminosities up to ∼ 5 × 10^44 (e.g., <cit.>) although the recently-discovered class of featureless TDEs can be more luminous <cit.>. In contrast to the ENTs, known TDEs rarely have decay timescales more than ≈100 days. However, the smooth evolution of essentially all TDEs makes them a promising candidate to explain the smooth ENT flares. The tidal disruption of a massive main-sequence (MS) star naturally results in a more luminous flare as the peak accretion rate at fixed SMBH mass scales as Ṁ_peak / Ṁ_edd∼ M_*^1.1 <cit.> for typical stellar mass-radius relations <cit.>. However, at fixed SMBH mass, the disruption of a more massive star is expected to result in a shorter flare, although this is a much weaker effect with t_fb∼ M_*^-0.1. Thus, it is plausible to power a luminous and long-lived flare through the tidal disruption of a massive star on a massive SMBH. The estimated SMBH masses for the ENTs are close to the Hills mass <cit.>, but a TDE can still occur given the higher stellar masses required. Additionally there is a large scatter on the SMBH-galaxy scaling relations that makes the SMBH estimates uncertain at the level of ∼0.5 dex. Indeed, the long timescales of the ENTs are broadly similar to predictions of the TDEs resulting from extremely massive stars in the early universe <cit.>. Converting the emitted energies to accreted mass provides a constraint on the minimum stellar mass that could possibly have powered these ENTs. The highest ENT energies correspond to ≈ 1.4 M_⊙ of accreted mass for 10% efficiency. As roughly half the original stellar mass becomes unbound in a TDE, this places a lower limit on the stellar mass of ≳ 3 M_⊙. 
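The conversion from radiated energy to accreted mass, and the resulting minimum stellar mass for a TDE, is a one-line calculation, sketched here with the 10% radiative efficiency assumed above.

```python
M_SUN, C_LIGHT = 1.989e33, 2.998e10   # cgs
eta = 0.1                             # assumed radiative efficiency

for name, E_rad in [("Gaia16aaw", 5.2e52), ("Gaia18cdj", 2.5e53),
                    ("AT2021lwx", 2.2e53)]:
    M_acc = E_rad / (eta * C_LIGHT**2) / M_SUN   # accreted mass
    # Roughly half the star is unbound in a TDE, so M_star >~ 2 * M_acc
    print(f"{name}: M_acc ~ {M_acc:.1f} Msun, M_star >~ {2.0 * M_acc:.1f} Msun")
```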
Following <cit.>, the TDE of 3 M_⊙ MS star on a SMBH with mass similar to the ENTs would result in a flare with a characteristic timescale of ≈ 600 days and a peak Eddington ratio of ≈ 0.1, reasonably well-matched to the observed ENT flares. More massive MS stars, up to roughly 10 M_⊙, yield timescales and Eddington ratios compatible with the ENT observables. Since the timescale and energetics of an intermediate-mass TDE appear consistent with the ENTs, the next consideration is whether the expected rates of such events are compatible with our estimated ENT rate. Let us assume that the ENTs are the result of ∼3 10 M_⊙ stars, which satisfies the constraints from the emitted energy, timescale, and peak Eddington ratio. From the estimated local rate of TDEs <cit.> and the expected redshift evolution <cit.>, we find a minimum TDE rate at z = 1 of ≈ 15 Gpc^-1 yr^-1. For standard initial mass functions <cit.>, the high-mass slope is α = -2.35. Thus, there are 12 times fewer 3 - 10 M_⊙ stars than 0.5 2 M_⊙ stars, a range consistent with the local TDE population <cit.>. While this calculation assumes a typical stellar IMF, there is evidence that the stellar population of the Milky Way Galactic center prefers a top-heavy IMF (e.g., <cit.>), with a shallower high-mass slope. Similarly, the population of TDEs appears consistent either with top-heavy IMFs in the nuclei of some TDE host galaxies or the preferential disruption of moderately massive stars <cit.>. Either of these effects will increase the expected rate of massive star TDEs relative to the above scaling. The stellar lifetimes are also important to consider, as stars with short lifetimes may not live long enough to be scattered onto an orbit that results in a TDE. The lifetimes of 3 - 10 M_⊙ stars are approximately 90 times lower than those of 0.5 - 2 M_⊙ stars. If we scale the rates by the birth abundance and the lifetimes, the intrinsic rate of 3 - 10 M_⊙ TDEs would be roughly 1000 times lower than 0.5 - 2 M_⊙ star TDEs. Scaling from the expected rate of solar mass TDEs at z = 1, the 3 - 10 M_⊙ TDE rate at this redshift is ≈ 1.5 × 10^-2 Gpc^-1 yr^-1, sufficient to explain the ENTs we find.
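The rate scaling in this argument can be checked directly: integrating a Salpeter IMF over the two mass ranges and applying the lifetime penalty adopted above reproduces the quoted factors (volumetric rates are written here per Gpc^3 per yr).

```python
from scipy.integrate import quad

alpha = 2.35                                   # Salpeter slope, dN/dm ~ m^-alpha
imf = lambda m: m ** (-alpha)

n_massive, _ = quad(imf, 3.0, 10.0)            # relative number of 3-10 Msun stars
n_solar, _ = quad(imf, 0.5, 2.0)               # relative number of 0.5-2 Msun stars
number_penalty = n_solar / n_massive           # ~12

lifetime_penalty = 90.0                        # lifetime factor adopted above
suppression = number_penalty * lifetime_penalty   # ~1e3

tde_rate_z1 = 15.0                             # per Gpc^3 per yr, minimum at z = 1
massive_tde_rate = tde_rate_z1 / suppression   # ~1.4e-2 per Gpc^3 per yr
```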
http://arxiv.org/abs/2405.09994v1
20240516112720
Probing neutrino-nucleus interaction in DUNE and MicroBooNE
[ "R K Pradhan", "R Lalnuntluanga", "A Giri" ]
hep-ph
[ "hep-ph", "hep-ex", "nucl-th" ]
e1: kumarriteshpradhan@gmail.com e2: tluangaralte.phy@gmail.com e3: giria@phy.iith.ac.in Department of Physics, Indian Institute of Technology Hyderabad, Hyderabad, 502284, Telangana, India Probing neutrino-nucleus interaction in DUNE and MicroBooNE R K Pradhan (e1, addr1), R Lalnuntluanga (e2, addr1), and A Giri (e3, addr1) ===================================================================================== Abstract: Neutrino experiments utilize heavy nuclear targets to achieve high-statistics neutrino-nucleus interaction event rates, which leads to systematic uncertainties in the oscillation parameters due to nuclear effects and uncertainties in the cross-section. Understanding the interaction of neutrinos with the nucleus is therefore crucial for determining the oscillation parameters with high precision. We investigate the uncertainty in quasi-elastic interactions due to nuclear effects by selecting exactly 1 proton, 0 pions, and any number of neutrons in the final state using the DUNE and MicroBooNE detectors, and we study the effects on the oscillation parameters in the DUNE detector. The kinematic method, along with this selection, can be used for accurate neutrino energy reconstruction in the quasi-elastic channel, where the nuclear effects are inevitable. § INTRODUCTION The primary goal of current and future neutrino experiments is to precisely measure the neutrino oscillation parameters. The oscillation probability depends on the neutrino energy, and the neutrino beam is not mono-energetic: it spreads over a broad energy spectrum because the neutrinos are produced in the decays of the produced hadrons. An event-by-event neutrino energy reconstruction is therefore required for a better understanding of the uncertainty in neutrino oscillation. Due to high statistics requirements, neutrino experiments use heavy nuclear targets (Argon <cit.>, Iron <cit.>, Water <cit.>, Oxygen <cit.>, etc.), which introduces complications in the reconstruction due to the complexity of the nucleus. For accurate neutrino energy reconstruction, understanding neutrino-nucleus interactions is crucial. Most Long-Baseline Neutrino Experiments (LBNE) run in the few-GeV energy region, where quasi-elastic scattering is one of the most important interaction channels; however, there is a significant contribution from 2p-2h interactions, which complicates the situation <cit.>. In this work, we focus on muon neutrino charged current quasi-elastic (CCQE) scattering (ν_μ n →μ^- p). In a nucleus, the neutron is neither free nor at rest; it moves and is bound inside a nuclear potential. CCQE scattering can be described in the impulse approximation (IA) <cit.>, in which the interaction occurs on individual nucleons. In the IA, the neutrino interacts with a pair of nucleons inside the nucleus, or with a bound nucleon followed by final state interactions (FSI) <cit.>. Due to FSI, the proton produced in the primary interaction undergoes various hadronic processes such as (in)elastic scattering, pion production, hadron absorption, and charge exchange, which result in knocked-out nucleons and mesons in the final state. This leads to the misidentification of non-QE events as QE events. Pion production is the major background to the QE scattering events. In some events, the produced pions are absorbed again, leading to two-nucleon knock-out, called 2p-2h scattering. The cross-section of CCQE-like interactions measured in T2K can be explored in Ref. <cit.>. 
Due to FSI, a CCQE event can also produce a pion and thus cannot be classified as CCQE-like. These backgrounds lead to an uncertainty in the neutrino energy reconstruction, which impacts the oscillation parameters. The effects on the oscillation parameters for experiments such as T2K and MiniBooNE are studied in <cit.>. Monte Carlo event generators, developed with different nuclear models, are used to simulate neutrino interactions. They are used for predictions and improvement of the experiments. Generators such as GENIE <cit.>, NuWro <cit.>, GiBUU <cit.>, and NEUT <cit.> are extensively used for physics analysis. In this work, we aim to select high-purity CCQE events for the DUNE detectors <cit.> using realistic particle thresholds, and the effect of this selection on the oscillation parameters is studied using both GENIE and NuWro. Previous studies for LBNE using the GiBUU model can be found in Ref. <cit.>. The analysis methods for our event selection have been implemented for the MicroBooNE detector <cit.> using the GENIE and NuWro models. This paper is organized as follows. In section <ref>, we present the formalism for neutrino interactions with a nuclear target and the methods for reconstructing the neutrino energy. The Monte Carlo event generators are described in section <ref>. The simulation specifications of the DUNE and MicroBooNE detectors are given in section <ref>. We then discuss the results of our analysis in section <ref>, followed by the conclusion in section <ref>. § FORMALISM Consider a charged current neutrino interaction with a nuclear target knocking out n nucleons and producing m mesons. These produced particles can be used for the neutrino energy reconstruction, using the kinematic and calorimetric methods <cit.>. The reconstructed energy from the generalized kinematic approach <cit.> is E_ν = [2(nM-ϵ_n)E_l + W^2 - (nM-ϵ_n)^2 - m_l^2] / [2(nM-ϵ_n-E_l+|k_l| cos θ)] The invariant hadronic mass squared is defined as W^2 = (∑_i E_p_i'+ ∑_j E_h_j')^2 - (∑_i p_i'+ ∑_j h_j' )^2 where E_p' and p' (E_h' and h') are the energy and momentum of the final state nucleons (mesons) respectively, M (m_l) is the mass of the final state nucleon (lepton), E_l and k_l are the energy and momentum of the outgoing lepton, θ is the scattering angle, and ϵ_n the neutron separation energy. For pure charged current quasi-elastic (CCQE) scattering, where the hit nucleon (neutron) is at rest, the kinematic method for reconstructing the neutrino energy <cit.> gives E_ν = [2(M_n-E_B)E_l - (E_B^2 - 2M_nE_B + m_l^2 + Δ M^2)] / [2(M_n-E_B-E_l+|k_l| cos θ)] where M_n is the free neutron rest mass, Δ M^2 = M_n^2-M_p^2, M_p is the rest mass of the proton, and E_B the binding energy. For better accuracy in the analysis, rather than assuming a constant binding energy, we consider a distribution of the neutron excitation energy of the nucleus <cit.> for both the ^12C and ^40Ar targets. A Gaussian distribution of separation energy E with mean E_α and deviation σ_α is considered according to table <ref> for ^12C <cit.> and table <ref> for ^40Ar <cit.>. The probability distribution for the separation energy is given by P(E) = (1/N) ∑_α n_α G(E-E_α,σ_α) where the total number of neutrons is N = ∑_α n_α and G(E-E_α,σ_α) is the distribution function. 
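For concreteness, a minimal implementation of the CCQE kinematic formula above, together with sampling from the separation-energy mixture P(E), might look as follows; the PDG masses are rounded and the default binding energy is illustrative only.

```python
import numpy as np

M_N, M_P, M_MU = 0.93957, 0.93827, 0.10566     # GeV (rounded PDG values)


def enu_ccqe(E_l, k_l, cos_theta, E_B=0.030, m_l=M_MU):
    """CCQE kinematic energy reconstruction (all quantities in GeV):
    E_nu = [2(M_n - E_B) E_l - (E_B^2 - 2 M_n E_B + m_l^2 + dM^2)]
           / [2 (M_n - E_B - E_l + |k_l| cos(theta))]"""
    dM2 = M_N**2 - M_P**2
    num = 2.0 * (M_N - E_B) * E_l - (E_B**2 - 2.0 * M_N * E_B + m_l**2 + dM2)
    den = 2.0 * (M_N - E_B - E_l + np.abs(k_l) * cos_theta)
    return num / den


def sample_separation_energy(rng, E_alpha, sigma_alpha, n_alpha):
    """Draw a separation energy from the Gaussian mixture P(E) defined above,
    with shell occupancies n_alpha as weights."""
    w = np.asarray(n_alpha, dtype=float)
    i = rng.choice(len(E_alpha), p=w / w.sum())
    return rng.normal(E_alpha[i], sigma_alpha[i])
```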
For oscillation analysis, the survival probability of muon neutrino can be calculated by using the following expression <cit.>, P_ν_μ→ν_μ≈ 1 - sin^22θ_μμ sin^2 Δ m^2_μμL/4E_ν with sin^2θ_μμ=cos^2θ_13 sin^2θ_23 Δ m^2_μμ = sin^2θ_12Δ m_31^2 + cos^2θ_12Δ m_32^2 + cosδ_CP sinθ_13 sin2θ_12 tanθ_23Δ m_21^2 The values of the oscillation parameters, Δ m^2 and θ are taken from Ref. <cit.>. The distance of the DUNE far detector is L ≈ 1300 km far from the near detector. § MONTE CARLO EVENT GENERATOR In this work, two MC generators, GENIE and NuWro are used for simulating the neutrino-nucleus interactions. Both GENIE and NuWro are used by accelerator-based neutrino experiments. The Fermi Gas Model <cit.> is implemented to define the impact of the nuclear environment considering nucleon-nucleon correlation effects <cit.> in GENIE, and the spectral function <cit.> in NuWro. The default model for QE scattering in GENIE is the Llewellyn-Smith (LS) model <cit.>. The new models for QE scattering and MEC/2p-2h processes in GENIE are the Valencia QE model <cit.> and SuSAv2 <cit.>. The Valencia model is based on the Local Fermi Gas model with Coulomb correction effects and Random Phase Approximation (RPA). In contrast, the SuSAv2 is based on the Relativistic Mean Field (RMF) theory. In NuWro, the QE interactions are described by the LS model with options for vector and dipole axial vector form factors. GENIE implements the axial mass M_A extending from 0.99 to 1.2 GeV/c^2, whereas NuWro confines this parameter in a range of 0.94-1.03 GeV/c^2. In this work, the value of the CCQE axial mass is considered as ≈ 0.96 GeV/c^2 in both GENIE and NuWro, and the vector form factors used are BBA07 <cit.> and BBA05 <cit.> respectively. GENIE has 4 FSI models: hA, hN, INCL++, and Geant4 <cit.>, while in NuWro, the FSI is described by the cascade model based on the algorithm by Metropolis et al. <cit.>. Empirical MEC model <cit.> is also available for MEC/2p-2h scattering in GENIE, and transverse enhancement model <cit.> and Marteau-Martini model <cit.> are also available in NuWro for 2p-2h scattering. The baryonic resonances are modeled using the Berger-Sehgal (BS) model <cit.> in GENIE. NuWro uses the Rein-Sehgal (RS) model <cit.> for Δ(1232) resonance, and the Adler-Rarita-Schwinger model <cit.> for higher resonances.Other available resonance models in GENIE are Kuzmin-Lyubushkin-Naumov (KLN) <cit.> and the RS model. In the inelastic region (DIS), both the generators use the Bodek-Yang Model <cit.> for the cross-section calculation, and utilize PYTHIA <cit.> to produce hadronic final states with different values of PYTHIA parameters such as invariant hadronic mass W. GENIE employs it to W = 1.8 or 2.0 GeV with KNO scaling <cit.> while NuWro is to 1.6 GeV. The invariant hadronic mass threshold used in this work is W = 1.9 GeV in GENIE and 1.6 GeV in NuWro. § SIMULATION DETAILS In this paper, nuclear effects on energy reconstruction are analyzed for the DUNE near and far detector and the MicroBooNE detector. DUNE mainly uses Argon as the target material. The proposed DUNE near detector HpgTPC <cit.> consists of Argon and Methane (CH_4), and liquid Argon as the target material in the far detector. MicroBooNE also uses liquid Argon targets. CC ν_μ interaction are simulated for Argon and Carbon target using the DUNE <cit.> and MicroBooNE ν_μ flux <cit.> as shown in Fig. <ref>. The GENIE v3.01.00 (tune G18_10a_02_11a) and NuWro-21.09.2 are used for the simulation. 
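A small helper implementing the survival probability above in practical units (GeV, km, eV^2) is sketched here; the default oscillation parameters are illustrative round numbers, not the fitted values of the reference cited in the text.

```python
import numpy as np


def pmumu(E_nu_gev, L_km=1300.0, s2_23=0.55, s2_13=0.022, s2_12=0.31,
          dm2_31=2.5e-3, dm2_21=7.4e-5, delta_cp=0.0):
    """nu_mu survival probability in the effective two-flavor form above."""
    th12, th13, th23 = (np.arcsin(np.sqrt(s)) for s in (s2_12, s2_13, s2_23))
    s2_mm = np.cos(th13) ** 2 * np.sin(th23) ** 2          # sin^2(theta_mumu)
    sin2_2mm = 4.0 * s2_mm * (1.0 - s2_mm)                 # sin^2(2 theta_mumu)
    dm2_32 = dm2_31 - dm2_21
    dm2_mm = (np.sin(th12) ** 2 * dm2_31 + np.cos(th12) ** 2 * dm2_32
              + np.cos(delta_cp) * np.sin(th13) * np.sin(2.0 * th12)
              * np.tan(th23) * dm2_21)
    # 1.267 converts dm2 [eV^2] * L [km] / E [GeV] into the oscillation phase
    return 1.0 - sin2_2mm * np.sin(1.267 * dm2_mm * L_km / E_nu_gev) ** 2
```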
We consider realistic simulations and include the Quasi-elastic (QE), Meson exchange (MEC/2p-2h), Resonance (RES), and Deep Inelastic Scattering (DIS) in neutrino mode. In the RES channel, we consider only the first resonance region, Δ or P_33(1232). For the simulation, we have used the Nieves model for QE, the Berger Sehgal model for the resonance, and the CCQE axis mass (M_A) is considered to be 0.96 GeV/c^2. For FSI, the hA model is employed in GENIE, and the Oset model in NuWro. To study the FSI effect, we have also simulated the interaction events by switching ON/OFF the FSI in both MC generators. Ideally, CCQE interaction can be identified as 1 proton and 0 pions in the final state. But it could be possible that the pion produced due to FSI gets absorbed inside the nucleus knocking out nucleons. We consider a selection of exactly 1 proton, 0 pions, and any number of neutrons in the final state <cit.> for the CCQE interaction. The unobserved neutrons in the detector are a consequence of the nuclear effects. Due to limitations in the sensitivity of detectors, all the produced particles cannot be detected. We have applied the detector cuts for both the near and far detectors. The detection thresholds for the DUNE ND are, the total energy cut for muon is 226 MeV <cit.> (≈ 200 MeV momentum), and a minimum kinetic energy threshold of 3 MeV for proton <cit.>. The neutrons in the detector have a threshold of kinetic energy from 50MeV to 700 MeV <cit.>. In case of FD, muons have a detector threshold of 30 MeV kinetic energy, and 50 MeV for protons and neutrons <cit.>. A study on the proton kinetic energy threshold of 20 MeV with 80% efficiency can be found in Ref. <cit.>. In the MicroBooNE detector, the leading proton has a threshold momentum of 300 MeV/c (≈ 47 MeV kinetic energy ), and the momentum threshold for muon is 100 MeV/c <cit.>. The selection (1 proton + 0 pions + X neutrons) with the corresponding detector thresholds for the DUNE and MicroBooNE detectors are applied to reconstruct the neutrino energy for the CCQE. § RESULTS The event distributions of neutrino energy for Argon in both ND and FD are shown in Fig. <ref>. For the ND, the reconstructed energy using the kinematic method (green) is shifted by ≈ 0.4 GeV towards lower energies, compared to the true energy distribution (black). The distribution at FD after oscillation is given on the right panel (Fig. <ref>). It can be observed clearly that without the selection, the distribution is distorted with flattening of the minima around 2.4 GeV. The event rate is significantly low at the second maxima at 1.4 GeV. When the selection (1 proton + 0 pions + X neutrons) is considered along with the DUNE ND and FD detector cuts, the reconstructed energy distribution (blue) agrees with the true energy for both ND and FD. The shift of reconstructed energy is reduced to less than 100 MeV compared to true neutrino energy. The results using GENIE are represented as solid lines. Similarly, NuWro (dotted) results show the same effects as GENIE. For evaluating the uncertainty in the reconstruction, we consider the event ratio distribution of true energy with the reconstructed energy which is shown in Fig.<ref>. To further quantify the effect of FSI, we consider the ratio with FSI and without FSI. For the ND, the ratio is close to 1 but in the lower energy region (< 1GeV), the reconstructed energy differs from the true energy due to nuclear effects. 
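A schematic event-selection helper using the thresholds quoted above is sketched below; the particle-record format and the handling of sub-threshold pions are simplifications of the actual analysis.

```python
def passes_selection(particles, detector="DUNE_ND"):
    """CCQE-like selection: one muon and exactly 1 proton above threshold,
    0 pions, and any number of neutrons.  `particles` is a list of dicts with
    keys 'pdg', 'KE' (GeV) and 'p' (GeV/c)."""
    thr = {
        "DUNE_ND":    {"proton_KE": 0.003, "muon_p": 0.200},
        "DUNE_FD":    {"proton_KE": 0.050, "muon_KE": 0.030},
        "MicroBooNE": {"proton_p": 0.300, "muon_p": 0.100},
    }[detector]

    def above(p, key_ke, key_p):
        return (p["KE"] >= thr.get(key_ke, 0.0)
                and p["p"] >= thr.get(key_p, 0.0))

    n_mu = sum(abs(p["pdg"]) == 13 and above(p, "muon_KE", "muon_p")
               for p in particles)
    n_prot = sum(p["pdg"] == 2212 and above(p, "proton_KE", "proton_p")
                 for p in particles)
    n_pi = sum(abs(p["pdg"]) == 211 or p["pdg"] == 111 for p in particles)
    return n_mu == 1 and n_prot == 1 and n_pi == 0
```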
As a result of the nuclear effects, 91 % of events are true CCQE, and the remaining events are mostly contributed by 2p-2h and DIS, out of the selected events. In the context of far detector, the reconstructed energy disagrees with the true energy in the maxima and minima regions as there is a downward shift in the reconstructed events. When the FSI is switched off, it can be seen that the ratio is closer to unity in the lower energy range, which indicates that the effect of the FSI contributes to the uncertainty in the reconstruction. The probability distribution of neutrino energy for Carbon and Argon in the MicroBooNE detector is given in Fig. <ref>. There is a shift of ≈ 0.2 GeV in the reconstructed energy without the selection i.e. the kinematic method. However, the reconstructed energy agrees with true energy when the selection along with detector thresholds are considered. Out of the selected events, 96% of events are from true CCQE contributions. From the ratio of true neutrino energy to reconstructed energy in Fig. <ref>, the selection criteria can be considered to significantly identify the CCQE interaction, and the uncertainty could be due to the imapact of final state interactions. The effects on the neutrino oscillation parameters using the selection (1 proton + 0 pions + X neutrons) are studied. The survival probability of muon neutrinos is shown in Fig. <ref>. The true survival probability is calculated using equation <ref>, and the same parameters are used from <cit.> as a function of true neutrino energy. Assuming the experimental analysis, the survival probability is calculated by taking the ratio of oscillated (FD) and unoscillated (ND) event distributions. In Fig. <ref>, the black curve shows the survival probability as a function of true energy, the blue curve represents the survival probability as a function of the reconstructed energy by considering the selection with detector cuts and the FSI effects, and without the FSI is shown by the red curve. We can observe that there is a significant influence in the absolute values, both at the first maximum and first minimum, while the positions of maximum and minimum are less affected when we consider the selections. One can see the effect of FSI in the lower energy range and at the first maxima. This indicates a notable discrepancy in the mixing angle, while the impact on Δ m^2 is comparatively negligible. The same effect has been observed for T2K by Coloma et al. <cit.> as well. § CONCLUSION The purity of QE interactions selection for the MicroBooNE detector, and the DUNE near and far detectors and their effect on oscillation parameters has been studied using the Monte Carlo neutrino event generators GENIE and NuWro. The physics potential studies on LBNE have shown that an energy resolution of 100 MeV is required to differentiate between different physics properties <cit.>. The selection of 1 proton, 0 pions, and X neutrons for CCQE shows a weighty discrepancy between the reconstructed and the true neutrino energy approximately by 100 MeV which is the required resolution. This shift of around 100 MeV is observed in both the ND and FD for both generators. The uncertainty in reconstruction due to the nuclear effects notably affects the mixing angle, with comparatively lesser effects on Δ m^2. Also, the ratio of true and reconstructed energy distribution in the high-energy regions indicates the viability of this method in higher-energy experiments. 
§ ACKNOWLEDGEMENTS R K Pradhan acknowledges the DST-INSPIRE grant (2022/IF220293) for financial support. R Lalnuntluanga thanks the Council of Scientific & Industrial Research (file number: 09/1001(0054)/2019-EMR-I) for the financial grant. R Lalnuntluanga and A Giri acknowledge the grant support of the Department of Science and Technology (SR/MF/PS-01/2016-IITH/G).
http://arxiv.org/abs/2405.08787v1
20240514173622
Explicit Orthogonal Arrays and Universal Hashing with Arbitrary Parameters
[ "Nicholas Harvey", "Arvin Sahami" ]
cs.DS
[ "cs.DS", "cs.CC", "math.CO", "math.ST", "stat.TH" ]
[ Xavier Jarque May 20, 2024 ================= Orthogonal arrays are a type of combinatorial design that were developed in the 1940s in the design of statistical experiments. In 1947, Rao proved a lower bound on the size of any orthogonal array, and raised the problem of constructing arrays of minimum size. Kuperberg, Lovett and Peled (2017) gave a non-constructive existence proof of orthogonal arrays whose size is near-optimal (i.e., within a polynomial of Rao's lower bound), leaving open the question of an algorithmic construction. We give the first explicit, deterministic, algorithmic construction of orthogonal arrays achieving near-optimal size for all parameters. Our construction uses algebraic geometry codes. In pseudorandomness, the notions of t-independent generators or t-independent hash functions are equivalent to orthogonal arrays. Classical constructions of t hash functions are known when the size of the codomain is a prime power, but very few constructions are known for an arbitrary codomain. Our construction yields algorithmically efficient t hash functions for arbitrary domain and codomain. § INTRODUCTION Orthogonal arrays are a concept in the design of statistical experiments, first proposed by C. R. Rao in the 1940s. A detailed exposition of this subject can be found in reference books <cit.> <cit.>. An orthogonal array with parameters [s,m,n,t] is a matrix with s rows, m columns and entries in [n]=1,…,n such that, for every set of t columns, those columns contain each tuple in [n]^t exactly s/n^t times. A long history of research considers the problem of, given m, n and t, finding an orthogonal array with small s. If n is a prime power, a classical construction due to Bush[This is essentially equivalent to a Reed-Solomon code, and predates it by eight years.] in 1952 <cit.> <cit.> gives an orthogonal array with exactly s = n^t rows if m ≤ n. This is exactly optimal (the definition requires that s/n^t is a positive integer). For m > n, a simple modification of this construction has s ≤ (nm)^t rows; however this is no longer optimal. Indeed, a lower bound of Rao[ For the case n=2, this can be found in <cit.>. In the special case of linear orthogonal arrays, Rao's bound is equivalent to the dual of the generalized Hamming bound on codes, and predates it by three years. ] <cit.> implies that every orthogonal array has RaoBound s  ≥ mt/2 (n-1)^t/2 ≥  (cmn/t)^c t, for some constant c>0. In contrast, Bush's upper bound lacks the denominator of t, and only applies when n is a prime power. In general, there are very few constructions of orthogonal arrays when n is not a prime power; see <cit.>. Some important problems are the constructions of these arrays with the optimum values of m for a given t and s.  Rao, 1947 <cit.> Research Problem 2.33. For fixed values of n and t, and large m, how far are the Rao bounds from the truth? Hedayat, Sloane, Stufken 1999 <cit.> (Note: notation has been adjusted to match ours.) Previous constructions. One might imagine that a simple product construction can reduce the general case to the prime power case. This is possible (and was known to Bose in 1950 <cit.>) but it is suboptimal. First factor n into a product of prime powers ∏_ℓ≤ d p_ℓ^a_ℓ. For each ℓ, build an orthogonal array A^(ℓ) which in which n is taken to be p_ℓ^a_ℓ. Then, define A to be an entry-wise Cartesian product of A^(1),…,A^(d). This gives an orthogonal array, but it is unfortunately somewhat large. 
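To keep the defining property concrete, the following brute-force verifier (a sketch intended only for small parameters, and not part of any construction in this paper) checks whether a given matrix is an orthogonal array.

```python
from itertools import combinations, product
from collections import Counter

def is_orthogonal_array(M, n, t):
    """Check the [s, m, n, t] property: every set of t columns contains each
    tuple in [n]^t exactly s / n^t times. Entries of M are in {1, ..., n}."""
    s, m = len(M), len(M[0])
    if s % (n ** t) != 0:
        return False
    target = s // (n ** t)
    for cols in combinations(range(m), t):
        counts = Counter(tuple(row[c] for c in cols) for row in M)
        if any(counts[tup] != target for tup in product(range(1, n + 1), repeat=t)):
            return False
    return True

# A strength-2 example with s = 4, m = 3, n = 2:
A = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
print(is_orthogonal_array(A, n=2, t=2))   # True
```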
Applying Rao's lower bound separately to each A^(ℓ), the number of rows of A is at least ∏_ℓ≤ d (c m p_ℓ^a_ℓ/t)^ct = (cm/t)^c d t n^ct. This does not asymptotically match Rao's lower bound for all parameters due to the factor d in the exponent. So this construction cannot in general prove that Rao's lower bound is tight. Note that d can be Ω(log(n)/loglog n) when n is the product of the first d primes. If n is a prime power, then it is known how to explicitly construct orthogonal arrays with s ≤ (C m n / t)^Ct for some constant C. We will call such a construction near-optimal, meaning that it matches Rao's lower bound up to the value of the constant C. The anonymous reviewers of this manuscript have informed us of this result, which is stated in <cit.> using the language of pseudorandom generators. That construction, like our results in AG, is based on the use of algebraic geometry codes. If n is not restricted to be a prime power, then Kuperberg, Lovett and Peled <cit.> give a non-explicit construction, for all m,n,t, of orthogonal arrays that are also near-optimal, so s ≤ (Cmn/t)^Ct for some constant C. However their approach is randomized, and they only prove an exponentially small lower bound on the probability of success. One of their open questions is whether an algorithmic version of their construction exists <cit.>. Our results. We also give a near-optimal construction of orthogonal arrays. Our construction is explicit, algorithmic, works for all m, n, t, and is apparently unrelated to the construction of Kuperberg, Lovett and Peled. Main For any m,n,t (with n not necessarily a prime power), there is an explicit description of an orthogonal array with parameters [s,m,n,t] where s=(cmn/t)^35t, and c is a universal constant. The array can be constructed by a deterministic algorithm with runtime (sm). To prove this, we give an abstract construction based on coding theory. In RS we instantiate that construction with Reed-Solomon codes, obtaining a simple, easily implementable, deterministic construction with s = O(m+n)^6t. In AG we instantiate the construction instead with algebraic geometry codes to obtain a deterministic, near-optimal construction with s = O(mn/t)^35t. In GV we instantiate the construction instead with random linear codes to obtain a non-explicit construction that has near-optimal size, and can be efficiently constructed by a randomized algorithm. §.§ Hash Functions Hash functions of various sorts are crucial tools in pseudorandomness and randomized algorithms. In this work we focus on t functions. Let h : [m] → [n] be a random function with some distribution. We say that h : [m] → [n] is t if h(a_1)=α_1 ∧⋯∧ h(a_t)=α_t  = 1/n^t ∀ distinct  a_1,…,a_t ∈ [m], ∀α_1,…,α_t ∈ [n]. (Obviously t ≤ m is necessary.) This is equivalent to the random variables h(a) a ∈ [m] being uniformly distributed and t-wise independent. If this property is satisfied by a function h that is uniformly chosen from a multiset , then we also say that is t. This property is also called , or simply strongly universal if t=2. Wegman and Carter <cit.> gave a simple construction of a t family when n is a prime power[ Carter and Wegman also defined the notion of weakly universal hash functions, and gave a construction that allows n to be an arbitrary integer <cit.>. ] and m=n. In this construction, corresponds to the polynomials of degree t-1 over the field _n, so =n^t and the space required to represent a member of is O(t log n) bits. 
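For n a prime (the simplest prime power) and m ≤ n, the construction just described amounts to evaluating a uniformly random polynomial of degree t-1 over the field with n elements. A minimal sketch follows; the interface is chosen only for illustration.

```python
import random

def sample_hash(n, t, rng=random):
    """Sample h from the degree-(t-1) polynomial family over F_n (n prime, m <= n):
    h(x) = a_0 + a_1 x + ... + a_{t-1} x^{t-1}  (mod n)."""
    coeffs = [rng.randrange(n) for _ in range(t)]
    def h(x):
        acc = 0
        for a in reversed(coeffs):      # Horner evaluation mod n
            acc = (acc * x + a) % n
        return acc
    return h

# There are n^t such polynomials, and for any t distinct inputs and any t target
# values exactly one of them interpolates, so each output tuple has probability n^(-t).
h = sample_hash(n=101, t=4)
print([h(x) for x in range(5)])
```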
In general, if n is a prime power and m is arbitrary, then a modified construction has = (mn)^O(t), so members of can be represented in O(t log(mn)) bits. Orthogonal arrays and t hash families are mathematically equivalent notions, as was observed by Stinson[ Another connection between design theory and derandomization is in Karp and Wigderson <cit.>.] <cit.> <cit.>. Equiv Let M be an s × m matrix with entries in [n]. Let = h_1,…,h_s be a multiset of [m] → [n] functions where h_i(j) = M_i,j ∀ j ∈ [m]. Then is t iff M is a [s,m,n,t] orthogonal array. The proof is immediate from the definitions. Thus Wegman and Carter's construction of t hash functions is identical to Bush's 1952 construction of orthogonal arrays. There are some differences in the goals of the research communities working on orthogonal arrays and t hashing. One difference is computational. Researchers in orthogonal arrays tend to be interested in showing existence, or explicit construction, of the entire matrix M. In contrast, researchers in hash functions are additionally interested in quickly sampling a hash function from and quickly evaluating the hash function. The following theorem states our main result for hash functions. MainHash For any parameters m,n,t, there exists a t hash family for which functions can be represented in O(t log(mn)) bits, can be evaluated in time O(t), and can be constructed in expected time O(t + (n+m)^) for any >0. Assuming the generalized Riemann hypothesis (GRH), the expected construction time improves to O(t) + (nm). In contrast, the Wegman-Carter construction requires that n is a prime power, also uses O(t log(nm)) bits, also can be evaluated in time O(t), and can be constructed in expected time O(t) + (nm). (If m > n, the construction must work in a field extension of _n of size at least m, so random sampling is used to find an irreducible polynomial.) Thus the efficiency of our hash function matches theirs, assuming GRH. §.§ Applications There are many algorithms that require t hash functions for t>2. Examples include cuckoo hashing <cit.>, distinct element estimation <cit.>, MinHash <cit.>, and the leftover hash lemma, which is used to construct seeded extractors <cit.>, etc. More recently, a mechanism for maximizing Nash social welfare in the percentage fee model uses t hash functions where n not a prime power <cit.>. § THE ABSTRACT CONSTRUCTION USING CODES GenericConstruction In this section we present an abstract construction of orthogonal arrays, even when n is not a prime power, using linear codes over finite fields. The subsequent sections instantiate this construction using particular codes. The Hamming distance between vectors x and y is denoted Δ(x,y) = i x_i ≠ y_i. BuildOAWorks Let m,n,t be integers with n ≥ 2 and 2 ≤ t ≤ m. Let q be a prime power satisfying q ≡ 1 n. Suppose that C ⊆_q^m is a linear code of dimension k whose dual has minimum distance at least t+1. Let b ∈_q^m satisfy Δ(b,u) ≥ m-τ for all u ∈ C. Then BuildOA, shown in BuildOA2, will produce an [s,m,n,t] orthogonal array with s = n^τ q^k. An intuitive explanation of this lemma is as follows. If u ∈ C were chosen uniformly at random, our hypotheses on C would imply that the coordinates of u will be uniformly distributed and t-wise independent. Thus, if for each u ∈ C, we insert u as a new row in the array M, then the columns of M would also be uniformly distributed and t-wise independent, which is equivalent to M being an orthogonal array. 
The catch, of course, is that the entries of u take values in _q, whereas the entries of M must take values in [n]. If q were a multiple of n, then this problem would be easily resolved: we could instead insert u modulo n (entry-wise), as a new row in M. The resulting matrix M is easily seen to be an orthogonal array. More generally, the same construction would work using any q/n-to-1 map f : _q → [n] applied entry-wise to u to create a row of M. However, the main scenario of interest is the one in which n is a composite number, so q (being a prime power) certainly cannot be a multiple of n. Instead, we have q ≡ 1 n, so we require a new approach to create rows of M from the vectors u ∈ C. The idea is to choose a “bad value” β to eliminate from _q, so that the number of remaining “good values” is a multiple of n, and these good values can be mapped to integers in [n] in a way that preserves the uniform distribution. This mapping, which we denote ϕ_β, between good values and [n] can be completely arbitrary, except that it needs to be a q-1/n-to-1 map in order to ensure uniformity. For example, if we identify the elements in _q with the integers in 0,…,q-1, then one may check that PhiDefϕ_(x) = 1+(((x+q-1-)  mod  q )  mod  n ) is indeed a q-1/n-to-1 map from _q ∖β to [n]. For future purposes, it will be convenient to allow this bad value to depend on the coordinate. Using the given vector b, we let b_j be the bad value for coordinate j. To eliminate these bad values from u, we must find all indices j with u_j=b_j and remap u_j to a new value. Intuitively, we would like to do so uniformly at random. However the construction must be deterministic, so instead we create multiple copies of u in which each such u_j has been replaced with every possible value in [n]. In BuildOA2, the vector v specifies how the bad entries of u will be fixed, whereas the vector w(u,v) is the modified copy of u that incorporates all the fixes and maps all entries to [n]. To control the number of copies that are created, we must bound the number of bad indices. This is ensured by the condition Δ(b,u) ≥ m - τ: every u ∈ C has at most τ bad indices. Lastly, to ensure that every two codewords in C generate the same number of rows in M, we add n^τ-ℓ copies of each modified vector w(u,v), where ℓ=m-Δ(b,u) is the number of indices at which u and b agree. BuildOAWorks Let us view C as a multiset _q of functions mapping [m] to [q] (cf. Equiv). Since the dual of C has minimum distance at least t+1, it follows that _q is t <cit.> <cit.>. BuildOA produces a matrix M with m columns, where each entry is in [n]. Let s be the total number of rows added. Since each vector u ∈ C produces exactly n^τ rows in M, and C has dimension k, we have s = n^τC = n^τ q^k, as required. By Equiv again, we will view M as a multiset = h_1,…,h_s of [m]→[n] functions. It remains to show that is t. Consider a function h ∈_q. For a sequence y ∈ [n] ^ m, define the function f' _ h, y [m] → [n] by f'_h, y(j) = ϕ_b_j(h(j)) if h(j) ≠ b_j y_j if h(j) = b_j. Here the “bad” hash values are replaced with entries of y, and the other hash values are mapped by ϕ into [n]. Let ' be the multiset made by adding one copy of f'_h, y (counting multiplicities) for each y ∈ [n]^m and h ∈_q, i.e., ' = f' _ h, y y ∈ [n]^m ,   h ∈_p . Note that each function h ∈_q gives rise to exactly n ^ m functions in ', although they are not necessarily unique. It is clear that ' is the family made by repeating each function in exactly n ^ m - τ times. Therefore ' is t if and only if is t. 
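As a brief concrete check of the remapping used here, the map ϕ_β defined above is indeed a q-1/n-to-1 map for small parameters; the sketch below only illustrates this and plays no role in the argument.

```python
from collections import Counter

def phi(beta, x, q, n):
    """The remap defined above, from F_q minus {beta} to [n], assuming q = 1 (mod n)."""
    assert x != beta and (q - 1) % n == 0
    return 1 + (((x + q - 1 - beta) % q) % n)

# Example with q = 13, n = 4, beta = 5: every value in {1, ..., 4} is hit
# exactly (q - 1) / n = 3 times.
q, n, beta = 13, 4, 5
print(Counter(phi(beta, x, q, n) for x in range(q) if x != beta))
```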
We will show that ' is t by showing that, for any fixed i_1 , …, i_t ∈ [m], the random variables h'(i_1), …, h'(i_t) are independent and uniformly distributed over [n] when h' ∈' is chosen uniformly at random. Note that by definition of ', uniformly sampling a function h' from ' is (in distribution) equivalent to uniformly sampling a function h ∈_q, and a sequence N=(N_1, …, N_m) of independent and uniformly distributed random variables in [n], and then returning h' = f'_h,N. To show that h'(i_1),…,h'(i_t) are t-wise independent[Here the term “t-wise independent” has the meaning from probability theory, where it does not imply that the random variables are uniform. We use the term “t-independent” to indicate the meaning in pseudorandomness, where the random variables must additionally be uniformly distributed.], we will show that they are a deterministic function of random variables that are themselves t-wise independent. That is, the pairs (h(i_1),N_i_1), …, (h(i_t),N_i_t) are easily seen to be t-wise independent. We obtain the values h'(i_1),…,h'(i_t) by applying the function f _ q × [n] ×_q → [n], defined by f(x , r, β) = ϕ_β(x) if x ≠β r if x = β. Notice that by definition of f' _ h, N, we have h'(i_j) = f(h(i_j) , N_i_j, b_i_j), ∀ j ∈ [t]. Since f and b are deterministic, it follows that h'(i_1),…,h'(i_t) are t-wise independent. To complete the proof that ' is t, we must also show that each h'(i) is uniformly distributed. To see this, note that for each i ∈ [m] and ν∈ [n] h'(i) = ν  = h(i) = b_i N _ i = ν + h(i) ∈ϕ_b_i (ν)  = 1/q·1/n + (q-1)/nq = 1/n since h(i) is uniformly distributed, and ϕ is a q-1/n-to-1 map. To summarize, for any i_1, …, i_t ∈ [m], h'(i_1), …, h'(i_t) are independent and uniformly distributed random variables over [n]. That is, ' is t. As previously argued, this implies that is t, which is what we intended to prove. BuildOAWorks Let m,n,t be integers with n ≥ 2 and 2 ≤ t ≤ m. Let q be a prime power satisfying q ≡ 1 n. Let C_1 ⊊ C_2 ⊆_q^m be linear codes such that the dual of C_1 has minimum distance at least t+1. Let k be the dimension of C_1 and d_2 be the minimum distance of C_2. Then there is an algorithm to produce an [s,m,n,t] orthogonal array with s = n^m-d_2 q^k. Pick any b ∈ C_2 ∖ C_1. Then Δ(b,u) ≥ d_2 for all u ∈ C_1. Thus we may apply BuildOAWorks, taking C := C_1 and τ := m - d_2. § CONSTRUCTION USING REED-SOLOMON CODES RS In this section, we instantiate the abstract bound using Reed-Solomon codes. RSOA There exists a universal constant c>0 such that the following is true. For any integers n , m , t ≥ 2 with t ≤ m, there exists an explicit [s,m,n,t] orthogonal array with s  ≤  n^t ·c(n + m)^ 5t. Assuming the generalized Riemann hypothesis, s  ≤  n^t ((n+m) ln(n+m))^2t. This bound matches (up to constants in the exponent) the non-constructive bound in the conference article of Kuperberg, Lovett and Peled <cit.>. This bound is not near-optimal (i.e., (mn/t)^O(t)) for all parameters, but it is in many regimes. (For example, it is near-optimal if t ≤minn,m, or t ≤ m^0.99, or n ≥ m^0.01. Also, if t = Ω(m), then the trivial bound of s ≤ n^m is near-optimal.) AG presents an improved construction that is near-optimal for all parameters, matching the non-constructive results of the journal article <cit.>. The prime. The first step of the proof is to find a prime p satisfying p ≡ 1 n and p > m. We will use the following result. Linnik There exist universal constants L, c_L > 0 such that the following is true. 
For any integer n ≥ 2, and any positive integer a coprime with n, there exists a prime p satisfying (i) p ≡ a n, and (ii) p ≤ c_L · n^L. The constant L is known as Linnik's constant. Although Linnik in 1944 did not provide an explicit value for L, the most recent developments. have shown that L ≤ 5.5 in 1992 <cit.>, L < 5.2 in 2011 <cit.>, and L ≤ 5 in <cit.>. Assuming the generalized Riemann hypothesis <cit.>, the conclusion (ii) can be strengthened to p ≤ (n ln n)^2. Linnik For any constants L, c_L > 0 satisfying Linnik's theorem, the following is true. For any integer n ≥ 2 and real m ≥ 2, there exists a prime p satisfying (i) p ≡ 1 n, (ii) p ≤ c_L (n+m)^L, and (iii) m < p. Let η = n ·m / n. Note that m ≤η < m + n. Let p be the prime obtained by Linnik with η instead of n, and with a=1 (which is trivially coprime with n). The theorem ensures that η p - 1; since n η, we have n p - 1, which establishes (i). Since η p - 1 and p-1>0, we must have m ≤η < p, which establishes (iii). Lastly, we have p ≤ c_L ·η^L < c_L · (m + n)^L, which establishes (ii). A prime p satisfying the conditions of Linnik can be found by exhaustive search. Simply test the primality of all integers p satisfying p ≡ 1 η and p ≤ c_L ·η^L, of which there are at most η^L-1. Since the primality of x can be tested in O(log^7 x) time, the overall runtime is O(η^L-1log^7 η) = Õ((n+m)^4). The codes. The next step of the proof is to find codes of appropriate parameters that can be used with BuildOAWorks. Let q equal the prime p chosen above. We will use Reed-Solomon codes in _q^m. The following theorem states their basic properties, with apparently excessive detail, in order to draw a parallel with Goppa below. RS Let m ≤ q. There exists a sequence of linear codes C_0, …, C_m-1⊆_q^m, where C_a has parameters [m, k_a, d_a]_q, such that the following statements hold. C_a ⊊C_a+1 ∀  0 ≤a < m-1 RSDist d_a = m-a ∀  0 ≤a < m RSDim k_a = a+1 ∀  0 ≤a < m RSDual k_a ^ ⊥+ d_a ^ ⊥= m + 1   ∀  0 ≤a < m Here k_a^⊥ and d_a ^ ⊥ respectively denote the dimension and distance of C_a^⊥, the dual of C_a. Proofs of the claims in this theorem may be found in standard references, e.g., <cit.> <cit.> <cit.>. We will apply BuildOAWorks to the codes C_t-1⊊ C_t. It follows from (<ref>) that m+1  =  k_t-1^⊥+d_t-1^⊥ =  (m-k_t-1) + d_t-1^⊥, and therefore d_t-1^⊥ = k_t-1+1 = t+1. Thus BuildOAWorks yields an [s,m,n,t] orthogonal array with RSSize s  =  n^m-d_t p^k_t-1 =  n^t p^t  ≤  n^t(c_L (n+m)^5)^t. This proves RSOA. § CONSTRUCTION USING ALGEBRAIC GEOMETRY CODES AG In this section we replace the Reed-Solomon codes with algebraic geometry codes, which gives a near-optimal orthogonal array for all parameters. This matches the non-constructive bound shown in the journal article <cit.>, and proves Main. AGOA There exists a universal constant c>0 such that the following is true. For any integers n , m , t ≥ 2 with t ≤ m, there exists an [s,m,n,t] orthogonal array with s  ≤  n^5t(c(n+11+m/t)^5)^6t. Moreover, this array can be constructed by a deterministic algorithm in (sm) time. To motivate the proof, let us reflect on the construction of the previous section. A standard bound on all [m,k,d] linear codes is the Singleton bound, which states that k + d ≤ m + 1; if equality holds it is called an MDS code. In RS, to apply BuildOAWorks we needed d_t-1^⊥≥ t+1. Furthermore, to obtain an exponent of O(t) in (<ref>), we wanted d_t ≥ m - O(t) and k_t-1=O(t). Since Reed-Solomon codes and their duals are MDS codes, we have exactly d_t-1^⊥=t+1, d_t = m-t and k_t-1=t. 
This yields an orthogonal array with s = n^t p^t rows, with the main shortcoming being the undesirably large field size of p = (n+m). The construction of this section attains an improved bound by reducing the field size to (n+m/t). We cannot hope to use MDS codes anymore, since it is believed that they do not exist when the field size is less than m-1. Instead, we will use codes that only approximately satisfy the Singleton bound, but do so over a much smaller field. Algebraic geometry codes (AG codes) provide a tradeoff suitable for our purposes. Algebraic geometry codes. These are a general class of codes that involve evaluations of rational functions over algebraic curves <cit.>. Reed-Solomon codes are a simple special case using evaluations of polynomials. We begin with a detailed statement of the codes that can be constructed from a general curve. Let q be a prime power. Let X be a curve over _q (see <cit.>). Let g be its genus (see <cit.>). Let N be its number of rational points (see <cit.>, <cit.>, or <cit.>). Goppa Let m < N. There exists a sequence of linear codes C_0, …, C_m-1⊆_q^m, where C_a has parameters [m, k_a, d_a]_q, such that the following statements hold. [ C_a ⊆ C_a+1 ∀  0 ≤ a < m-1; <cit.> ] GoppaDist[ d_a ≥ m-a ∀  0 ≤ a < m; <cit.> ] GoppaDim[ k_a ≥ a-g+1 ∀  0 ≤ a < m,; and equality occurs for a ≥ 2g-1; <cit.> ] GoppaDual[ k_a ^ ⊥ + d_a ^ ⊥≥ m -g + 1   ∀  0 ≤ a < m; <cit.> ] Naively one might imagine that C_a ⊊ C_a+1 for all a. However that is not necessarily the case since (<ref>) does not exactly specify the dimension: for small a it only provides a lower bound. However, this will not be problematic for our purposes. We will only consider C_a for a ≥ 2g-1, in which case (<ref>) determines the dimension exactly. Clearly, for a curve of genus 0, each code C_a must be an MDS code, since adding (<ref>) and (<ref>) shows that the Singleton bound is tight. More generally, the AG code construction is most efficient when it is based on a curve for which g/N is as small as possible. However this ratio cannot be arbitrarily small. It is known[This follows from the Drinfeld-Vlăduţ theorem, but is weaker since we do not need exact constants.] that g/N ≥ c/√(q) for some constant c>0 and sufficiently large g. In fact, this bound is asymptotically tight. Various explicit curves asymptotically matching this bound have been discovered by Tsfasman, Vlăduţ and Zink <cit.>, Garcia and Stichtenoth <cit.>, and others. Using such curves, and letting m = Ω(N), one may see from (<ref>) and (<ref>) that the Singleton bound nearly holds, but there is a “defect” of g=O(m/√(q)). The rough idea of our analysis is to choose parameters such that √(q) = Θ(m/t), so that the defect is only O(t), which will be acceptable for our purposes. There is a slight complication. The use of algebraic curves in coding theory primarily concerns limiting behaviour of the rate/distance tradeoff as the code length m tends to infinity. Since we wish to construct orthogonal arrays for all parameters m,n,t, we must ensure that curves exist with values of N that are sufficiently dense. It is not necessarily true that all curve families that have previously been used with AG codes will be suitable for our purposes. Fortunately, the modular curves discussed in the following theorem are suitably dense. KTV Let p ≠ℓ be distinct primes with p ≠ 11. Then the classical modular curve X_0(11ℓ) over _p^2 has at least N= (p-1)(ℓ+1) rational points and has genus g = ℓ. 
Furthermore, the generator matrix of the codes made from such curves can be constructed in time (m). We remark that the celebrated work of <cit.> also used the classical modular curve X_0(·) over _p^2, but they did not require the genus to be a multiple of 11. Using AG codes for orthogonal arrays. Next we explain how the AG codes can be used to construct orthogonal arrays. Let p be the smallest prime satisfying p ≡ 1 n and p ≥ 11 + m/t. Clearly p ≠ 11. By Linnik's theorem (Linnik), p ≤ c_L (n+11+m/t)^5. Let q = p^2 and note that q ≡ 1 n. We choose ℓ to be the smallest prime such that ℓ≥ t; by Bertrand's postulate ℓ≤ 2t. By KTV, the classical modular curve X_0(11ℓ) over _q satisfies ClassicalModular N = (p-1)(ℓ+1) >  m and g = ℓ≤ 2t. Given this curve, Goppa guarantees existence of a sequence of codes with various parameters. We will only use the codes C_u and C_u+1, where UDef u  =  2ℓ - 1 + t  =  2g -1 + t. The following claim performs some simple calculations in preparation for using BuildOAWorks. * C_u ⊊ C_u+1. * d_u^⊥≥ t+1. * k_u ≤ 3t. * d_u+1≥ m - 5t. * By (<ref>) we have u > 2g-1, so (<ref>) holds with equality for a ∈u,u+1, and therefore k_u < k_u+1. * d_u^⊥≥ m - k_u^⊥ - g + 1 = k_u - g + 1 ≥ u - 2g + 2 = t+1, by (<ref>), (<ref>), and (<ref>). * By (<ref>) we have u > 2g-1, so (<ref>) holds with equality for a=u. Thus k_u  =  u-g+1  =  (2g-1+t)-g+1  ≤ 3t, by (<ref>), (<ref>), and (<ref>). * d_u+1≥ m-u-1 = m-(2g-1+t)-1 ≥ m-5t, by (<ref>), (<ref>), and (<ref>). We apply BuildOAWorks to C_u and C_u+1 with k ≤ 3t and d_2 ≥ m-5t. Thus, there is an [s,m,n,t] orthogonal array with s  ≤  n^5t q^3t ≤  n^5t(c(n+11+m/t)^5)^6t. Regarding the algorithmic efficiency, KTV above states that the generator matrix for C_u and C_u+1 can be constructed in time (m). Given those matrices, a vector v ∈ C_u+1∖ C_u (as required by BuildOAWorks) can be found in (sm) time. Since BuildOA2 also runs in (sm), it follows that the construction of BuildOAWorks runs in (sm). This concludes the proof of AGOA. § IMPLEMENTING A T HASH FUNCTION Hash Although orthogonal arrays and t hash functions are mathematically equivalent (see Equiv), a key difference is the efficiency of their implementations. A hash function needs to be represented in little space, and the algorithms for constructing and evaluating it should be fast. In hash_alg, we present pseudocode for a t hash function based on the orthogonal array construction of Sections <ref> and <ref>. We assume that (S) returns a uniformly random element of the finite set S in O(1) time. BasicHash In the pseudocode of hash_alg: * has expected runtime O(t+(n+m)^) for every >0. * has runtime O(t). * The space for a object is O(t log(n+m)) bits. Assuming the generalized Riemann hypothesis, the time for is O(t + log(n+m)). The proof involves the following notation, for x ∈, η, a ∈, η≥ 2, S_x,η,a  =  i ∈i ≡ a η , 1 < i ≤ x xηa  = p ∈ S_x,η,ap is prime as well as the Euler totient function ϕ(η), which counts the number of positive integers up to η that are relatively prime to η. Throughout this section, we assume that a and η are coprime. It is not difficult to see that this pseudocode implements a t hash function, since it implements the construction of Sections <ref> and <ref>. We now summarize the main ideas. The key task of the is to find a prime p satisfying p ≡ 1 n and η+1 ≤ p ≤η^ν, for some constant ν defined in q_linnik. The runtime of that step is discussed extensively below. 
It then randomly generates the coefficients of a uniformly random degree-(t-1) polynomial, which is analogous to uniformly selecting the function h ∈_q in the proof of BuildOAWorks. (Here q=p.) The “bad vector” b is defined to be the evaluations of the degree-t monomial, i.e., b(x) = x^t. This is a codeword in C_t but not in C_t-1, and so it can be used as in the proof of RSOA. The function calculates h(x), and checks whether it equals the forbidden value b(x). If so, it replaces it with a random value and caches that in the dictionary D. Otherwise it returns ϕ_b(x)(h(x)). We claim that there are at most t inputs x ∈ [m] for which ∑_i=0^t-1 a_i x^i = x^t (i.e., h(x)=b(x)). This follows since the polynomial h(x) - b(x) is of degree t and hence has at most t roots. This implies that the dictionary D will have size at most t. Each key in the dictionary is in [m] and each value is in [n], so the space required for D is O(t log(n+m)) bits. Since p ≤ (n+m)^ν, and each a_i ∈_p, the space required for all other parameters of the object is O(t log(n+m)) bits. can clearly be implemented in O(t) time, even if the dictionary D is implemented as an unsorted array. The main challenge of the analysis is the time required to find the prime p. In RS the time to find the prime p was negligible compared to the time to build the orthogonal array, so a (deterministic) algorithm with runtime O((n+m)^4) was sufficient. To obtain an improved runtime for , we will use random sampling. q_linnik below shows that there exists a constant ν such that, assuming x ≥η^ν, we have π(x; η, a)  ≥ c_ x/ϕ(η) ln(x) η^ ≥ c_ x/η^1+ln x, using the trivial bound ϕ(η) ≤η. Let us now fix x = η^ν and a=1. The algorithm repeatedly samples integers at random from the set S_x,η,a, testing each for primality. The expected number of iterations until finding a prime is NumItersS_x,η,a/xηa ≤ x/η/c_ x/(η^1+ln x) = η^ln x/c_, since x ≥η. Each primality test requires O((x)) time. Using that η≤ (n+m), we have shown that the expected time to find the prime p is O((n+m)^) for every ϵ > 0. Assuming the generalized Riemann hypothesis, IK yields xηa ≥ x/2 ηln x, so the expected number of iterations until finding the prime p is S_x,η,a/xηa ≤ 2x/η/x/ηln x =  2 ln(x), which is O(log(n+m)) since x=η^ν. The next theorem follows from known results in the analytic number theory literature, as we explain below. q_linnik There exists a universal constant ν≥ 1 (independent of η,) and a constant c_ > 0 such that, for any > 0, any x ≥η^ν, xηa ≥  c_x/ϕ(η) ln x·1/η ^ . The distribution of the primes in an arithmetic progression is closely tied to the well-studied L-functions of characters mod η. As with many results in analytic number theory, there are different cases depending on whether the L-functions have so-called “exceptional zeros”. Whether or not these zeros exist is unknown, and relates to the generalized Riemann hypothesis. We will use the following result, which is a corollary of <cit.>. IK There exist universal constants ν, c_1, c_2, c_3 (independent of x, η, a) such that, if x ≥η^ν and η≥ c_3, then the following holds. * If there exists a real character χη whose L-function has a real zero β satisfying 1 - β≤ c_1 / 2 logη then xηa ≥  c_2 ·x/ϕ(η)· (1-β). * If there exists no such character χ, then xηa ≥ 1/2·x/ϕ(η)ln x. The generalized Riemann hypothesis implies that the second case holds. The results in <cit.> are stated in terms of the von Mangoldt function Λ and its sum ψ: Λ(n)  = ln p if n=p^k for a prime p and integer k ≥ 1 0 otherwise. 
ψ(x;η,a)  = ∑_n ≤ x n ≡ a ηΛ(n). IK follows from those results using the simple inequality ψ(x;η,a) ≤xηa·ln(x). Let us now return to the proof of q_linnik. If the second case of IK applies, then q_linnik is immediate. If the first case of IK applies, then we must ensure that the exceptional zero is significantly less than 1. The following is a classic result of Siegel <cit.> <cit.>. Siegel For every > 0 there exists a constant c_ such that, if χ is a real characterη with L(s , χ) having a real zero β then β <  1 - c_η ^ - . Thus, in the first case of IK, it holds that 1-β≥ c_η^-, which completes the proof of q_linnik. The numerical value of constants ν, c_L, L, c_1, c_2, c_3 satisfying the above properties can be explicitly computed “given the time and will”; see <cit.>. Such constants are called “effectively computable” in the number theory literature. In contrast, the value of c_ is “ineffective” for < 1/2 <cit.>, meaning that the proof of the result does not yield a way to compute the constant. However, note that implementing Algorithms <ref> and <ref> only requires knowing the value of the constants ν, c_L, L. The constant c_ only appears in the analysis of BasicHash. Specifically, from (<ref>), it appears inside the O(.) bound on the runtime of . § CONSTRUCTION USING RANDOM LINEAR CODES GV Sections <ref> and <ref> use deterministic code constructions. In this section we show that a random construction of linear codes can also be used to prove an non-explicit form of Main, using an efficient, randomized algorithm. MainOptimal There exists a universal constant c > 0 such that the following is true. For any m , n , t ∈, there exists a prime number p, a linear code C ⊆_p^m and a function b [m] → _ p such that * p ≡ 1 n and p ≤ c_L (n+(me/t)^3)^L. * the dimension of C is at most 2t. * the distance of C^⊥ is at least t+1. * Δ(b,v) ≥ m-3t  ∀ v ∈ C. Moreover, there exists a randomized algorithm that outputs p, C and b satisfying these conditions and has expected runtime (mn/t)^O(t). The code C and vector b are then provided as input to BuildOA2. Applying BuildOAWorks, we obtain an [s,m,n,t] orthogonal array with s ≤ n^3t p^2t = (mn/t)^O(t). The expected runtime is (mn/t)^O(t). The proof of MainOptimal requires the following lemma. Here, the matrix M is simply the generator matrix for the dual of a code meeting the Gilbert–Varshamov bound. A non-algorithmic form of this lemma may be found in <cit.>. AlgGV Let m, t ∈ and let p be a prime power. Assume that GVCondition∑_i=1^t mi (p-1)^i  ≤  p^ℓ/4. Then there exists a randomized algorithm that, with probability at least 3/4, outputs an ℓ× m matrix M such that any t columns of M are linearly independent. The proof is classical, but for completeness we include it below. MainOptimal Define ℓ = 2t and s = 3t. Let p be a prime satisfying both p ≡ 1 n and me/t ^ 3 < p. By Linnik, there exists such a p with BoundOnP p  ≤  c_L (n + (me/t)^3 )^L. We can find p deterministically with runtime (cnm/t)^c for some constant c. Next we claim that (<ref>) is satisfied. Using t ≤ (m-1)/2 and p > 2, the LHS is at most t mt p^t. Dividing both sides by p^ℓ, we get t mt p^t-ℓ  ≤  t (m e / t)^t p^-t (using ℓ = 2t)  <  t (m e/t)^-2t (using the lower bound on p)  <  t e^-4t ≤  1/4e, using that m ≥ e · t. Since this is less than 1/4, (<ref>) is satisfied. We will repeatedly use AlgGV to generate a matrix M until all subsets of t columns are linearly independent. This takes time O(mt (mt)^c) for some constant c. 
Let C be the list of vectors in _p^m obtained by taking all linear combinations of the rows of M. Then C is a linear code of dimension at most ℓ and dual distance at least t+1; see, e.g., <cit.>. The time to construct C is O(C m ℓ). Lastly, we will repeatedly generate b ∈_p^m uniformly at random until Δ(b,v) ≥ m-s  ∀ v ∈ C. For any I ∈[m]s, let E_I be the event that there exists v for which v_i=b_i  ∀ i ∈ I. We will show that _I E_I < 1. Clearly _I ≤ C· p^-s ≤  p^ℓ-s =  p^-t. Thus, by a union bound, _I E_I  ≤ ms p^-t  ≤  (m e / s)^s p^-t  =  3^-3t (m e / t)^3t p^-t  <  3^-3t <  1, by the definition s = 3t and the lower bound p > (m e / t)^3. Thus, the expected number of trials until generating b is a constant. Each trial can be executed in time C m. Overall, the expected runtime is (mn/t)^O(t). AlgGV Let M be a uniformly random matrix of size ℓ× m. Let V ⊆_p^m be the non-zero vectors of support size at most t. For v ∈ V, let E_v be the event that M v=0. Then E_v = p^-ℓ, so _v ∈ V E_v  ≤ V p^-ℓ = ∑_i=1^t mi (p-1)^i p^-ℓ. By the lemma's hypothesis, this is at most 1/4. If this event does not occur then every linear dependence among the columns of M involves more than t columns. § OPEN QUESTIONS There are several open questions related to this work. * The orthogonal array constructed by Main will have duplicate rows, in general. (Similarly, the hash family constructed by MainHash will, in general, be a multiset.) Can it be modified to eliminate the duplicate rows? * In BasicHash, the expected runtime of is O(t+(n+m)^)  ∀>0. Can it be improved to O(t)+(n+m), without assuming the generalized Riemann hypothesis? * The hash function construction of Hash was based on the Reed-Solomon construction of RS. Can the construction of AG be used instead? Although there are algorithms to construct the generator matrix in time (m) (see KTV), we would like to evaluate the hash function in time (t) while using only space O(t log(n+m)). At present we are unaware of an AG code construction that can be used for this purpose. * Although there are explicitly known values of L for which Linnik's theorem (Linnik) holds, at present it seems that there is no explicitly known pair (L,c_L) such that the theorem holds. § ACKNOWLEDGEMENTS We thank Jan Vondrak for suggesting this problem to us and for some initial discussions. We thank Lior Silberman and Greg Martin for very helpful discussions on analytic number theory. plain
http://arxiv.org/abs/2405.10016v1
20240516115917
Toward double copy on arbitrary backgrounds
[ "Anton Ilderton", "William Lindved" ]
hep-th
[ "hep-th" ]
http://arxiv.org/abs/2405.09834v1
20240516061652
Topological Floquet engineering of a three-band optical lattice with dual-mode resonant driving
[ "Dalmin Bae", "Junyoung Park", "Myeonghyeon Kim", "Haneul Kwak", "Junhwan Kwon", "Yong-il Shin" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas" ]
Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea yishin@snu.ac.kr Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea Institute of Applied Physics, Seoul National University, Seoul 08826, Korea We present a Floquet framework for controlling topological features of a one-dimensional optical lattice system with dual-mode resonant driving, in which both the amplitude and phase of the lattice potential are modulated simultaneously. We investigate a three-band model consisting of the three lowest orbitals and elucidate the formation of a cross-linked two-leg ladder through an indirect interband coupling via an off-resonant band. We numerically demonstrate the emergence of topologically nontrivial bands within the driven system, and a topological charge pumping phenomenon with cyclic parameter changes in the dual-mode resonant driving. Finally, we show that the band topology in the driven three-band system is protected by parity-time reversal symmetry. Topological Floquet engineering of a three-band optical lattice with dual-mode resonant driving Y. Shin May 20, 2024 =============================================================================================== § INTRODUCTION Ultracold atoms in optical lattices provide a flexible platform to explore topological insulators and associated phenomena, facilitated by the ability to adjust the lattice configuration experimentally <cit.>. Periodic time-dependent modulation techniques, also known as Floquet engineering, have been established as an effective method to examine topological bands within these systems. Tailored modulations of the lattice have successfully produced nontrivial bands with novel topological characteristics <cit.>, which have led to the observation of many interesting phenomena, including topological charge pumping <cit.>. Floquet band engineering has thus become a prominent path in the field of optical lattice research. Researchers have extensively studied topological bands in one-dimensional (1D) optical lattices to gain essential insight into topological matter. As a minimal representation for 1D topological insulators, in particular, a cross-linked two-leg ladder system or similar models have been investigated <cit.>. As illustrated in Fig. <ref>(a), the ladder system is composed of two lines of lattice sites called legs, and the legs are interconnected both vertically and diagonally, representing the hopping between sites. The diagonal cross-links give rise to topological features in the system. In experimental setups, the legs can be assigned to different spin states of atoms or different orbitals in the lattice, with the cross-linking provided by spin-orbit coupling or band-mixing processes, respectively. In recent experiments, a cross-linked two-leg ladder system employing s and p orbitals was implemented successfully using a two-tone driving scheme <cit.>, where the optical lattices were shaken resonantly with two frequencies, and the cross links were produced by two-photon resonant interband coupling <cit.>. 
Furthermore, the ability to dynamically adjust the linking properties enabled the demonstration of topological charge pumping <cit.>. In this work, we propose an alternative Floquet approach to construct a tunable cross-linked two-leg ladder system. Our approach features creating the ladder with s and d orbitals, which share the same parity, and using both the amplitude and phase modulations of the lattice potential at an identical frequency. When the modulation frequency is set close to the energy gap between the s and d bands, the amplitude modulation (AM) generates the on-site resonant coupling between the s and d orbitals, thus forming the ladder rungs [Fig. <ref>(b)] <cit.>. Meanwhile, the phase modulation (PM), which triggers lattice shaking, does not generate a direct s-d interorbital coupling owing to parity conservation; however, it establishes diagonal connections through three-photon resonant transitions via p orbital [Fig. <ref>(c)]. This three-photon process represents an indirect resonant interband coupling that employs an off-resonant third band as an intermediate state. To the best of our knowledge, such indirect resonant coupling has not been discussed as an effective interband coupling mechanism in the literature on Floquet band engineering. Owing to the dual-mode driving employing both AM and PM, a cross-linked ladder is formed, comprising two orbitals with identical parity, which is not achievable with the two-tone driving method used in previous studies. Using a three-band model, we numerically demonstrate the topological properties of the 1D optical lattice system subjected to dual-mode resonant driving. We comprehensively analyzed the resultant Floquet bands under a range of driving parameter conditions, including the relative intensity and phase of AM and PM. Our analysis shows the emergence of a topologically nontrivial phase under certain driving conditions, as evidenced by the entanglement entropy and spectrum <cit.>, along with the observation of a topological phase transition. Through numerical simulations, we illustrate a topological charge pumping effect expected during slow cyclic changes in driving parameters <cit.>. Lastly, we elucidate that the topological phases of the Floquet bands in the three-band model are protected by parity-time reversal (PT) symmetry. The remainder of the paper is organized as follows. Sec. <ref> introduces a three-band model of the 1D optical lattice system under dual-mode resonant driving. We further derive an effective two-band description of the system by adiabatic elimination of the off-resonant p band <cit.>, which provides insight into the indirect resonant interband coupling and the topological structure of the driven system. Sec. <ref> presents our numerical results of the quasi-energy and entanglement spectrum of the driven lattice system, and also illustrates the topological charge pumping effect with cyclic parameter changes in the dual-mode resonant driving. Sec. <ref> demonstrates the role of PT symmetry in protecting the topology of the Floquet bands. Finally, Section <ref> provides a summary and some concluding remarks. § DUAL-MODE RESONANT DRIVING OF OPTICAL LATTICE §.§ Three-band model Let us consider a spinless fermionic atom in the driven 1D optical lattice potential V_ lat(x,t), which is given by V_ lat(x,t) = V_L(t) sin^2(π/ax - ϕ(t)), where V_L(t) and ϕ(t) are the amplitude and phase of the lattice potential, respectively, and a is the lattice constant. 
V_L and ϕ are determined by the parameters of the laser beams involved, such as intensity, polarization, and phase, and can be dynamically controlled for Floquet engineering. The two fundamental modulation approaches are periodically modulating V_L and ϕ in time, which we refer to as AM and PM, respectively [Figs. <ref>(b) and <ref>(c)]. As the position of the lattice site is determined by the phase ϕ(t), PM induces lattice shaking. When viewed from the reference frame comoving with the driven optical lattice, the system's Hamiltonian is described as follows <cit.>: H(x,t) = H_0 + λ(t) V_ stat(x) - F(t) x H_0 = p^2/2m + V_ stat(x), where p is the kinetic momentum of the atom, m denotes its mass, V_ stat(x)=V_0sin^2(π/ax) is the stationary lattice potential, λ(t) denotes the relative variation of lattice amplitude such that V_L(t)=[1+λ(t)]V_0, and F(t)=-m(a/πϕ̈(t)) represents the inertial force resulting from PM. In the tight-binding approximation, the Hamiltonian can be expressed in terms of Wannier states |j,α⟩ localized on lattice site j in the α band, given by <cit.> H(x,t) = ∑_jαϵ_αĉ^†_jαĉ_jα -∑_jlαt^(l)_αe^-ilθ(t)ĉ^†_jαĉ_j+l α +∑_jlαβ(λ(t)u^(l)_αβ-F(t)η^(l)_αβ)e^-ilθ(t)ĉ^†_jαĉ_j+l β, where ĉ^†_jα (ĉ_jα) is the creation (annihilation) operator for the atom in the Wannier state |j,α⟩, ϵ_α = ⟨ j,α |H_0| j,α⟩ represents the on-site energy, and t^(l)_α = -⟨ j,α |H_0| j+l,α⟩ denotes the hopping amplitude between the Wannier states in the α band separated by l lattice sites. In addition, u^(l)_αβ = ⟨ j,α|V_ stat(x)|j+l,β⟩ and η^(l)_αβ = ⟨ j,α|x|j+l,β⟩ correspond to the lattice potential and lattice displacement matrix elements for interorbital transitions separated by l lattice sites, respectively. Lastly, θ(t)=-a/ħ∫_0^t dt' F(t') represents the time-dependent Peierls phase <cit.>. By Fourier transforming this tight-binding model Hamiltonian, we obtain the Bloch Hamiltonian for quasimomentum q in the presence of AM and PM as follows: H(q,t) = ∑_α(ϵ_α-∑_l>02t^(l)_αcos[l(q-θ(t))])ĉ^†_qαĉ_qα +∑_lαβ(λ(t)u^(l)_αβ-F(t)η^(l)_αβ)e^il(q-θ(t))ĉ^†_qαĉ_qβ. Here, q is expressed in units of 1/a. In this work, we consider a model system that includes only the three lowest bands, indexed by α∈{s, p, d}. Considering the lowest-order effects of lattice modulation, the Bloch Hamiltonian of the three-band system is given by H(q,t) = [ ϵ_s'(q,t) -F(t)η^(0)_sp λ(t)u^(0)_sd; -F(t)η^(0)_ps ϵ_p'(q,t) -F(t)η^(0)_pd; λ(t)u^(0)_ds -F(t)η^(0)_dp ϵ_d'(q,t) ] with ϵ_α'(q,t) = ϵ_α-2t^(1)_αcos(q-θ(t))+λ(t)u^(0)_αα. We focus on a case where the system is subjected to dual-mode resonant driving with λ(t)   = λ_0 cos(), ϕ(t)   = ϕ_0 cos(+φ), and the driving frequency ω≈ω_sd = (ϵ_d - ϵ_s)/ħ. Here, λ_0 and ϕ_0 are dimensionless parameters that represent the strengths of AM and PM, respectively, and φ is the relative phase of the two modulation modes. §.§ Effective two-band model When the three-band lattice system is driven with a frequency ω≈ω_sd, the couplings between the p orbital and the others become off-resonant, resulting in the p band being energetically isolated. We can project the three-band system into an effective two-band system using an adiabatic elimination technique <cit.> owing to the minimal involvement of the p band in band mixing. First, let us take a proper rotating frame by applying a unitary transformation of U_R(t)=exp(+i R̂ t) to the Bloch Hamiltonian H(q,t) in Eq. (<ref>), where R̂ = [ -ω + E_0/ħ 0 0; 0 E_0/ħ 0; 0 0 E_0/ħ ] with E_0 = (ϵ_d + ϵ_s + ħω)/2 representing the zero energy point. 
In the rotating frame, the modified Hamiltonian H'(q,t) is given by H'(q,t) = [ ħδ_s/2 -F(t)η^(0)_spe^i λ(t)u^(0)_sde^i; -F(t)η^(0)_pse^-i -ħΔ_p -F(t)η^(0)_pd; λ(t)u^(0)_dse^-i -F(t)η^(0)_dp -ħδ_d/2 ] with ħδ_s/2 = ϵ_s' + ħω - E_0, ħδ_d/2 = E_0 - ϵ_d', and ħΔ_p = E_0 - ϵ_p'. Note that |Δ_p| ≫ |δ_s|,|δ_d| when the driving frequency is set to ω≈ω_sd, providing a suitable condition for adiabatic elimination of the p band. The energy level structure is depicted in Fig. <ref>(a). It can be viewed as a characteristic V-type system in which the two adjacent upper states are coupled to each other by λ(t) and also to a lower level simultaneously by F(t). For comparison, the Floquet energy diagram of the driven three-band system is illustrated in Fig. <ref>(b). Simplifying the notation of H'(q,t) as H'(q,t) = [ H_00 H_01 H_02; H_10 H_11 H_12; H_20 H_21 H_22 ], the equation of motion for the system state |ψ⟩ = (ρ_s, ρ_p, ρ_d)^T is written by H'(q,t)|ψ⟩ = [ H_00ρ_s + H_01ρ_p + H_02ρ_d; H_10ρ_s + H_11ρ_p + H_12ρ_d; H_20ρ_s + H_21ρ_p + H_22ρ_d ] = iħ[ ρ̇_s; ρ̇_p; ρ̇_d ]. Claiming ρ̇_p=0 owing to the p band being negligibly populated, we obtain ρ_p = -(H_10ρ_s+H_12ρ_d)/H_11. Injecting this relation back into Eq. (<ref>) yields the effective Hamiltonian as H_eff(q,t) = [ H_00-H_01H_10/H_11 H_02-H_01H_12/H_11; H_20-H_21H_10/H_11 H_22-H_21H_12/H_11 ]. The additional terms in the diagonal and the off-diagonal element are proportional to F^2/Δ_p, which represent additive band energy shifts and sd interband couplings, respectively, arising from the off-resonant couplings to the p band. In terms of the Pauli matrices σ={σ_x,σ_y,σ_z}, we obtain the modified effective Hamiltonian as H_ eff' (q,t) = [ (ħδ/2 + 2t_-cos(q -θ_0sin(+φ))) - (λ'_-cos() + F'_-^2cos(2+2φ) + F'_-^2) ] σ_z + (λ'cos(ω t) + F'^2cos(2ω t + 2φ) + F'^2) cos()σ_x - (λ'cos(ω t) + F'^2cos(2ω t + 2φ) + F'^2) sin()σ_y with δ = ω - ω_sd. The definitions of θ_0, λ'_(-), and F'_(-) are listed in Table <ref>. In the derivation of H_ eff', we ignore the trace part of the Hamiltonian, i.e., H_ eff'=H_ eff- tr(H_ eff)/2𝕀, which does not affect the topological properties of the system. Next, we derive the approximated time-independent Hamiltonian H_ eff(q) for H_ eff'(q,t) using the high-frequency expansion method <cit.>. When the Fourier series expansion of H_ eff'(q,t) is given by H_ eff'(q,t)=Σ_m H_m(q)e^imω t, the second-order approximation of H_ eff(q) is given by H_ eff(q) = H_0 + ∑_m>0[H_m, H_-m]/mħω. Neglecting the higher order terms <cit.>, we obtain H_ eff(q) = ( ħδ/2 +2t_-J_0(θ_0)cos(q) -F'_-^2)σ_z -( 6F'^2/ħωt_-J_1(θ_0)sin(q)sin(φ) -λ'/2)σ_x +( 6F'^2/ħωt_-J_1(θ_0)sin(q)cos(φ) )σ_y = [δ' + 2t'_-cos(q)]σ_z + t_vσ_x + 2t_dsin(q)[sin(φ)σ_x - cos(φ)σ_y], where δ' = ħδ/2 - F'_-^2, t'_- = t_-J_0(θ_0), t_v = λ'/2, and t_d = -3F'^2/ħωt_-J_1(θ_0). The final expression of H̃_ eff(q) in Eq. (<ref>) reveals the band topology of the driven lattice system. The terms with t_v and t_d correspond to the vertical and diagonal interleg links in the two-leg-ladder description [Fig. 1(a)]. Notably, t_v ∝λ_0 u_sd^(0) t_d ∝ϕ_0^3 η_sp^(0)η_pd^(0) (t_d^(1)-t_s^(1)), indicating that the vertical links are generated by the on-site one-photon interorbital transition |j,s⟩↔ |j,d⟩, induced by AM, while the diagonal links originate from the three-photon transitions involving site hopping, e.g., |j,s⟩↔ |j,p⟩↔ |j,d⟩↔ |j+1,d⟩, induced by PM. 
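As a quick numerical illustration, the final expression for H_eff(q) can be assembled directly and its bands and in-plane winding inspected at φ = π/2. The parameter values below are placeholders chosen with δ' = 0 and |t_v| < 2|t_d|, i.e. on one side of the critical ratio identified above; they are not the lattice values of Table <ref>.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_eff(q, delta_p, tp, t_v, t_d, phi):
    """[delta' + 2 t'_- cos q] sz + t_v sx + 2 t_d sin q [sin(phi) sx - cos(phi) sy]."""
    return ((delta_p + 2 * tp * np.cos(q)) * sz
            + t_v * sx
            + 2 * t_d * np.sin(q) * (np.sin(phi) * sx - np.cos(phi) * sy))

# Placeholder parameters with delta' = 0 and |t_v| < 2 |t_d|.
pars = dict(delta_p=0.0, tp=1.0, t_v=0.3, t_d=0.5, phi=np.pi / 2)

qs = np.linspace(-np.pi, np.pi, 401)
bands = np.array([np.linalg.eigvalsh(h_eff(q, **pars)) for q in qs])

# At phi = pi/2 the Bloch vector stays in the x-z plane, so its winding
# number over the Brillouin zone is well defined.
hx = pars["t_v"] + 2 * pars["t_d"] * np.sin(qs)
hz = pars["delta_p"] + 2 * pars["tp"] * np.cos(qs)
angles = np.unwrap(np.arctan2(hz, hx))
winding = (angles[-1] - angles[0]) / (2 * np.pi)
print("minimal gap:", (bands[:, 1] - bands[:, 0]).min(), " winding:", round(winding))
```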
The effective Hamiltonian H_ eff(q) exhibits chiral symmetry at φ=±π/2, as σ_y H_ eff(q) σ_y = -H_ eff(q); this means that the spin states of the bands are restricted to the xz plane, ensuring that the spin winding number across the q space is well-defined and topologically protected by symmetry. At δ'=0, a topologically critical point emerges when t_d=± t_v/2, rendering H_ eff(q=∓π/2)=0. Given the parameters of the optical lattice system at V_0=10 E_R (E_R is the lattice recoil energy), as detailed in Table <ref>, the ratio |t_v/t_d|=2 is achieved when ϕ_0^3/λ_0 = 0.009. This modulation condition is experimentally feasible, for example, with λ_0=0.1 and ϕ_0≈ 0.1, which corresponds to the lattice-shaking amplitude of 0.03 a. Thus far, we have demonstrated that a cross-linked ladder structure can be established in a three-band optical lattice by utilizing dual-mode resonant driving. Developing an effective two-band description, we have clarified the critical role of the off-resonant p band in Floquet engineering, which is essential for determining the topological characteristics of the driven lattice system. In the following section, we will confirm our theoretical findings through a direct numerical simulation of the three-band Hamiltonian H(q,t) in Eq. (<ref>). § FLOQUET STATE ANALYSIS §.§ Quasienergy spectrum We investigate the quasienergy spectrum of the driven three-band optical lattice system in accordance with Floquet theory <cit.>. We numerically calculate the time-evolution operator over one driving period T=2π/ω, defined as Û(t+T,t;q)   = 𝒯 exp[-i/ħ∫_t^t+T H(q,t') dt'] with 𝒯 being the time-ordering operator, and obtain the quasienergy spectrum ε_n(q) by directly diagonalizing Û(t+T,t;q). Here, n=0,1,2 is the Floquet band index and ε_n(q)∈ [-ħω/2,ħω/2) is independent of the choice of time t. In the calculation, we use the parameter values listed in Table <ref> and set the modulation frequency to ω=ω_sd. In Fig. <ref>(a), the quasienergy spectrum is presented for λ_0=0.05, ϕ_0=0.1, and φ=0. The two upper (n=1,2) Floquet bands demonstrate the avoided crossing of the bare s and d bands of the stationary lattice system under the resonant driving, while the lower (n=0) Floquet band is located apart from the upper bands, aligned with the off-resonant p band. In Figs. <ref>(b)–<ref>(d), we plot the fractional weights of the α=s,p,d orbitals in the Floquet Bloch states |ψ_n(q,t)⟩. The Floquet Bloch states are eigenstates of Û(t+T,t;q) such that Û(t+T,t;q)|ψ_n(q,t)⟩ = e^-iε_n(q) T/ħ|ψ_n(q,t)⟩. It is observed that the p orbital contribution is minimal in the upper Floquet bands, as expected from the off-resonance nature of the p band. This observation supports the validity of our use of adiabatic elimination in the previous section. §.§ Topological characteristics To examine the topological characteristics of the driven lattice system, we calculate the Zak phases of the Floquet bands <cit.>, which are defined over the Brillouin zone (BZ) as γ_n(t) = i ∫_BZ dq ⟨ψ_n(q,t) | ∂_q | ψ_n(q,t) ⟩. The numerical results of γ_n(t=0) for ϕ_0=0.1 are illustrated in Fig. <ref>, as a function of the driving parameters λ_0 and φ. It is noted that critical points are found at λ_0=0.082 and φ=±π/2, accompanied by discontinuous changes in γ_1 and γ_2 nearby. The effective two-band model in the previous section predicts the critical points at λ_0=0.071 for δ=0 and ϕ_0=0.1, which is in a good agreement with our numerical observations <cit.>. 
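For completeness, the two numerical ingredients used above, the one-period evolution operator of Eq. (<ref>) and the Zak phase of Eq. (<ref>), can be sketched generically for any Bloch Hamiltonian supplied as a callable. This is a schematic discretization (with ħ = 1 and an illustrative driving frequency), not the code used for the figures.

```python
import numpy as np

OMEGA = 2.0                       # illustrative driving frequency, hbar = 1

def expm_herm(H, dt):
    """exp(-i H dt) for a Hermitian matrix H via its eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def floquet_propagator(h_of_qt, q, T, n_steps=400):
    """Time-ordered product approximating U(T, 0; q)."""
    dt = T / n_steps
    dim = h_of_qt(q, 0.0).shape[0]
    U = np.eye(dim, dtype=complex)
    for k in range(n_steps):
        U = expm_herm(h_of_qt(q, (k + 0.5) * dt), dt) @ U   # later steps act on the left
    return U

def quasienergies(h_of_qt, q, omega):
    """Quasienergies folded into [-omega/2, omega/2) from the eigenphases of U."""
    T = 2 * np.pi / omega
    phases = np.angle(np.linalg.eigvals(floquet_propagator(h_of_qt, q, T)))
    eps = -phases / T
    return np.sort((eps + omega / 2) % omega - omega / 2)

def zak_phase(states):
    """Discretized Zak phase from Bloch eigenvectors sampled at q_k = -pi + 2 pi k / N;
    the loop is closed by the overlap between the last and the first state."""
    overlaps = [np.vdot(states[k], states[k + 1]) for k in range(len(states) - 1)]
    overlaps.append(np.vdot(states[-1], states[0]))
    return -np.angle(np.prod(overlaps))

# Toy usage with a T-periodic two-level H(q, t), just to exercise the routines.
def toy_h(q, t):
    return np.array([[np.cos(q), 0.1 * np.cos(OMEGA * t)],
                     [0.1 * np.cos(OMEGA * t), -np.cos(q)]], dtype=complex)

print(quasienergies(toy_h, q=0.3, omega=OMEGA))
```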
We note that when φ = ±π/2, the Zak phase takes only the values of zero or π, while the Zak phase continuously varies in the parameter space; this is consistent with the symmetry protection condition discussed in the previous section. Furthermore, we observe that γ_1+γ_2 =0 only for φ=±π/2, i.e., the Zak phase of the lowest (n=0) Floquet band is γ_0≠ 0 for φ≠±π/2 [Fig. <ref>(c)]; this is a characteristic of a three-band system. In Fig. <ref>(d), we show the time evolution of the Zak phases for φ=π/2, revealing that they show quantized values only at t=0 and T/2. For the effective two-band Floquet system, the chiral symmetry is expressed as σ_y H_ eff'(q,t+t_0) σ_y = -H_ eff'(q,-t+t_0) with a proper choice of time frame t_0 <cit.>, and we find that the symmetry condition is satisfied only with φ=±π/2 (mod 2π) at t_0=0 and T/2 (mod T), which is consistent with the times when the Zak phases are well quantized. As another topological characteristic of the system, we examine the entanglement entropy and spectrum <cit.>. For a 1D non-interacting fermionic system, the entanglement entropy S of the many-body ground state |Ψ⟩ is defined as the trace of the reduced density matrix of the system cut exactly in half, given by S = - Tr(ρ_ A logρ_ A), where ρ_ A = Tr_ B |Ψ⟩⟨Ψ|. Here, A and B represent two subsystems obtained by dividing the system in half. The entanglement entropy exhibits a sharp peak when the system undergoes a quantum phase transition <cit.>. Furthermore, the entanglement spectrum ξ, a set of eigenvalues of the reduced matrix, reveals the system's mid-gap states, of which the existence provides evidence of a system's nontrivial topological phase. This phenomenon is analogous to the bulk-edge correspondence observed in edge states <cit.>, and it holds even in the case of Floquet systems <cit.>. Figure <ref> presents our calculation results of the entanglement entropy and spectrum of non-interacting spinless fermions for our three-band system. The many-body ground state |Ψ⟩ is a uniformly filled topological Floquet band, and we choose the n=1 band in Fig. <ref>(a) as our reference state. When φ=π/2, the entanglement entropy exhibits a sharp peak at the critical point as λ_0 varies [Fig. <ref>(a)], indicating a topological phase transition <cit.>. In the entanglement spectrum, we also observe the presence of mid-gap states and their splitting into upper and lower states at the same critical point of λ_0 [Fig. <ref>(b)]. These results are consistent with the Zak phase in the parameter space [Fig. <ref>(b)]. Finally, we remark on the edge states in our system, which are another characteristic of the topological phase <cit.>. In our three-band system, the global bulk gap may not exist because both the s and d bands exhibit a similar curvature tendency, although varying in degree. The absence of the global bulk gap implies that symmetry-protected edge states may not manifest explicitly, which was the case in our numerical investigation. §.§ Topological charge pumping When the driving parameters {λ_0, φ} vary slowly enough compared to the timescale of the driving period T, the system can adiabatically follow the change in driving conditions. In other words, the long-term dynamics of the system is governed by the time-varying effective Hamiltonian, H_eff(q;t) = H_eff(q;{λ_0, φ}) <cit.>. 
Using this adiabatic following, topological charge pumping can be achieved in a driven lattice system by slowly varying the driving parameters around a topological singular point, as demonstrated in recent experiments <cit.>. Given its experimental relevance, we numerically investigate the topological charge pumping effect in the driven three-band system. A pumping protocol is considered, where the driving parameters slowly revolve around a singular point in the parameter space with the pumping cycle time T_p, i.e., λ_0(t)   = 0.1 - 0.025cos(2π t/T_p), φ(t)   = φ_0 + 0.5sin(2π t/T_p) with φ_0 = π/2. The system undergoes a 2π change in the Zak phase for each cycle, leading to a charge transport in which all atoms are shifted by one lattice site. Note that this phenomenon only occurs when the trajectory of the driving parameters encircles the singular point in the parameter space, regardless of the specific details of the pumping protocol used to modulate the driving parameters <cit.>; this is why this charge pumping phenomenon is a topological one. In the numerical simulation, the system is initially prepared in an insulating state of the Flquet band and the amount of pumped charge is calculated as C(t) = ∫_0^t dt'j(t'), where j(t) is the charge current given by j(t) = 1/2π∫_BZ⟨ψ(q,t)|v(q,t)|ψ(q,t)⟩ with velocity operator v(q,t) = ∂ H(q,t)/ ∂ (ħ q) <cit.>. The time evolution of the system state |ψ (q,t)⟩ is calculated directly from its time-dependent Shrödinger equation i ∂_t |ψ(q,t)⟩ = H(q,t)|ψ(q,t)⟩, including the cyclic modulations of the driving parameters. In Fig. <ref>(a), the pumped charge C(t) is displayed as a function of time for various pumping parameter conditions. We observe that when the change of driving parameters is slow enough, C(t) increases (decreases) by unity in every pumping cycle for the n=2 (n=1) Floquet band. The observed timescale for the adiabaticity of the charge pumping process is T_p ≈ 100T, attributed to the local gap between the n=1 and n=2 Floquet bands, estimated as ≈ 0.01ħω [Fig. <ref>(a)]. Furthermore, we confirm that if the trajectory of the driving parameters, such as the case of φ_0 = 0 in Eq. (<ref>), does not encircle any topological singular point in the parameter space, then the charge transport does not occur [Fig. <ref>(b)]. The middle inset of Fig. <ref>(a) shows the evolution of the entanglement spectrum of the driven lattice system during one pumping cycle, T_p. As expected, the mid-gap states propagate like edge modes in the bulk gap <cit.>. § SYMMETRY IN THREE-BAND MODEL As predicted in the effective two-band model discussed in Sec. <ref> and verified numerically in the preceding section, topological phases arise in the driven three-band system at φ = ±π/2. Given that φ = ±π/2 establishes the relationship H(x,t)=H(-x,-t) in Eq. (<ref>), we propose that PT symmetry 𝒫̂𝒯̂ (x,t) → (-x,-t) is the symmetry that protects the topological phases in this driven system. The topological phases protected by PT symmetry were recently discussed in <cit.> <cit.>. In this section, we discuss the symmetry protection of the three-band system. If the Floquet Hamiltonian, which is defined as H_F(q,t) = iħ/Tln[U(t+T,t;q)], exhibits PT symmetry, it should satisfy the relation of U_PT^†H_F(q,t+t_0)^∗ U_PT = H_F(q,-t+t_0), where U_PT is a unitary matrix defined as U_PT = [ 1 0 0; 0 -1 0; 0 0 1 ] for a non-interacting spinless fermionic system <cit.>. 
Here, t_0 is the preferred time frame for the Floquet Hamiltonian Ĥ_F(q,t) to exhibit PT symmetry and in our system, t_0=0 for φ=±π/2. We consider the situation at t=0 and omit the time notation in the following. On the orbital basis |α⟩, the Floquet state |ψ_n(q)⟩ is expressed as |ψ_n(q)⟩ = ∑_αρ_nα |α⟩ = ∑_α |ρ_nα|e^iΘ_nα|α⟩, where ρ_nα is a complex function defined on q, and Θ_nα is the argument of ρ_nα. Then, the PT symmetry condition of H_F(q) in Eq. (<ref>) requires U_PT|ψ_n(q)⟩^∗ = e^i ϑ_n |ψ_n(q)⟩, i.e., [ ρ_ns^∗; -ρ_np^∗; ρ_nd^∗ ] = e^i ϑ_n[ ρ_ns; ρ_np; ρ_nd ] with ϑ_n being a real function of q. This requirement can be encapsulated in two relations: (I)   2Θ_ns = 2Θ_nd         ( mod 2π) (II)   2Θ_np = 2Θ_ns + π   ( mod 2π). Here, we choose a gauge of |ψ_n(q)⟩ for Θ_np to be π/2 and then, under this gauge fixing, ρ_np is imaginary and ρ_ns and ρ_nd are real-valued. The constraints on |ψ_n(q)⟩ due to PT symmetry significantly affect the Zak phase of the Floquet band. Using Eq. (23), the Zak phase is expressed as γ_n = i ∫_BZ dq ⟨ψ_n(q) | ∂_q | ψ_n(q) ⟩ = - ∑_α∫_BZ |ρ_nα|^2 dΘ_nα. This expression shows that γ_n can be interpreted as twice the sum of the areas of the closed loops traced by ρ_nα on the complex plane. When PT symmetry is present, the enclosed area traced by ρ_nα becomes zero in general because ρ_np is confined to the imaginary axis and ρ_ns (ρ_nd) to the real axis. Thus, the topological phase of the Floquet band is trivial with γ_n=0. However, in a special situation where ρ_np becomes zero at q=q_0, the second relation in Eq. (25) is not necessarily required so that ρ_ns and ρ_nd can have complex values even with the fixed gauge of Θ_np=π/2; this means that as q passes through q_0, ρ_ns and ρ_nd can trace paths on the complex plane and return to the real axis. In the trace, the angle between ρ_ns and ρ_nd must be maintained because of the first relation in Eq. (25). Then, in the vicinity of q=q_0, Θ_ns and Θ_nd have identical variations of ΔΘ=0 or π (mod 2π), and it results in γ_n = - (|ρ_ns(q_0)|^2 + |ρ_nd(q_0)|^2) ΔΘ = 0 or π (mod 2π), where we use the normalization condition of |ψ_n(q_0)⟩. Consequently, the PT symmetry requires the quantization of the Zak phase, thus protecting the topological phases of the three-band system. § SUMMARY We introduced a Floquet framework for controlling the topological features of a 1D optical lattice system with dual-mode resonant driving. We investigated a three-band model for the three lowest orbitals, clarifying how a cross-linked ladder forms via indirect interband coupling mediated by an off-resonant band. We provided numerical evidence for the appearance of topologically nontrivial bands in the driven system in conjunction with a phenomenon of topological charge pumping due to cyclic changes in parameters within the dual-mode resonant driving. Furthermore, we examined the role of PT symmetry in protecting the band topology. The dual-mode resonant driving approach facilitates the hybridization of s and d orbitals with the same parity, which leads to the formation of topological bands that exhibit minimal or absent bulk gaps; this method might be used to explore the physics of topological semimetals <cit.>. Moreover, given the unique driving mechanism relative to previous studies on shaken lattices, our dual-mode approach may provide valuable insights into the reduction of heating effects in the Floquet engineering of optical lattices <cit.>. This work was supported by the National Research Foundation of Korea (Grants No. 
NRF-2023M3K5A1094811 and No. NRF-2023R1A2C3006565). PhysRevLett.111.185301 M. Aidelsburger, M. Atala, M. Lohse, J. T. Barreiro, B. Paredes, and I. Bloch, Realization of the hofstadter hamiltonian with ultracold atoms in optical lattices, Phys. Rev. Lett. 111, 185301 (2013). Jotzu2014 G. Jotzu, M. Messer, R. Desbuquois, M. Lebrat, T. Uehlinger, D. Greif, and T. Esslinger, Experimental realization of the topological haldane model with ultracold fermions, Nature 515, 237 (2014). Goldman2016 N. Goldman, J. C. Budich, and P. Zoller, Topological quantum matter with ultracold gases in optical lattices, Nat. Phys. 12, 639 (2016). RevModPhys.89.011004 A. Eckardt, Colloquium: Atomic quantum gases in periodically driven optical lattices, Rev. Mod. Phys. 89, 011004 (2017). RevModPhys.91.015005 N. R. Cooper, J. Dalibard, and I. B. Spielman, Topological bands for ultracold atoms, Rev. Mod. Phys. 91, 015005 (2019). PhysRevB.82.235114 T. Kitagawa, E. Berg, M. Rudner, and E. Demler, Topological characterization of periodically driven quantum systems, Phys. Rev. B 82, 235114 (2010). PhysRevLett.109.145301 P. Hauke, O. Tieleman, A. Celi, C. Ölschläger, J. Simonet, J. Struck, M. Weinberg, P. Windpassinger, K. Sengstock, M. Lewenstein, and A. Eckardt, Nonabelian gauge fields and topological insulators in shaken optical lattices, Phys. Rev. Lett. 109, 145301 (2012). https://doi.org/10.1002/pssr.201206451 J. Cayssol, B. Dóra, F. Simon, and R. Moessner, Floquet topological insulators, Phys. Status Solidi 7, 101 (2013). PhysRevX.4.031027 N. Goldman and J. Dalibard, Periodically driven quantum systems: Effective hamiltonians and engineered gauge fields, Phys. Rev. X 4, 031027 (2014). PhysRevA.89.061603 W. Zheng and H. Zhai, Floquet topological states in shaking optical lattices, Phys. Rev. A 89, 061603(R) (2014). doi:10.1080/00018732.2015.1055918 L. D. Marin Bukov and A. Polkovnikov, Universal high-frequency behavior of periodically driven systems: from dynamical stabilization to floquet engineering, Adv. Phys. 64, 139 (2015). Kang_2020 J. H. Kang, J. H. Han, and Y. Shin, Creutz ladder in a resonantly shaken 1D optical lattice, New J. Phys. 22, 013023 (2020). PhysRevLett.129.053201 J. Minguzzi, Z. Zhu, K. Sandholzer, A.-S. Walter, K. Viebahn, and T. Esslinger, Topological pumping in a floquet-bloch band, Phys. Rev. Lett. 129, 053201 (2022). PhysRevA.106.L051301 Y.-P. Wu, L.-Z. Tang, G.-Q. Zhang, and D.-W. Zhang, Quantized topological anderson-thouless pump, Phys. Rev. A 106, L051301 (2022). Citro2023 R. Citro and M. Aidelsburger, Thouless pumping and topology, Nat. Rev. Phys. 5, 87 (2023). Walter2023 A.-S. Walter, Z. Zhu, M. Gächter, J. Minguzzi, S. Roschinski, K. Sandholzer, K. Viebahn, and T. Esslinger, Quantization and its breakdown in a hubbard–thouless pump, Nat. Phys. 19, 1471 (2023). PhysRevLett.83.2636 M. Creutz, End states, ladder compounds, and domainwall fermions, Phys. Rev. Lett. 83, 2636 (1999). PhysRevA.89.023619 D. Hügel and B. Paredes, Chiral ladders and the edges of quantum hall insulators, Phys. Rev. A 89, 023619 (2014). PhysRevX.7.031057 J. Jünemann, A. Piga, S.-J. Ran, M. Lewenstein, M. Rizzi, and A. Bermudez, Exploring interacting topological insulators with ultracold atoms: The synthetic creutz-hubbard model, Phys. Rev. X 7, 031057 (2017). PhysRevResearch.4.013056 K. Sandholzer, A.-S. Walter, J. Minguzzi, Z. Zhu, K. Viebahn, and T. Esslinger, Floquet engineering of individual band gaps in an optical lattice using a two-tone drive, Phys. Rev. Res. 4, 013056 (2022). PhysRevA.90.051601 S.-L. 
Zhang and Q. Zhou, Shaping topological properties of the band structures in a shaken optical lattice, Phys. Rev. A 90, 051601(R) (2014). PhysRevA.102.063315 J. H. Kang and Y.-i. Shin, Topological floquet engineering of a one-dimensional optical lattice via resonant shaking with two harmonic frequencies, Phys. Rev. A 102, 063315 (2020). SträterEckardt+2016+909+920 C. Sträter and A. Eckardt, Interband heating processes in a periodically driven optical lattice, Z. Naturforsch. A 71, 909 (2016). Cabrera-Gutiérrez2019 C. Cabrera-Gutiérrez, E. Michon, M. Arnal, G. Chatelain, V. Brunaud, T. Kawalec, J. Billy, and D. Guéry-Odelin, Resonant excitations of a bose einstein condensate in an optical lattice, Eur. Phys. J. D 73, 170 (2019). PhysRevLett.101.010504 H. Li and F. D. M. Haldane, Entanglement spectrum as a generalization of entanglement entropy: Identification of topological order in non-abelian fractional quantum hall effect states, Phys. Rev. Lett. 101, 010504 (2008). PhysRevB.82.241102 A. M. Turner, Y. Zhang, and A. Vishwanath, Entanglement and inversion symmetry in topological insulators, Phys. Rev. B 82, 241102(R) (2010). PhysRevB.83.245132 T. L. Hughes, E. Prodan, and B. A. Bernevig, Inversion-symmetric topological insulators, Phys. Rev. B 83, 245132 (2011). PhysRevB.94.205422 D. J. Yates, Y. Lemonik, and A. Mitra, Entanglement properties of floquet-chern insulators, Phys. Rev. B 94, 205422 (2016). PhysRevResearch.4.043164 L. Zhou, Entanglement spectrum and entropy in floquet topological matter, Phys. Rev. Res. 4, 043164 (2022). PhysRevB.73.245115 S. Ryu and Y. Hatsugai, Entanglement entropy and the berry phase in the solid state, Phys. Rev. B 73, 245115 (2006). PhysRevB.27.6083 D. J. Thouless, Quantization of particle transport, Phys. Rev. B 27, 6083 (1983). PhysRevLett.111.026802 L. Wang, M. Troyer, and X. Dai, Topological charge pumping in a one-dimensional optical lattice, Phys. Rev. Lett. 111, 026802 (2013). PhysRevA.90.063638 F. Mei, J.-B. You, D.-W. Zhang, X. C. Yang, R. Fazio, S.-L. Zhu, and L. C. Kwek, Topological insulator and particle pumping in a one-dimensional shaken optical lattice, Phys. Rev. A 90, 063638 (2014). Nakajima2016 S. Nakajima, T. Tomita, S. Taie, T. Ichinose, H. Ozawa, L. Wang, M. Troyer, and Y. Takahashi, Topological thouless pumping of ultracold fermions, Nat. Phys. 12, 296 (2016). Lohse2016 M. Lohse, C. Schweizer, O. Zilberberg, M. Aidelsburger, and I. Bloch, A thouless quantum pump with ultracold bosonic atoms in an optical superlattice, Nat. Phys. 12, 350 (2016). Brion_2007 E. Brion, L. H. Pedersen, and K. Mølmer, Adiabatic elimination in a lambda system, J. Phys. A 40, 1033 (2007). PhysRevA.92.043621 M. Weinberg, C. Ölschläger, C. Sträter, S. Prelle, A. Eckardt, K. Sengstock, and J. Simonet, Multiphoton interband excitations of quantum gases in driven optical lattices, Phys. Rev. A 92, 043621 (2015). PhysRevA.68.013820 S. Rahav, I. Gilary, and S. Fishman, Effective hamiltonians for periodically driven systems, Phys. Rev. A 68, 013820 (2003). Eckardt_2015 A. Eckardt and E. Anisimovas, High-frequency approximation for periodically driven quantum systems from a floquet-space perspective, New J. Phys. 17, 093039 (2015). footnote1 We ignore the terms with the second order Bessel function J_2(θ_0) and the higher order terms of λ_0 and F_0^2. Holthaus_2016 M. Holthaus, Floquet engineering with quasienergy bands of periodically driven optical lattices, J. Phys. B 49, 013001 (2015). PhysRevLett.62.2747 J. Zak, Berry’s phase for energy bands in solids, Phys. Rev. 
Lett. 62, 2747 (1989). RevModPhys.82.3045 M. Z. Hasan and C. L. Kane, Colloquium: Topological insulators, Rev. Mod. Phys. 82, 3045 (2010). footnote2 Different from the case when δ'=0, where the gap closes at q=± 0.5π, in this instance, the gap closing occurs at q=± 0.66π due to the presence of the energy shift F'_-^2 which results in δ'≠0. doi:10.1126/science.aaa8736 M. Mancini, G. Pagano, G. Cappellini, L. Livi, M. Rider, J. Catani, C. Sias, P. Zoller, M. Inguscio, M. Dalmonte, and L. Fallani, Observation of chiral edge states with neutral fermions in synthetic hall ribbons, Science 349, 1510 (2015). doi:10.1126/science.aaa8515 B. K. Stuhl, H.-I. Lu, L. M. Aycock, D. Genkina, and I. B. Spielman, Visualizing edge states with an atomic bose gas in the quantum hall regime, Science 349, 1514 (2015). PhysRevA.95.023615 V. Novičenko, E. Anisimovas, and G. Juzeliūnas, Floquet analysis of a quantum system with modulated periodic driving, Phys. Rev. A 95, 023615 (2017). WEINBERG20171 P. Weinberg, M. Bukov, L. D’Alessio, A. Polkovnikov, S. Vajna, and M. Kolodrubetz, Adiabatic perturbation theory and geometry of periodically-driven systems, Phys. Rep. 688, 1 (2017). PhysRevB.96.035139 N. Sun and L.-K. Lim, Quantum charge pumps with topological phases in a creutz ladder, Phys. Rev. B 96, 035139 (2017). RevModPhys.82.1959 D. Xiao, M.-C. Chang, and Q. Niu, Berry phase effects on electronic properties, Rev. Mod. Phys. 82, 1959 (2010). Raffaele_2000 R. Resta, Manifestations of berry’s phase in molecules and condensed matter, J. Phys. Condens. Matter 12, R107 (2000). PhysRevLett.118.156401 J. Ahn and B.-J. Yang, Unconventional topological phase transition in two-dimensional systems with space-time inversion symmetry, Phys. Rev. Lett. 118, 156401 (2017). Ahn_2019 J. Ahn, S. Park, D. Kim, Y. Kim, and B.-J. Yang, Stiefel–whitney classes and topological phases in band theory, Chin. Phys. B 28, 117101 (2019). footnote3 In <cit.>, PT symmetry is called space-time inversion symmetry. RevModPhys.88.035005 C.-K. Chiu, J. C. Y. Teo, A. P. Schnyder, and S. Ryu, Classification of topological quantum matter with symmetries, Rev. Mod. Phys. 88, 035005 (2016). PhysRevB.96.155118 R. Roy and F. Harper, Periodic table for floquet topological insulators, Phys. Rev. B 96, 155118 (2017). Burkov2016 A. A. Burkov, Topological semimetals, Nat. Mater. 15, 1145 (2016). Sun2012 K. Sun, W. V. Liu, A. Hemmerich, and S. Das Sarma, Topological semimetal in a fermionic optical lattice, Nat. Phys. 8, 67 (2012).  Weidinger2017 S. A. Weidinger and M. Knap, Floquet prethermalization and regimes of heating in a periodically driven, interacting quantum system, Sci. Rep. 7, 45382 (2017). Rudner2020 M. S. Rudner and N. H. Lindner, Band structure engineering and non-equilibrium dynamics in floquet topological insulators, Nat. Rev. Phys. 2, 229 (2020).
http://arxiv.org/abs/2405.09410v1
20240515150534
Advection of the image point in probabilistically-reconstructed phase spaces
[ "Igor Shevchenko" ]
physics.flu-dyn
[ "physics.flu-dyn", "math-ph", "math.MP" ]
Advection of the image point in probabilistically-reconstructed phase spaces
Igor Shevchenko
=============================================================================
The importance of data-driven modelling in science and engineering is hard to overestimate. However, its applicability can be significantly limited by the problem of data deficiency. This problem can lead to inaccurate, low-fidelity results, or simply to inapplicability of the data-driven approach. The proposed study challenges the data-deficiency problem by offering to sample from the joint probability distribution of reference data, thereby enriching the original reference data with new data drawn from the reference data distribution. The method proposed in this work makes it possible to harvest extra reference data and thus enables data-driven methods to work with meager datasets on which they would otherwise be inapplicable. § INTRODUCTION The data-driven paradigm is gradually gaining momentum in computational fluid dynamics by offering an alternative to the physics-driven approach in situations when there is no physics-based model for the studied phenomenon or it is too computationally expensive to acquire data from the model (e.g. <cit.> and references therein). Being a powerful tool providing a wide spectrum of methods (e.g. <cit.>) ranging from purely data-driven to physics-data-driven, also known as hybrids, the data-driven approach typically suffers from a lack of reference data, which can significantly compromise its fidelity or make it inapplicable. To address this problem, we offer a probabilistic reconstruction method which augments the hyper-parameterisation (HP) approach (e.g., <cit.>) with some ideas underlying the probabilistic-evolutionary approach <cit.>. More specifically, we propose to use Advection of the image point (a purely data-driven HP method working in phase space, also called state space) together with data sampled from the joint probability distribution of reference data so as to probabilistically reconstruct the reference phase space for the HP method and make it operable for short reference data records otherwise falling out of its range of applicability. Although we consider the use of probabilistically-reconstructed phase spaces within the HP framework, their application is not bound to HP. For example, probabilistically-reconstructed reference datasets can be used within the context of other data-driven methods, as well as for training artificial neural networks, and for reconstruction of missing parts in observational data from different sources (drifters, weather stations, satellites, etc.). § THE PROBABILISTIC RECONSTRUCTION METHOD: SCHEMATIC AND BASICS The idea underlying the probabilistic reconstruction method is to calculate the joint probability distribution (JPD) for a given reference dataset and then sample reference data from this JPD instead of using the reference dataset itself. Hence, the JPD becomes an extra layer between the reference data and a data-driven method, thus providing an extra source of reference data from the reference data distribution. It would be instructive to first look into the schematic of the proposed method (Fig. <ref>) for better understanding of its specifics and then consider the method in more detail. (a) Reference data acquisition. The first step is to provide the method with reference data. Depending on the problem to solve, it can be numerical simulations, observations, or both.
In our case it is sea surface temperature (SST) computed with a global 1/12^∘ resolution ocean model NEMO and then interpolated onto a 1/4^∘ grid (Fig. <ref>a). If observations are available then they can also be included in the reference dataset by simply adding them to the reference data record. In order to be consistent in space, observations must be on the same grid as the NEMO data; consistency in time is not required, as the proposed method does not use it. (b) Reference phase space calculation. The second step is to get the reference phase space for the HP method called "Advection of the image point" <cit.>; the reference phase space is shown as an orange blob in Fig. <ref>b. The latter needs locations (which is SST itself) and directional vectors (SST tendencies), which are calculated from the reference data; different schemes can be used, we use the central finite difference in time. The tendencies are shown as black vectors in Fig. <ref>b, and the points they are attached to represent SST. The white spots (which we call voids) are regions of the phase space which lack data (usually, due to short reference records or unreliable observations). Typically, these voids is a reason for why data-driven methods cease to work on meager phase spaces. (c) Joint Probability Distribution calculation. The third step is to calculate the joint probability distribution (JPD) from the reference data in phase space; it is shown as a blue surface in Fig. <ref>c for illustration purposes; in high-dimensional phase spaces (like the one used for SST computed with NEMO) the JPD is a hyper-surface. It is important to remark that we calculate the JPD only for SST, while SST tendencies are computed in a different way explained below. The JPD for the reference SST is calculated globally with the histogram method, i.e. the whole reference dataset contributes to the histogram calculation. As the proposed method is developed to work in high-dimensional phase spaces, computing a multidimensional JPD and keeping it in memory for further sampling is an unaffordable option. Instead, we calculate a coordinate-wise probability density function (PDF). It gives us access to all necessary information for sampling at virtually no cost; we refer the reader to <cit.> for a detail discussion on the JPD calculation. The only difference in the JPD calculation in this study is that we compute the JPD globally (i.e. for all reference data), while in <cit.> it is computed locally in a given neighborhood. (d) Sampling from JPD and calculation of probabilistically-reconstructed phase space. In the fourth step we sample SST from the JPD and then calculate SST tendencies by averaging reference SST tendencies over the neighborhood of sampled SST (Fig. <ref>). The localized calculation of tendencies allows to decrease the dimensionality of the JPD two times (as the JPD dimension for both SST and its tendencies is two times larger compared to the JPD only for SST) and thus make its calculation more efficient without compromising on accuracy. It is worth mentioning that tendencies can also be computed in a probabilistic way from a local JPD which is constructed based on the information from the neighborhood of the new tendency (red contour in Fig. <ref>), but this method is slower than the one used in this study. In order to sample from the JPD we use the inverse transform sampling method <cit.>. 
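As a minimal illustration of steps (c)-(d), the sketch below (Python/NumPy) fits coordinate-wise histogram PDFs to a reference ensemble and draws new samples by inverse transform sampling. The bin count, the toy Gaussian stand-in for the SST snapshots, and all variable names are our assumptions; the sketch is not the production implementation.

import numpy as np

rng = np.random.default_rng(0)

def fit_coordinatewise_hist(X, nbins=64):
    """Histogram-based marginal PDFs; X has shape (n_samples, n_dims)."""
    edges, cdfs = [], []
    for j in range(X.shape[1]):
        counts, e = np.histogram(X[:, j], bins=nbins)
        cdf = np.cumsum(counts).astype(float)
        cdf /= cdf[-1]
        edges.append(e)
        cdfs.append(cdf)
    return edges, cdfs

def sample_inverse_transform(edges, cdfs, n_new):
    """Draw n_new samples coordinate-by-coordinate by inverting the CDFs."""
    dims = len(edges)
    out = np.empty((n_new, dims))
    for j in range(dims):
        u = rng.random(n_new)
        k = np.searchsorted(cdfs[j], u)                 # bin index for each draw
        lo, hi = edges[j][k], edges[j][k + 1]
        out[:, j] = lo + rng.random(n_new) * (hi - lo)  # uniform within the bin
    return out

# Toy reference data standing in for SST snapshots flattened into vectors.
X_ref = rng.normal(size=(500, 3)) @ np.diag([1.0, 0.5, 2.0])
edges, cdfs = fit_coordinatewise_hist(X_ref)
X_new = sample_inverse_transform(edges, cdfs, 1000)
print("reference std :", X_ref.std(axis=0).round(2))
print("sampled   std :", X_new.std(axis=0).round(2))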
We have also tried the rejection sampling but did not gain that much of a difference in its favour; detailed explanations on sampling from the JPD are given in <cit.>. Sampling from the JPD allows to probabilistically reconstruct the originally sparsely-populated reference phase space by filling it with new locations and directional vectors (tendencies) needed to advect the image point (Fig. <ref>d). Note that the probabilistically-reconstructed reference data can be used not only in the HP method, but in other data-driven methods which suffer lack of data. It is important to note that SST and its tendencies in the probabilistically-reconstructed phase space are not ordered in time. This is the moment when we need Advection of the image point to compute a trajectory (solution), i.e to order those points in time. (e) Advection of the image point in probabilistically-reconstructed phase space. In the fifth step, the probabilistically-reconstructed phase space is used by Advection of the image point to calculate the trajectory (red line in Fig. <ref>e). For the reader's convenience, we briefly describe how Advection of the image point works. This method falls into the category of data-driven methods from the hyper-parametersation (HP) class which takes advantage of working in phase spaces, as opposed to the conventional methods operating in the physical space. The method has been tested on both idealized and comprehensive ocean models (two-layer quasi-geostrophic model in a channel and MITgcm in the North Atlantic configuration) and showed significant improvements of the HP solution toward the reference one <cit.>. The HP approach currently umbrellas five different methods (ranging from purely data-driven to hybrids, which combine the data- and physics-driven paradigms). Another striking feature of the HP approach is that its measure of goodness is how close the HP solution is to the reference phase space. To put it another way, the HP approach matches phase spaces (the reference phase space and the phase space where the HP solution evolves), whereas conventional methods match individual trajectories. This measure allows the HP solution to evolve in the neighborhood of the reference phase space, and therefore reproduce the flow dynamics which is very similar to the reference one. The idea behind Advection of the image point (which we refer to as the HP method in what follows) is to use reference data, say 𝐱∈ℝ^n, and the following differential equation d𝐲/dt=1/M∑_j=𝒰_J𝐅(𝐱_j)|_𝒰(𝐲(t))+ η(1/N∑_i=𝒰_I𝐱_i|_𝒰(𝐲(t))-𝐲(t)), 𝐲(t_0)=𝐲_0, to describe the evolution of the image point, 𝐲∈ℝ^n, in the phase space of reference data. In the context of this work, 𝐱(t) is the probabilistically-reconstructed SST, and 𝐲(t) is the HP solution computed in the probabilistically-reconstructed phase space, and η is the nudging strength. The neighbourhood of 𝐲(t) is denoted as 𝒰(𝐲(t)), 𝒰_I and 𝒰_J are the sets of timesteps indexing the discrete reference solution 𝐱 in 𝒰(𝐲(t)). Although the method allows to use different sets of reference tendencies 𝐅(𝐱_j) and points 𝐱_i, we set M=N and 𝒰_I=𝒰_J in this study. In a nutshell, the actual dynamics is given by the observed reference tendency 1/M∑_j=𝒰_J𝐅(𝐱_j)|_𝒰(𝐲(t)) the imperfect reconstruction of which from the reference data is compensated by nudging towards the observed reference state 1/N∑_i=𝒰_I𝐱_i|_𝒰(𝐲(t)) in the probabilistically-reconstructed phase space. 
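A minimal sketch of how the equation above can be advanced in time is given next: at each step the image point collects its nearest reference points (here M=N), averages their tendencies and locations, and takes a forward-Euler step with nudging. The brute-force neighbour search, the Euler discretization and the toy circular dataset are our simplifications.

import numpy as np

def advect_image_point(y0, X_ref, F_ref, eta, n_nbrs, dt, n_steps):
    """Integrate dy/dt = mean(F_ref over neighbours) + eta*(mean(X_ref over
    neighbours) - y) with forward Euler (M = N here)."""
    y = np.array(y0, dtype=float)
    traj = [y.copy()]
    for _ in range(n_steps):
        d2 = np.sum((X_ref - y) ** 2, axis=1)         # squared l2 distances
        nbrs = np.argpartition(d2, n_nbrs)[:n_nbrs]   # indices of nearest points
        tendency = F_ref[nbrs].mean(axis=0)
        target = X_ref[nbrs].mean(axis=0)
        y = y + dt * (tendency + eta * (target - y))
        traj.append(y.copy())
    return np.array(traj)

# Toy phase space: reference points on a circle with rotational tendencies.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
X_ref = np.c_[np.cos(theta), np.sin(theta)]
F_ref = np.c_[-np.sin(theta), np.cos(theta)]          # tendency of a unit rotation
traj = advect_image_point([1.2, 0.0], X_ref, F_ref,
                          eta=0.235, n_nbrs=5, dt=0.01, n_steps=2000)
print("final radius:", np.linalg.norm(traj[-1]))       # pulled toward the data cloud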
Note that the neighbourhood is computed as the average over M (and N for the nudging term) nearest, in l_2 norm, to the solution 𝐲(t) points. The choice of the norm and the way the neighborhood 𝒰(𝐲(t)) is calculated is not limited to those used in this study, and can vary depending on what is needed. Our choice is, probably, the simplest, but it serves well the purpose of this work. We dub the set of parameters {M,N,η} hyper-parameters. The hyper-parameters can be set based on the chosen metric and available data, we will return to their choice later. The interested reader is referred to <cit.> for a more detailed discussion of the method. The characteristic feature of the HP method is that it looks at the reference dataset as being a cloud of points which are not ordered in time. This is why we write 𝐱_i and 𝐅(𝐱_j) instead of 𝐱(t) and 𝐅(𝐱(t)) in equation (<ref>). At every time step, the HP method takes N nearest points 𝐱_i (and M tendencies 𝐅(𝐱_j)) to the solution 𝐲(t) points (Fig. <ref>). Thus, the HP method never runs out of reference data, as a set of nearest points to the HP solution 𝐲(t) can always be found. The main advantage of using the HP method in probabilistically-reconstructed phase spaces is that the data for the method is sampled from the same distribution as the reference data itself. Calibration of hyper-parameters. The HP method has hyper-parameters {M,N,η} which should be properly set to address the problem at hand. In order to calibrate these hyper-parameters we define a measure of discrepancy of two clouds of points, say A and B, as follows Di(A,B):=A-{min d(a_i,B)}_1≤ i≤ |A|_2/A_2 , where d is the Euclidean distance between the i-th point in A and a point in B; |·| is the number of points in the cloud (it is assumed that |A|=|B|). The cloud A represents the reference data, while the cloud B is the HP solution computed in the probabilistically-reconstructed phase space; the clouds are not supposed to have any order defined on them. It is important to remark that the min in (<ref>) is excluding, i.e. once the minimum between a pair (a_i,b_j) has been found, the point b_j is excluded from B (to avoid multiple counting) and is then used in comparison with element a_i. In other words, min does not return the minimum distance between a_i and B per se, it returns the element b_j that delivers this minimum distance. Also note that the choice of this measure of discrepancy reflects the measure of goodness used for the HP method (namely, the proximity of the phase space (where HP solution evolves) to the reference phase space, or more accurately to the probabilistically-reconstructed reference phase space). The calibration procedure consists of minimizing the discrepancy on the space of hyper-parameters; the smaller the discrepancy is, the closer the clouds are to each other. We would like to remind the reader again that the HP method tries to match phase spaces (not individual trajectories), and this is why the calibration of hyper-parameters makes itself useful. (f) Evolution in physical space. The fifth step is to get the modelled solution back in the physical space (Fig. <ref>f). In this study, the solution computed with the HP method is reshaped from a vector (used in the phase space) into a matrix, which is used to present fields in the physical space. 
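The calibration loop requires evaluating the discrepancy Di(A,B) defined above. One straightforward reading of the excluding min, in which each matched point of B is removed before the next element of A is processed, is sketched below; the greedy, order-dependent matching is our interpretation and is meant only as a working sketch.

import numpy as np

def discrepancy(A, B):
    """Di(A,B): match each a_i to its nearest still-unmatched point of B
    (the excluding min returns the matched element b_j, not the distance),
    then return ||A - B_matched|| / ||A||."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    left = list(range(len(B)))
    matched = np.empty_like(A)
    for i, a in enumerate(A):
        d = np.linalg.norm(B[left] - a, axis=1)
        j = int(np.argmin(d))
        matched[i] = B[left[j]]
        left.pop(j)                    # b_j is excluded: no multiple counting
    return np.linalg.norm(A - matched) / np.linalg.norm(A)

# Tiny usage example: identical clouds (in any order) give zero discrepancy.
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
print(discrepancy(A, A[::-1]))         # 0.0 -- same cloud, different order
print(discrepancy(A, A + 0.1))         # small positive value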
§ THE PROBABILISTIC RECONSTRUCTION METHOD IN ACTION In this section we apply the proposed method to Chua's circuit to demonstrate how it works in a low-dimensional phase space and then consider its application in the context of the global NEMO model. §.§ Chua's circuit As a minimum working example, we consider Chua's circuit and demonstrate in more detail how the method works. Chua's circuit <cit.> is given by the following system of equations: d𝐱/dt=𝐅(𝐱(t)), 𝐅:= [ α(y-ax^3-cx); x-y+z; -β y - γ z; ], with 𝐱(t)=(x(t),y(t),z(t)), and α=10, β=15, γ=0.01, a=0.1, c=-0.2. As an initial condition, we take 𝐱(t_0)=(1,0,0) so that the solution is close to the attractor. Reference data acquisition. Within the context of Chua's circuit, the reference data for the HP method is the solution 𝐱(t) and its tendencies 𝐅(𝐱(t)). In order to get the reference data we integrate the Chua system (<ref>) over time t∈[0,100] and save the solution. Then, the tendencies are computed by differentiating 𝐱(t); we use the central finite difference in time (Fig. <ref>). Instead of differentiating the solution, one can use the right hand side, but we recommend against doing that, as the right hand side is usually unavailable for numerical simulations and observations. Note that instead of or in combination with the numerical solution, one can also use observations. In this case, one should form a new dataset 𝐱={𝐱(t),𝐱_o(t)}, which includes both the solution 𝐱(t) and observations 𝐱_o(t). Calculation of JPD and sampling from JPD. The JPD of the reference solution 𝐱(t) and reference tendencies 𝐅(𝐱(t)) are calculated as described above. In order to make sure that sampling from the JPD is accurate enough for further use, we compare the reference PDF (computed from the reference dataset) and the reconstructed PDF (computed from the data sampled from the JPD). As seen in Fig. <ref>, the reconstructed PDF (blue) is an accurate approximation to the reference PDF (black); the reconstructed PDF can be calculated more accurately, but it is not required for the purpose of this study. The next step is to sample from the JPD to probabilistically reconstruct the reference phase space for the HP method. We use the same amount of reference data as before, but harvest twice as much from the JPD (Figs. <ref>,<ref>). The results clearly demonstrate the high quality of the probabilistically-reconstructed data, and that it covers a larger phase space compared with the reference one. §.§ Calibration of hyper-parameters The calibration procedure consists of minimizing Di(A,B) in the space of hyper-parameters M, N, and η. One can opt for engaging a full-blown optimization to find the optimum solution. However, for the purpose of this study, we set M=N=5 and search for η∈[0,1] that minimizes Di(A,B). We have found that η=0.235 gives a solution with low discrepancy. In order to demonstrate that the proposed method can gain from using the probabilistically-reconstructed phase space, we compare it with the HP method run on the reference dataset (Fig. <ref>). As seen in Fig. <ref>b, the HP solution computed in the probabilistically-reconstructed phase space is very-well developed on the attractor compared with the HP solution computed on the reference dataset (Fig. <ref>c), which remains bounded to a very narrow band because of lack of data. All these results give an extra layer of confidence that the proposed method has strong potential for the use in more sophisticated models which we consider in the next section. 
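For completeness, a sketch of the reference-data acquisition step for Chua's circuit is given below: the system is integrated with a hand-written fourth-order Runge-Kutta scheme and the reference tendencies are recovered from the stored solution by central differences in time, as in the text above (rather than reusing the right-hand side). The time step is our choice.

import numpy as np

# Chua's circuit as given above: alpha=10, beta=15, gamma=0.01, a=0.1, c=-0.2.
def chua_rhs(s, alpha=10.0, beta=15.0, gamma=0.01, a=0.1, c=-0.2):
    x, y, z = s
    return np.array([alpha * (y - a * x**3 - c * x),
                     x - y + z,
                     -beta * y - gamma * z])

def rk4(f, s0, dt, n_steps):
    traj = np.empty((n_steps + 1, len(s0)))
    traj[0] = s0
    for k in range(n_steps):
        s = traj[k]
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        traj[k + 1] = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

dt = 0.01
X_ref = rk4(chua_rhs, np.array([1.0, 0.0, 0.0]), dt, int(100 / dt))
# Reference tendencies from the stored solution (central differences in time).
F_ref = np.gradient(X_ref, dt, axis=0)
print(X_ref.shape, F_ref.shape)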
§.§ NEMO model  The output of the global 1/4^∘ (ORCA025-N4001) and 1/12^∘ (ORCA0083-N006) resolution NEMO model is used in this study. The reference solution is the 5-day mean sea surface temperature (SST) and surface relative vorticity (SRV) in the North Atlantic region [-83^∘W,-20^∘W]×[27^∘N,55^∘N] over the period of 1979-1993; the period of 1994-2007 is used for validation to check how the HP method performs. The reference solution is interpolated from the 1/12^∘-grid onto the 1/4^∘ grid. The lower-resolution 1/4^∘-solution from ORCA025-N4001 simulation (we call it the modelled solution) is used for comparison with the HP solution. In order to calibrate the hyper-parameters we take M=N=5 and minimize Di(A,B) with respect to η. We have found that η=0.01 gives a low discrepancy. After setting up the hyper-parameters, we compute a probabilistically-reconstructed phase space by sampling twice as much compared to the reference record length (Fig. <ref>). The dimensionality of the probabilistically-reconstructed space is n≈30000. One can run the HP method in the high-dimensional phase space, but in this work we further reduce it down to m=200 by using the EOF-PC decomposition <cit.>. It results in a two-order of magnitude reduction compared with the original dimension; note that the 200 leading EOFs capture 99% of the total SST variance and 60% of the total SRV variance. We compute the EOF-PC decomposition of SST and SRV separately. However, it can be computed for both if one needs to model these fields together. The space of PCs (let it be denoted as 𝐱) can be regarded as a reduced version of the probabilistically-reconstructed phase space. Given this reduced space, one can reconstruct a dynamical system governing the reduced dynamics of observed reference data, as we did in <cit.>. In this study, we use a different idea: the dynamics of PCs is modelled in the reduced space with the HP method directly and then the HP solution is projected back into the full-dimensional probabilistically-reconstructed phase space. In the reduced phase space, equation (<ref>) reads as d𝐳/dt=1/M∑_j=𝒰_J𝐅(𝐱_j)|_𝒰(𝐳(t))+ η(1/N∑_i=𝒰_I𝐱_i|_𝒰(𝐳(t))-𝐳(t)), 𝐳(t_0)=𝐳_0. The only difference of (<ref>) compared with equation (<ref>) is that 𝐱 does not represent the reference solution (it would be SST or SRV in this case) but the PCs. Having solved equation (<ref>), we approximate the HP solution, 𝐲(t), in the probabilistically-reconstructed phase space by using the leading EOF-PC pairs as follows: 𝐲(t)≈∑^m_i=1z_i(t)𝐄_i , with 𝐄_i and z_i being the i-th EOF and PC (computed with the HP method in the reduced space), respectively. Thus, solution 𝐲(t) is the SST (or SRV) computed in the full-dimensional probabilistically-reconstructed phase space. We report the results in Fig. <ref> which clearly shows that the HP solution (Fig. <ref>b) much better represents the Gulf Stream than the modelled solution (Fig. <ref>c). These results are also confirmed in Fig. <ref>. Namely, the difference between the time mean of the reference solution and HP solution is much smaller (Fig. <ref>a) than that of the modelled solution (Fig. <ref>b). The root mean square error for the HP solution (Fig. <ref>c) is also much lower then the one of the modelled solution (Fig. <ref>d). More insights about how the HP solution differs from the modelled solution can be seen in the SRV fields (Fig. <ref>). The Gulf Stream computed with the HP method (Fig. <ref>b) is much stronger than the one computed with the 1/4^∘ model (Fig. <ref>c). 
Moreover, the HP solution teems with vortices (like the reference one), while the vortex dynamics is significantly inhibited in the modelled solution, as is also reflected by the error plots in Fig. <ref>. Doubling the number of PCs (m=400) results in more pronounced Gulf Stream dynamics and a larger population of vortices (Fig. <ref>). However, its contribution to the time mean flow and the root mean square error is small (not shown). § CONCLUSIONS In this study we have proposed to use the joint probability distribution (JPD) to probabilistically reconstruct the phase space, which can be used when the original reference phase space (computed from numerical solutions, observations, or both) is too sparse to allow the use of data-driven methods. The probabilistically-reconstructed phase space is calculated by sampling from the JPD, thus providing an extra source of data from the reference distribution and ensuring that both the reference phase space and the reconstructed one share the same reference distribution. We have shown how the probabilistic reconstruction works on the example of Chua's system and then applied it within the context of the global ocean model NEMO. We have also found that the HP method can be used directly in reduced phase spaces thus allowing several-orders-of-magnitude acceleration compared to the 1/4^∘-resolution NEMO run. We reduced the dimensionality of the reference phase space by two orders of magnitude by applying the EOF-PC decomposition. Our results show that the use of the HP method in the probabilistically-reconstructed (and reduced) phase space gives more accurate results and more realistic flow dynamics compared with the 1/4^∘-NEMO simulation. It is also worth mentioning that the two-orders-of-magnitude acceleration can be translated into using a larger ensemble of HP solutions for probabilistic predictions. We believe that the proposed method demonstrates strong potential for use in operational ocean and ocean-atmospheric models. The next step in this direction is to assess prediction skills and achievable forecast range of the proposed method for high-resolution forecasting systems and near-term climate predictions. The author thanks the Natural Environment Research Council for the support of this work through the projects CLASS and ATLANTIS (P11742), as well as Andrew Coward and Chris Wilson for the production of and help with the NEMO datasets, respectively. The output of the global 1/4^∘ (ORCA025-N4001) and 1/12^∘ (ORCA0083-N006) resolution NEMO model used in this study is available on JASMIN ( https://jasmin.ac.uk/users/access/ ) from these locations: /gws/nopw/j04/nemo_vol1/ORCA025-N401 and /gws/nopw/j04/nemo_vol1/ORCA0083-N006 .
http://arxiv.org/abs/2405.09723v1
20240515230329
Gravitational Wave-Induced Freeze-In of Fermionic Dark Matter
[ "Azadeh Maleknejad", "Joachim Kopp" ]
hep-ph
[ "hep-ph", "astro-ph.CO", "gr-qc", "hep-th" ]
http://arxiv.org/abs/2405.10277v1
20240516173437
Hilbert Functions and Low-Degree Randomness Extractors
[ "Alexander Golovnev", "Zeyu Guo", "Pooya Hatami", "Satyajeet Nagargoje", "Chao Yan" ]
cs.CC
[ "cs.CC" ]
Hilbert Functions and Low-Degree Randomness Extractors
Alexander Golovnev (Georgetown University; supported by the NSF CAREER award, grant CCF-2338730), Zeyu Guo (The Ohio State University), Pooya Hatami (The Ohio State University; supported by NSF grant CCF-1947546), Satyajeet Nagargoje (Georgetown University; supported by the NSF CAREER award, grant CCF-2338730), Chao Yan (Georgetown University)
=========================================================================================================
For S⊆^n, consider the linear space of restrictions of degree-d polynomials to S. The Hilbert function of S, denoted _S(d,), is the dimension of this space. We obtain a tight lower bound on the smallest value of the Hilbert function of subsets S of arbitrary finite grids in ^n with a fixed size |S|. We achieve this by proving that this value coincides with a combinatorial quantity, namely the smallest number of low Hamming weight points in a down-closed set of size |S|. Understanding the smallest values of Hilbert functions is closely related to the study of degree-d closure of sets, a notion introduced by Nie and Wang (Journal of Combinatorial Theory, Series A, 2015). We use bounds on the Hilbert function to obtain a tight bound on the size of degree-d closures of subsets of _q^n, which answers a question posed by Doron, Ta-Shma, and Tell (Computational Complexity, 2022). We use the bounds on the Hilbert function and degree-d closure of sets to prove that a random low-degree polynomial is an extractor for samplable randomness sources. Most notably, we prove the existence of low-degree extractors and dispersers for sources generated by constant-degree polynomials and polynomial-size circuits. Until recently, even the existence of arbitrary deterministic extractors for such sources was not known. § INTRODUCTION Hilbert Functions. Low-degree polynomials are fundamental objects in theoretical computer science, and their properties are extensively studied due to their important role in areas such as error correcting codes and circuit lower bounds. Let d≥ 0 be an integer, be a field, and S⊆^n be a set. Each degree-d n-variate polynomial p over can be naturally viewed as a map p:^n→, and hence also defines a map p|_S: S→. Considering the linear space of all such maps in ^S, which is a subspace of the space of all maps from S to , allows one to tap into a wide array of algebraic techniques in order to prove useful facts about the set S. This approach was for example utilized in complexity theory famously in the work of Smolensky <cit.>, where proving bounds on the dimension of the aforementioned subspace was used to obtain lower-bounds for ^0[⊕] circuits computing the indicator function of the set S, for various S⊆{0,1}^n. The dimension of the space consisting of p|_S for all degree-d polynomials p is indeed a well-studied and classical concept in algebraic geometry known as the (affine) Hilbert function of S, denoted by _S(d, ).
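To make the definition concrete, the following sketch (Python/NumPy) computes _S(d,_2) for a small set S⊆{0,1}^n as the _2-rank of the matrix whose columns are the evaluations on S of the multilinear monomials of degree at most d (over _2, x_i^2 and x_i agree at every point, so these monomials span the same space of restrictions). The Gaussian-elimination routine and the example set are ours.

import numpy as np
from itertools import combinations

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = (np.array(M) % 2).astype(np.uint8)
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

def hilbert_fn(S, d):
    """h_S(d, F_2): dimension of {p|_S : deg p <= d} for S a list of 0/1 tuples."""
    n = len(S[0])
    cols = []
    for k in range(d + 1):
        for mono in combinations(range(n), k):   # multilinear monomials of degree k
            cols.append([int(all(s[i] for i in mono)) for s in S])
    return rank_gf2(np.array(cols).T)

# Example: the origin together with the standard basis vectors of F_2^4.
S = [(0, 0, 0, 0), (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
print([hilbert_fn(S, d) for d in range(4)])      # nondecreasing in d, at most |S| = 5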
Hilbert functions encode important geometric and algebraic information, such as the dimension, degree, and regularity of varieties, in a more general context. Hilbert functions have previously been studied in complexity theory due to their applications in circuit lower bounds, in particular for ^0[⊕] circuits, that were established by Smolensky <cit.> and Razborov <cit.>. Such applications require finding sets S⊆{0,1}^n where the Hilbert function takes a very large value. However, it is also interesting to prove general lower bounds or find lower-bounding methods for arbitrary sets S. An example of such a result is the work of Moran and Rashtchian <cit.>, who showed upper and lower bounds on _S(d,) for S⊆{0,1}^n⊆^n via various concepts in VC theory. <cit.> treated the Hilbert function as a complexity measure of the set S and compared it to measures that arise naturally in learning theory, including “shattering” and “strong shattering” values. Suppose r>0 is an integer. It is natural to wonder, what the extreme values of _S(d,) are, among all sets S of size |S|=r. It is not hard to show that the maximum value is equal to min( r, _^n(d,)) when S⊆^n. For example, the maximum value of the Hilbert function of a set S⊆_2^n of size r is min(r, n≤ d). On the other hand, finding the true smallest value of _S(d,) is a natural and intriguing question even when S is restricted to subsets of some finite and structured set in ^n. Let 0≤ d ≤ n be integers, be a field, and A=A_1×⋯× A_n⊆^n where A_i⊆ are finite sets. For any r≤ |A|, what is the smallest value of _S(d, ) among all subsets S⊆ A^n of cardinality |S|=r? This question has been answered in the case of =_2 and A=_2^n by Keevash and Sudakov <cit.> and Ben-Eliezer, Hod, and Lovett <cit.>, and later generalized to =_p and A=_p^n by Beame, Oveis Gharan, and Yan <cit.>. For simplicity, let r=p^k for some k≥ 0. <cit.> proved that the smallest value of _S(d,_p) with |S|=r is equal to the number of degree-≤ d monomials on k variables, for example when p=2, this is equal to simply k≤ d=log_2 r≤ d. We prove a more general result that answers <ref> for arbitrary finite grids A⊆^n in arbitrary fields . We show that the smallest values of Hilbert functions are exactly determined by an extremal combinatorial question about the number of low-Hamming-weight elements in down-closed sets, which we solve by building on the work of Beelen and Datta <cit.>. The prior works discussed above were motivated by applications in bounding the list-size of the Reed-Muller codes and obtaining certain extensions of Frankl–Ray-Chaudhuri–Wilson theorems on cross-intersecting sets. In contrast, in this paper, we are interested in <ref> due to its applications in pseudorandomness, particularly in randomness extraction. Understanding the smallest values of Hilbert functions is closely related to the study of degree-d closure of sets, a notion introduced by Nie and Wang <cit.>. The degree-d closure of a set T⊆^n is defined as _d(T){a ∈^n | for every degree-d polynomial f, f |_T≡ 0 ⇒ f(a)=0} . Equivalently, _d(T) is the set of all points a∈^n such that the values f(a) of a degree-d polynomial f are determined by f|_T. The existence of a small set with a large degree-d closure has application to hitting-set generators for polynomials <cit.>. As an application of our answer to <ref>, we obtain an upper bound on the size of _d(T) in terms of |T|. Our bound in fact yields an optimal way of creating a small set with a large degree-d closure. 
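For small n the degree-d closure can be computed directly from the definition: a point a belongs to _d(T) exactly when every degree-d polynomial vanishing on T also vanishes at a, which over _2 amounts to the evaluation row of a lying in the row space of the degree-≤ d evaluation matrix of T, i.e., to the rank not increasing when a is appended. The brute-force sketch below encodes this test; the helper routines mirror the previous snippet, the example set is ours, and the enumeration is exponential in n, so it is only meant for tiny instances.

import numpy as np
from itertools import combinations

def eval_matrix(points, d):
    """Rows: points of F_2^n; columns: multilinear monomials of degree <= d."""
    n = len(points[0])
    monos = [m for k in range(d + 1) for m in combinations(range(n), k)]
    return np.array([[int(all(p[i] for i in m)) for m in monos] for p in points],
                    dtype=np.uint8)

def rank_gf2(M):
    M = M.copy(); r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None: continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]: M[i] ^= M[r]
        r += 1
    return r

def closure(T, d, n):
    """Brute-force cl_d(T) over F_2^n: a is in the closure iff adding it to T
    does not increase the rank of the degree-<=d evaluation matrix."""
    base = rank_gf2(eval_matrix(T, d))
    cube = [tuple(int(b) for b in np.binary_repr(v, n)) for v in range(2 ** n)]
    return [a for a in cube if rank_gf2(eval_matrix(list(T) + [a], d)) == base]

# Tiny example: a 6-point set in F_2^4 and its degree-2 closure.
T = [(0,0,0,0), (1,0,0,0), (0,1,0,0), (0,0,1,0), (1,1,0,0), (1,0,1,0)]
print(len(closure(T, 2, 4)))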
Futhermore, <ref> has direct implications to the theory of randomness extractors, which we discuss next. Randomness Extractors. The theory of randomness extractors is an active research area that was initiated in <cit.> with the motivation of simulating randomized algorithms with access to “weak” randomness sources. The main objective of this theory is to design extractors that are capable of purifying imperfect randomness sources into high-quality random bits or bit sequences. Extractors and related objects such as dispersers, samplers, and condensers have since found numerous applications in constructing other pseudorandom objects such as pseudorandom generators <cit.> and expander graphs <cit.>, as well as applications in other areas of theoretical computer science and mathematics including cryptography <cit.>, combinatorics <cit.>, hardness of approximation <cit.>, error correcting codes <cit.>, and metric embeddings <cit.>. A deterministic extractor for a family 𝒳 of distributions over {0,1}^n is a map f:{0,1}^n→{0,1}^m such that for any 𝐗∈𝒳, f(𝐗) is close to the uniform distribution in statistical distance. It is common to measure the amount of randomness in a random variable 𝐗 by its min-entropy, defined (𝐗) -log_2 max_x∈{0,1}^n[X=x]. It is easy to show that no deterministic extractor can extract from general n-bit randomness sources of min-entropy as high as n-1 <cit.>. As a result, researchers in the area have explored two directions. Much of the focus in the area has been given to the more powerful seeded extractors that have access to an additional short purely random seed. This article contributes to another line of work that has extensively investigated the extra assumptions on the randomness sources that allow for explicit deterministic extractors and dispersers to exist. A widely studied class of sources in this latter direction, introduced in <cit.> is “samplable sources”, where the sources of randomness are distributions sampled by applying a low-complexity map (e.g., a decision forest, local map, ^0 circuit, ^0 circuit, an affine or a low-degree map) to the uniform distribution. Unfortunately, constructing explicit extractors even for sources samplable by really low-complexity maps has been quite challenging, and for example all the known constructions of extractors for local sources require quite high min-entropy of Ω(√(n)) <cit.>. Due to the difficulty of constructing good explicit extractors and motivated by applications in complexity theory such as circuit lower bounds <cit.> and lower bounds for distribution-sampling <cit.>, researchers have considered the seemingly easier task of proving the existence of low-complexity extractors <cit.>. The state of affairs is much worse when it comes to randomness extraction from sources sampled by more powerful maps such as ^0[⊕] or low-degree _2-polynomial maps. In this case obtaining nontrivial explicit constructions and even non-explicit low-complexity extractors remains open. In fact, the same problems are open even in the case of dispersers. Here a map f:{0,1}^n →{0,1} is a disperser for 𝒳 if for every source 𝐗∈𝒳, the support of f(𝐗) is {0,1}. On the positive side, Chattopadhyay, Goodman, and Gurumukhani <cit.>, recently proved the existence of deterministic (not necessarily low-complexity) extractors for low-degree _2-polynomial sources with logarithmic min-entropy. §.§ Our Results on Hilbert Functions We obtain our answer to <ref> by first reducing it to a purely combinatorial problem. 
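For very small input lengths these notions can be checked exhaustively, which is a convenient sanity check when experimenting with candidate constructions. The sketch below enumerates the output distribution of a toy 2-local sampler, computes its min-entropy, and tests a candidate one-bit function both as a disperser and as an extractor (via its statistical distance from an unbiased bit); the sampler and the test function are our ad hoc choices.

import numpy as np
from itertools import product
from collections import Counter

def output_distribution(sampler, m):
    """Exact distribution of X = sampler(U_m) for a sampler on m input bits."""
    counts = Counter(tuple(sampler(u)) for u in product((0, 1), repeat=m))
    total = 2 ** m
    return {x: c / total for x, c in counts.items()}

def min_entropy(dist):
    return -np.log2(max(dist.values()))

def bias_as_extractor(f, dist):
    """Statistical distance of the bit f(X) from a uniform bit."""
    p1 = sum(p for x, p in dist.items() if f(x) == 1)
    return abs(p1 - 0.5)

# Toy 2-local sampler g : {0,1}^4 -> {0,1}^3 (each output bit reads <= 2 inputs).
g = lambda u: (u[0] ^ u[1], u[1] & u[2], u[2] ^ u[3])
dist = output_distribution(g, 4)
f = lambda x: x[0] ^ x[1] ^ x[2]              # candidate degree-1 test function
print("min-entropy     :", round(min_entropy(dist), 3))
print("disperser       :", {f(x) for x in dist} == {0, 1})
print("extractor error :", round(bias_as_extractor(f, dist), 3))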
In particular, via an algebraic geometric argument, we prove the following theorem which states that the minimum value of Hilbert functions over subsets of a grid is exactly captured by a combinatorial quantity related to down-closed sets. (A set T⊆^n is said to be down-closed if T is closed under decreasing any coordinates of its elements.) Let be a field, and A_1,…, A_n⊆ be finite sets of size |A_i|=r_i. Define A=A_1×⋯× A_n. For every k≤ |A|, min_S⊆ A: |S|=k_S(d,)= min_ T⊆ F: |T|=k|T_≤ d| , where F=∏_i {0,…, r_i-1} and T_≤ d={x∈ T: ∑_i x_i ≤ d}. Let I be the ideal of [X_1,…,X_n] associated with a set S⊆ A, that is, the set of all polynomials vanishing on S. Classical results in algebraic geometry (such as Hilbert's Nullstellensatz) establish close connections between the structure of S and the structure of I, which allows us to focus on studying I. The proof of <ref> is based on the idea that the ideal I can be reasonably approximated by another ideal, the ideal of leading terms of I. This approximation preserves important information about I, and consequently, about S as well. In particular, when the ideal of leading terms of I is defined with respect to a specific total order of monomials compatible with the total degree, it can be shown that such an approximation preserves the value of the Hilbert function. One advantage of working with the ideal of leading terms is that it is a monomial ideal, that is, an ideal generated by monomials, whose relatively simple structure can be analyzed using combinatorial tools. We remark that the concept of transforming a general ideal into a monomial ideal is closely related to the theory of Gröbner bases, which serves as a basis of computational algebraic geometry. For a detailed discussion, see, e.g., <cit.>. This concept is also used in Smolensky’s algebraic method for proving circuit lower bounds <cit.>. <ref> allows us to reduce the problem of determining the smallest value of Hilbert function of a set of size k to understanding the smallest number of low-Hamming-weight points in down-closed sets of the same size. We then solve this combinatorial problem by proving that the minimum is obtained by the down-closed set M_F(k) which is defined as the set of k lexicographically first elements of F. Let 1≤ r_1≤⋯≤ r_n be integers and let F=∏_i=1^n {0,…, r_i-1}. Then min_ T⊆ F|T_≤ d| = |M_F(k)_≤ d| . In the case of r_1=⋯=r_n=2, we prove the above theorem via an elementary combinatorial argument, that via a series of operations turns any set of k elements into M_F(k) without increasing the number of elements of Hamming weight ≤ d. We prove the general case by building on a recent result of Beelen and Datta <cit.>. This result generalizes the work of Wei <cit.> and Heijnen–Pellikaan <cit.> in studying the generalized Hamming weights of certain linear codes. We record the following corollary of our results specialized to finite fields which generalizes the bounds due to <cit.>. For every prime power q, and n,k,d∈ where k≤ q^n, we have min_S⊆_q^n: |S|=k_S(d,_q)= |M^n_q(k)_≤ d| . In particular, setting q=2, for every n,k,d∈ where k≤2^n, and every S⊆_2^n of size |S|=k, _S(d,_2)≥log(k)≤ d . §.§.§ Degree-d Closure of Sets Motivated by its applications to combinatorial geometry, the notion of degree-d closures of subsets of 𝔽_q^n was introduced in <cit.>. This concept has since found further applications and connections to complexity theory <cit.> and pseudorandomness <cit.>. 
Recall that the degree-d closure _d(T) of a set T⊆^n over a finite field is the set of all points a∈^n such that any degree-d polynomial vanishing on T also vanishes at a. Nie and Wang <cit.> proved the following result. Let n,d∈ and T⊆_q^n. Then |_d(T)| ≤q^n/__q^n(d, _q)· |T|. Building on our results on Hilbert functions, we obtain an improvement of <ref> by obtaining a tight upper bound on the size of degree-d closures of sets. Let n,d,m∈. Let T⊆_q^n be a set of size m. Then |_d(T)|≤max_0≤ k≤ q^n: |M^n_q(k)_≤ d|≤ m k = max_0≤ k≤ q^n: |M^n_q(k)_≤ d|=m k if m≤__q^n(d, _q), q^n otherwise. Moreover, this bound is tight in the sense that for any 0≤ m≤ q^n, there exists T⊆_q^n of size m for which (<ref>) holds with equality. In fact, the set T of size m that attains the bound in the above theorem can be constructed explicitly; see <ref> for details. For convenience, we state the following corollary which is used later in the paper. For n,d,δ∈, denote by N(n,d,δ) the number of monomials X_1^e_1⋯ X_n^e_n with e_1,…,e_n≤δ and e_1+…+e_n≤ d. Let n,d,ℓ∈. If T⊆_q^n is a set of size less than N(ℓ,d,q-1), then |_d(T)| < q^ℓ. In particular, if q=2 and T⊆_2^n is a set of size less than ℓ≤ d, then |_d(T)| < 2^ℓ. Observe that |M^n_q(q^ℓ)_≤ d|=N(ℓ,d,q-1). Then apply <ref>. Let us compare our bound with the bound of Nie and Wang in some specific settings. Suppose ℓ≤ n. Let T⊆_2^n be a set of size ℓ≤ d-1. Then by <ref>, we have the bound |_d(T)|≤ 2^ℓ-1. On the other hand, the bound of Nie and Wang (<ref>) gives |_d(T)|≤2^n/n≤ d· |T| , which is exponential in n, rather than in ℓ, at least when d≤(1/2-c) n for some constant c>0. Suppose ℓ≤ n and d<q. Let T⊆_q^n be a set of size N(ℓ,d,q-1)-1=ℓ+dd-1. Then by <ref>, we have the bound |_d(T)|≤ q^ℓ-1, which is exponential in ℓlog q. On the other hand, the bound of Nie and Wang (<ref>) gives |_d(T)|≤q^n/n+dd· |T| , which is exponential in nlog q, rather than in ℓlog q, at least when n+d≤ q^1-c for some constant c>0. In <cit.>, Doron, Ta-Shma, and Tell explicitly asked if there exists a small set T ⊆_q^n whose degree-d closure is very large. Our <ref> gives an upper bound on the size of the degree-d closure of T in terms of the size of T, which is tight in the sense that there exist sets T that exactly meet this bound for every cardinality of T. Moreover, such sets T can be constructed explicitly (see <ref>). Thus, we completely resolve the question posed by Doron, Ta-Shma, and Tell. §.§ Our Results on Randomness Extractors Continuing the line of work on low-complexity extractors, in this paper we investigate the power of low-degree polynomials in randomness extraction. For which families 𝒳 of sources does there exist a low-degree disperser? Similarly, for which families 𝒳 of sources does there exist a low-degree extractor? Let us first discuss the easier task of obtaining low-degree dispersers before moving on to our main application of low-degree extractors. For simplicity, we will focus on the most important case of extracting randomness over _2, but all our results easily generalize to _q. Non-explicit constructions of low-degree dispersers can be obtained via understanding the probability that a random low-degree polynomial is a disperser for a family 𝒳 of distributions over {0,1}^n which we identify with _2^n in the natural way. Our starting point is the observation that the notion of Hilbert functions can be used to exactly describe the probability that a random degree-d polynomial f:{0,1}^n →{0,1} is a disperser for a fixed source 𝐗∈𝒳. 
Indeed, this probability is exactly equal to 1-2^1-_S(d, _2), where S=support(𝐗). Thus, in particular, <ref> can be used to bound the probability that a random degree-d polynomial is not a disperser for a fixed source. §.§.§ Low-Degree Dispersers Let 𝒳 be a family of sources of min-entropy ≥ k. Observing that the support of any distribution 𝐗∈𝒳 is of size ≥ 2^k, one gets as an immediate corollary of <ref>, the existence of low-degree dispersers 𝒳 as long as |𝒳| is small. Let n,d,k≥1. Let 𝒳 be a family of distributions of min-entropy ≥ k. Then a random degree-d polynomial over _2 is a disperser for 𝒳 with probability at least 1-|𝒳|· 2^1-k≤ d . This theorem itself implies the existence of low-degree dispersers for several interesting families of samplable sources such as sources sampled by local maps, bounded-depth decision forests, and polynomial-sized bounded-fan-in circuits, to name a few. A map f:{0,1}^m→{0,1}^n is called ℓ-local if each of its output bits depends on at most ℓ input bits. A depth-ℓ decision forest is a map f where each output bit can be computed as a depth-ℓ decision tree. It is easy to obtain an upper bound exponential in (n) on the number of local or decision forest sources. Hence we get the following as a corollary of <ref>. Let 1≤ℓ≤ d≤ n be integers. There exists a degree-d disperser * for the family of ℓ-local sources on {0,1}^n with min-entropy k> d(2^ℓ n + 2ℓ n log n)^1/d. * for the family of depth-ℓ decision forest sources on {0,1}^n with min-entropy k> d((ℓ+log n) 2^ℓ+1 n)^1/d. As mentioned above, since in addition to the min-entropy requirement, the only requirement in <ref> about the family 𝒳 is a bound on |𝒳|, it can be used to immediately obtain low-degree dispersers for various other families of sources as well. For example, since for any c, the number of Boolean circuits with ≤ n^c bounded fan-in gates is at most 2^O(n^c+1), one can also use <ref> to obtain a degree-O(c) disperser for such families of circuits. However, we will not do an exhaustive search for all such applications, and instead our main disperser applications will focus on two powerful families of sources, namely sources sampled by low-degree polynomials over _2 and [⊕] circuits which we define as the family of unbounded-depth polynomial-size Boolean circuits with AND, OR, XOR, NOT gates of unbounded fan-in, where the input gates are not counted towards the size. Note that low-degree polynomial maps f:{0,1}^m →{0,1}^n, even affine ones, can depend on the entire input for any m ≫ n and thus one cannot simply bound |𝒳| when 𝒳 is the family of sources sampled by low-degree polynomials. This property holds for [⊕] circuits, as we allow them to non-trivially depend on an arbitrary number of input gates (since the circuit gates have unbounded fan-in). Nevertheless, utilizing an “input-reduction” trick of <cit.> which applies to both the foregoing families of sources, it can be shown that for our disperser purposes we may assume the input of both families of sources to be of length O(n). This allows us to apply <ref> to obtain low-degree dispersers for both of these families. For every 1≤ℓ< d≤ n, there exists a degree-d disperser * for the family of degree-ℓ sources on {0,1}^n with min-entropy k≥(12^ℓ· d^d· n)^1/d-ℓ+1. * for the family of n^ℓ-size [⊕] circuit sources on {0,1}^n with min-entropy k≥ (30^2· d^d· n^2ℓ)^1/d-2+1. In particular, for every ℓ∈, there is a degree-(ℓ+2) disperser for degree-ℓ sources on {0,1}^n with min-entropy Ω(√(n)). 
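As a concrete sanity check of the observation underlying these results — that for a fixed support S a uniformly random degree-d polynomial is constant on S with probability exactly 2^1-_S(d,_2) — the following brute-force Python sketch (a minimal illustration of ours, not part of the formal results; the set S and the parameters n=4, d=2 are arbitrary toy choices) enumerates all degree-2 polynomials in 4 variables over _2 and compares the observed fraction of polynomials that are constant on S with the formula.

from itertools import combinations, product

def monomials_le(n, d):
    # all multilinear monomials of degree <= d, encoded as tuples of variable indices
    return [m for r in range(d + 1) for m in combinations(range(n), r)]

def eval_mon(m, x):
    # evaluate the monomial prod_{i in m} X_i at the point x in {0,1}^n
    return int(all(x[i] for i in m))

def rank_f2(rows):
    # rank over F_2 of a 0/1 matrix given as a list of rows (Gaussian elimination)
    rows, rank = [r[:] for r in rows], 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

n, d = 4, 2
S = [(0, 0, 0, 0), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 1, 1), (1, 1, 1, 1)]  # a toy support
mons = monomials_le(n, d)
hilb_S_d = rank_f2([[eval_mon(m, x) for m in mons] for x in S])

constant_count = 0
for coeffs in product([0, 1], repeat=len(mons)):        # all 2^11 degree-<=2 polynomials
    vals = {sum(c * eval_mon(m, x) for c, m in zip(coeffs, mons)) % 2 for x in S}
    constant_count += (len(vals) == 1)
print(constant_count / 2 ** len(mons), 2 ** (1 - hilb_S_d))  # the two numbers agree

The two printed numbers coincide because the restriction map from degree-d polynomials to functions on S is linear, its image has dimension _S(d,_2), and the image contains both constant functions.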
We note that both of these source families are very powerful, and to the best of our knowledge, no nontrivial low-complexity dispersers for either of these families of sources was known prior to this work (except in the easier case of degree-1 sources which corresponds to affine sources for which explicit extractors for logarithmic entropy was recently proved <cit.>). Let us also point out that the two foregoing classes have incomparable power, and that it is straightforward to use our proof technique to conclude the same result for a class of sources that generalizes both and constant-degree polynomials. Indeed, the input-reduction and counting idea works for the “hybrid” class of polynomial-size circuits which extends by allowing additional unbounded fan-in gates computing arbitrary polynomials of fixed constant degree. However, for ease of exposition, we have chosen to present only the results for and low-degree sources separately. §.§.§ Low-Degree Extractors Next, we move on to another application concerning the existence of low-degree extractors for samplable sources. Can we prove the existence of low-degree extractors for all the families for which we proved the existence of low-degree dispersers? We prove this by showing an analogue of <ref> for extractors. Let 𝒳 be a family of distributions of min-entropy k≥5logn over {0,1}^n for large enough n. Then for every d≥ 6, a uniformly random degree-d polynomial is an -extractor for 𝒳 with probability at least 1-|𝒳|· e^3n-O(k^d/2)/n^2 for =(2d)^d· k^-d/4. A similar statement (see <ref>) holds for families of sources that are close to convex combinations of another small family of sources. Combined with the input-reduction trick, we obtain as a corollary, the existence of low-degree extractors for various families of sources, notably, lower-degree sources and circuits. For all ℓ,d≥1, and all large enough n, and k≥ 5log n. There exists a degree-d _2-polynomial that is an -extractor for the following families of sources over {0,1}^n for =(2d)^d· k^-d/4: * ℓ-local sources for k≥ (2^ℓ n^3logn)^2/d. * depth-ℓ decision forest sources for k≥ (2^ℓ n^3(logn+ℓ))^2/d. * degree-ℓ sources for k≥(3^ℓ n)^6/d-2ℓ. * n^ℓ-size circuit sources (with unbounded number of input gates) for k≥ 3n^4(ℓ+1)/d-4. In <ref>, we further extend our low-degree extractors to multi-output extractors that output Θ(k) bits. This is done by independently picking random degree-d polynomials p_1,…, p_t for some t=Θ(k), and analyzing the probability that each p_i is an extractor for the family of sources obtained by 𝒳 conditioned on the values of p_1,…, p_i-1. Let us now discuss our proof technique for <ref>. Recall that <ref> was a corollary to <ref> which showed that a random polynomial is with high probability non-constant on the support of any fixed high min-entropy distribution. A priori it is not clear how to use this bound on the Hilbert function to prove <ref>. Indeed, let us consider the simpler case of a fixed k-flat source 𝐗 over {0,1}^n, which is uniformly distributed over a set S⊆{0,1}^n with |S|=2^k. Note that a map p:{0,1}^n→{0,1} is an extractor for 𝐗 if it has small bias on S. Thus, for example, to prove the special case of <ref> for small families of k-flat sources, we would need to prove that a random degree-d polynomial is small-biased on S with high probability. However, <ref> only tells us that _S(d,_2)≥k≤ d, which is not enough to prove concentration bounds for the bias of a random degree-d polynomial on an arbitrary set S. 
We note that when S is highly structured, that is when it is an affine subspace, this problem is equivalent to questions about list-decoding size of Reed-Muller codes, and known results such as one by Kaufman, Lovett, and Porat <cit.> that show that the number of distinct -biased degree-d polynomials on a k-dimensional subspace S is at most (1/ϵ)^k^d-1 could be utilized. However, for our application we have to deal with arbitrary sets S. Uniform covering by sets of full Hilbert dimension. We say that a set T⊆{0,1}^n has “full Hilbert dimension” if _T(d, _2)=|T|. Note that when T has full Hilbert dimension, then the restriction of a random degree-d polynomial to T is uniformly distributed over {0,1}^T. In particular, if T is a sufficiently large set of full Hilbert dimension, then a random degree-d polynomial is small-biased on T with high probability. We use this observation to design our technique for bounding the probability that a random degree-d polynomial is small-biased on any fixed source 𝐗 of large min-entropy. For simplicity we describe the idea for flat sources. In this case, 𝐗 is uniformly distributed over a set S with |S|≥ 2^k. It is sufficient to prove the existence of an almost-uniform covering of S by large sets T_1,…, T_t of the same size with full Hilbert dimensions, where we call a covering almost-uniform if each element x∈ S belongs to roughly tm/|S| many sets, where we assume |T_i|=m. We obtain such a covering by analyzing the probability that a uniformly picked subset T_i⊆ S has full Hilbert dimension. Using our bound on the Hilbert function, <ref>, which allows us to bound the size of the “degree-d closure” of small sets, we prove that a random set T_i of size m, for some m=Θ(k)≤ d, has full Hilbert dimension with high probability. Similarly, we prove using the Bayes rule, that we may pick these good sets T_i's of full Hilbert dimension in a way that leads to an almost uniform covering. Since T_i's are of sufficiently large size Θ(k)≤ d and of full Hilbert dimension, we can use the Hoeffding inequality to bound the probability that a random degree-d polynomial is biased on a T_i to be exponentially small in Θ(k)^d, which is good enough for our applications to existence of low-degree extractors. We obtain the following result which can be used to prove <ref>. Let n,d,k≥1, and >0 be a real. Then for every distribution 𝐗 over {0,1}^n with (𝐗)≥ k, a uniformly random degree-d polynomial f is an -extractor for 𝐗, with probability at least 1-e^3n-^2ℓ≤ d/(Cn^2) where ℓ=k/2-log(32n/) and C=7·(32)^2. We find our technique of obtaining almost uniform coverings with sets of full Hilbert dimension to be powerful, and hope that it will find other applications beyond the ones explored here. §.§ Remarks Correlation bounds over arbitrary subsets. We note that our proof of <ref> (<ref>) can be modified to the following correlation bounds with any fixed function. Let n,d,k≥1, >0 be a real, and g:_2^n →_2 be a fixed function. Then for every distribution 𝐗 over {0,1}^n with (𝐗)≥ k, for a uniformly random degree-d polynomial f we have [f(𝐗)= g(𝐗)] = 1/2±, with probability at least 1-e^3n-^2ℓ≤ d/(Cn^2) where ℓ=k/2-log(32n/) and C=7·(32)^2. This generalization is quite straightforward, as once we obtain a uniform covering by sets of maximum Hilbert dimension, then Hoeffding bounds can be used to bound the correlation of a random polynomial with the fixed function restricted to the sets belonging to the cover. 
This can then be used to bound the overall correlation with the fixed function in a similar way to the proof of <ref>. Punctured Reed-Muller codes. The special case of <ref> when 𝐗 is a flat source can be interpreted as a bound on the list-decoding size of Reed-Muller codes when "punctured" on a large set S⊆_2^n. Recall that the binary Reed-Muller code RM[d,n] consists of codewords of length 2^n over _2 that correspond to the evaluation vectors of degree ≤ d polynomials over _2. Given a set S⊆_2^n, the resulting punctured code consists of the evaluation of degree ≤ d polynomials on S. In this context, <ref> can be used to bound the list-size of any puncturing of the Reed-Muller code, showing that for any word w from _2^S, only a small fraction of codewords are within radius 1/2- of w. Another interpretation of <ref> is that any puncturing of the Reed-Muller codes over a set S can be turned into a "small-biased" code without much loss in the rate of the code. Sampling lower bound for polynomial sources. Our low-degree extractor for lower-degree sources (<ref>) has a direct application to distributions that are hard to sample by low-degree polynomials. Indeed, by an argument similar to the proof of <cit.>, <ref> implies the existence of a degree-O(d) polynomial p for which the distribution (𝐔, p(𝐔)) cannot be sampled by any degree-d source, where 𝐔∼𝐔_n. Suppose that p is a degree-O(d) polynomial that is an -extractor for the family of degree ≤ 2d sources over {0,1}^n of min-entropy ≥n/2, where =o(1). The existence of such a polynomial p is guaranteed by <ref>. Now suppose that (G(𝐔'), g(𝐔')), where 𝐔'∼𝐔_m for some m≥ 1, is a degree ≤ d source sampling (𝐔, p(𝐔)). In particular, G is an n-bit degree ≤ d source and g is a degree ≤ d polynomial. Consider the n-bit random variable 𝐑=G(𝐔')· g(𝐔')+ 𝐔_n· (1-g(𝐔')). Since 𝐑 is sampled by a degree ≤ 2d source of min-entropy n-O(1), [p(𝐑)=1] = 1/2+o(1). On the other hand, by the definition of 𝐑, we have [p(𝐑)=1] ≥1/2+ Ω(1), which is a contradiction. Related Work. An independent and concurrent paper by Alrabiah, Goodman, Mosheiff, and Ribeiro <cit.> proves the existence of low-degree extractors for families of sources similar to those considered in our work, as well as for sumset sources. While the proofs are quite different, they both rely on bounds on the dimension of punctured Reed-Muller codes (equivalently, the Hilbert function). Acknowledgments. We thank Omar Alrabiah, Jesse Goodman, Jonathan Mosheiff, and João Ribeiro for sharing with us an early draft of their work. We would also like to thank Jesse Goodman and S. Venkitesh for helpful discussions and pointers. We are very grateful to the anonymous reviewers for their comments and pointers to related work. Part of this work was conducted while the second author was visiting the Simons Institute for the Theory of Computing at UC Berkeley; he extends his thanks to the institute for its support and hospitality. § PRELIMINARIES All logarithms in this paper are base 2. By  we denote the set of non-negative integers. For a positive integer n, by [n] we denote the set {1,…,n}. For a prime power q, denote by _q the finite field with q elements. For simplicity, throughout this paper, we refer to a polynomial as a degree-d polynomial if its total degree is at most d. When q is a prime power, by qn,d we denote the set of all degree-d polynomials from [X_1,…,X_n] with individual degrees at most q-1. Note that each element of qn,d corresponds to a unique map _q^n →_q. Let r_1,…,r_n≥ 1 be integers and F=∏_i=1^n{0,…,r_i-1}.
For x ∈ F and i ∈ [n], x_i denotes the ith coordinate of x. For x∈ F, we define its generalized Hamming weight as |x|∑_i x_i, where the summation is over the integers. For an integer d≥ 0, and a set T⊆ F, we denote the set of its elements of generalized Hamming weight ≤ d by T_≤ d{x∈ T : |x|≤ d} . For a,b ∈ F, we write a≤_P b if a_i≤ b_i for all i∈ [n]. We say a subset T⊆ F is down-closed if for all a,b∈ F such that a≤_P b, if b is in T, then so is a. Similarly, we say a subset T⊆ F is up-closed if for all a,b∈ F such that a≤_P b, if a is in T, then so is b. The lexicographic order ≺ on F is defined as follows. For distinct x, y ∈ F, x precedes y, denoted x≺ y, in lexicographic order if x_i<y_i, where i is the smallest index such that x_i≠ y_i. We will be studying the following quantity. For F=∏_i=1^n{0,…,r_i-1} and k≤ |F|, let ℋ_F(d,k)min_T |T_≤ d| , where the minimum is taken over all down-closed sets T⊆ F with |T|=k. Moreover, denote ℋ_F(d,k) by ℋ^n_q(d,k) in the special case where r_1=…=r_n=q for some q≥ 1. §.§ Probability Distributions We use lowercase letters such as x,y to denote vectors, uppercase bold letters such as X, Y to denote random variables, and 𝒳, 𝒴 to denote families of distributions. By U_n we denote the uniform distribution over {0,1}^n. The statistical distance between two distributions 𝐀 and 𝐁 over a finite domain X is Δ(𝐀, 𝐁)= 1/2(∑_x ∈ X| [x ∈𝐀]- [x ∈𝐁]| ) . We say two distributions 𝐀 and 𝐁 are ε-close if Δ(𝐀, 𝐁) ≤ε. For a distribution 𝐗∼{0,1}^n, the min-entropy of 𝐗 is (𝐗)=min_x ∈support(𝐗)-log ([𝐗=x]) . We will use following forms of Chernoff's and Hoeffding's bounds (see, e.g., <cit.>). Let X_1,…,X_n∈{0,1} be independent random variables. Let X=∑_i=1^nX_i and μ=𝔼(X). Then we have [|X-μ|≥δμ]≤2e^-μδ^2/3 for all 0<δ<1. Let X_1,…,X_n∈[0,1] be independent random variables, X=∑_i=1^n X_i and μ=[X]. Then, [|X-μ|≥ R] ≤ 2e^-2R^2/n. §.§ Randomness Sources, Dispersers, and Extractors A distribution 𝐗∼{0,1}^n is a source from a class 𝒞 of functions, if 𝐗= f(𝐔_m) for some f: {0,1}^m →{0,1}^n ∈𝒞. A distribution 𝐘 is a convex combination of sources 𝐗_i if 𝐘=∑_i p_i 𝐗_i for some non-negative p_i satisfying ∑_i p_i=1, i.e., 𝐘 samples from each 𝐗_i with probability p_i. One of the most powerful classes of sources that we consider in this work is the class of circuits of polynomial size. An circuit is an unbounded-depth Boolean circuit consisting of , , , gates of unbounded fan-in. The size of such a circuit is the number of non-input gates in it. We focus on the class of circuit as it generalizes circuit classes previously studied in this context: unbounded-depth circuits of bounded fan-in from , and bounded-depth circuits of unbounded fan-in from, say, ^0. We remark that we define sources (see <ref>) as sources where each output is computed by an circuit of polynomial size but with an arbitrary (possibly super-polynomial) number of inputs. This explains why in this context and ^0 circuits are incomparable, and why we work with circuits generalizing both of the aforementioned classes. In fact, our results hold even for a larger class of circuits where not only but arbitrary constant-degree polynomials over _2 can be computed at gates (see the discussion at the end of <ref>). Let n,d,m∈, f:{0,1}^m→{0,1}^n, and 𝐗 be a distribution over {0,1}^n that is generated as f(𝐔_m). * 𝐗 is called a d-local source if every output bit of f depends only on at most d of its input bits. 
* 𝐗 is called a depth-d decision forest source if every output bit of f is determined by a depth-d decision tree of its input variables. * 𝐗 is called a degree-d source if every output bit of f is a degree-d polynomial over _2. * 𝐗 is called a size-n^d circuit source if there is an circuit of size n^d that computes all output bits of f. Note that every d-local source is a depth-d decision forest source, and a degree-d source. Also, every depth-d decision forest source is a degree-d source and a 2^d-local source. We will use the following bounds on the numbers of d-local sources and depth-d decision forest sources. Let n,d≥1. * The number of d-local sources over {0,1}^n is bounded from above by 2^2^d n + 2d nlog n. * The number of depth-d decision forest sources is bounded from above by 2^(d+log n)2^d+1 n. Every d-local source over {0,1}^n can be expressed as f(𝐔_m) where f:{0,1}^m →{0,1}^n is a d-local function, and we may assume without loss of generality that m≤ d n, as the number of relevant variables is bounded from above by d n. Then the number of distinct d-local sources is at most (dnd· 2^2^d)^n≤ 2^2^d n + 2d nlog n . Similarly, for depth-d decision forest sources we may assume that m≤ 2^d n, as each depth-d decision tree depends on at most 2^d variables. The number of depth-d decision trees on m variables is at most (2m)^2^d. Thus, the number of distinct depth-d decision forest sources on {0,1}^n is at most 2^(d+log n)2^d+1 n . For polynomial and circuit sources where the number of input bits cannot be bounded by a small function of n (unlike the sources considered in <ref>), we will need the following bounds on the number of such sources for a fixed number of input bits m. Let n,d,m≥1. * The number of degree-d polynomials f_2^m→_2^n is bounded from above by 2^n·m≤ d. * The number of circuits C{0,1}^m→{0,1}^n of size n^d is bounded from above by 2^4n^d(n^d+m). The bound on the number of degree-d polynomials follows from the observation that the number of multilinear monomials in m variables of degree at most d is m≤ d. For the bound on the number of n^d-size circuits, first notice that each gate of C can be described by m+n^d bits specifying the set of gates feeding it, and 3 bits specifying the function computed at the gate. Additional nlog(n^d+m) bits are sufficient to specify the n output bits of the circuit. Therefore, the total number of such circuits is bounded from above by 2^n^d(m+n^d+3)+nlog(n^d+m)≤ 2^4n^d(n^d+m) . A function Disp:{0,1}^n →{0,1} is a disperser for a family 𝒳 of sources over {0,1}^n with min-entropy k, if for every source X∈𝒳 with (X)≥ k, the support of Disp(X) is {0,1}. A function Ext:{0,1}^n →{0,1}^m is an ε-extractor for a family 𝒳 of sources over {0,1}^n with min-entropy k, if for every source 𝐗∈𝒳 with (𝐗)≥ k, Δ(Ext(𝐗(𝐔_t)),𝐔_m) ≤ε. For clarity of presentation, in this paper when working with sources that are guaranteed to have entropy (𝐗)≥ k, we will always assume that k is an integer. §.§ Hilbert Functions and Standard Monomials In this section, we recall some necessary definitions (see, e.g., <cit.>). Let be a field, X_1,…,X_n be indeterminates, and [X_1,…,X_n] be the polynomial ring in n indeterminates over . For a polynomial f∈𝔽[X_1,…,X_n] and S⊆^n, let f|_S∈^S be the restriction of f to S. For d∈, by Γ_S(d)⊆^S we denote the vector space spanned by f|_S for all degree-d polynomials f: _U(d, _q):={f|_S f∈[X_1,…,X_n], (f)≤ d} . 
For a set S⊆𝔽^n, the (affine) Hilbert function of S over , _S( · , )→, is defined as the dimension of Γ_S(d) over , i.e., _S(d, ):=_(Γ_S(d)) . Let ≼ be a total order on the monomials in a polynomial ring [X_1,…,X_n]. The order ≼ is called a monomial order if 1 is the minimal element of ≼, and for all monomials m_1, m_2, m satisfying m_1≼ m_2, we have that m_1 m≼ m_2 m. The order ≼ is degree-compatible if for all monomials m_1, m_2 such that (m_1)<(m_2), we have that m_1≼ m_2. Examples of degree-compatible monomial orders include the graded lexicographic and graded reverse lexicographic orders. The graded lexicographic order ≤_grlex and the graded reverse lexicographic order ≤_grevlex are defined as follows. For a pair of monomials m_1=X_1^α_1⋯ X_n^α_n and m_2=X_1^β_1⋯ X_n^β_n, let α=∑_i=1^n α_i, β=∑_i=1^n β_i, and γ=(β_1-α_1,…,β_n-α_n). We have that m_1 ≤_grlex m_2 if and only if either α < β, or α=β and the leftmost non-zero entry of γ is positive. Similarly, m_1 ≤_grevlex m_2 if and only if either α < β, or α=β and the rightmost non-zero entry of γ is negative. For a nonzero polynomial f∈[X_1,…,X_n], the leading monomial of f under a monomial order ≼ is the largest monomial of f under ≼. Let R be a commutative ring (such as the polynomial ring [X_1,…,X_n]). An ideal of R is a subset I of R such that for all a,b∈ I and r∈ R, we have that a+b∈ I and ra∈ I. Let I be an ideal of [X_1,…,X_n], and ≼ be a monomial order. A standard monomial m of I is a monomial in X_1,…,X_n that is not the leading monomial of any nonzero polynomial in I. For an ideal I and d∈, (I) denotes the set of all standard monomials of I, and _≤ d(I) denotes the set of all standard monomials of I of degree at most d: _≤ d(I) = {m∈(I)(m)≤ d} . For a set S⊆^n, by I(S) we denote the ideal of polynomials in [X_1,…,X_n] vanishing on S, I(S)={f∈𝔽[X_1,…,X_n]: f|_S=0^S} . For an ideal I of [X_1,…,X_n], define the set V(I)⊆^n by V(I)={a∈^n: f(a)=0 for all f∈ I} . By definition, for all f∈ I and a∈ V(I), we have f(a)=0. So I⊆ I(V(I)). Finally, for a set S⊆^n, define (S) = (I(S)) and _≤ d(S) = _≤ d(I(S)) . We say that a set T of monomials is down-closed if for all monomials m and m' such that m∈ T and m' divides m, it holds that m'∈ T. It is easy to see that SM(S) is down-closed. Indeed, if m' was the leading monomial of a polynomial p∈ I(S), then m would be the leading monomial of the polynomial p·(m / m') ∈ I(S). We will use the following facts about (S) and _≤ d(S), which are proven, for example, in <cit.> and <cit.>. Let S⊆𝔽^n be a finite set. Then (a) for every monomial order ≼, |S|=|SM(S)| ; (b) for every degree-compatible monomial order ≼ and every d∈, _S(d, )=|SM_≤ d(S)| . § HILBERT FUNCTIONS OF SETS IN FINITE GRIDS Let be a field. We consider Hilbert functions of subsets of a finite grid A=∏_i=1^n A_i, where each A_i is a finite subset of the field . The main result of this section is that the minimum value _S(d,) of a set S⊆ A of size k equals the quantity ℋ_F(d,k) introduced in <ref>, where F=∏_i=1^n {0,1,…, |A_i|-1}. Consider the following setting: Let r_1,…, r_n be integers such that 1≤ r_i≤ || for i∈ [n]. For each i∈ [n], let A_i be a subset of consisting of r_i distinct elements a_i,1,…,a_i,r_i∈. Let A be the Cartesian product ∏_i=1^n A_i. Let ℳ be the set of monomials dividing ∏_i=1^n X_i^r_i-1. Let σ_A be the bijection from ℳ to A defined by σ_A: ∏_i=1^n X_i^e_i↦ (a_1, e_1+1,…, a_n, e_n+1). Finally, fix a degree-compatible monomial order ≼. 
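Before turning to the lemmas, the following small Python sketch (ours, not part of the formal development; it assumes =_2 and brute-forces tiny parameters) illustrates the identity this section establishes: for the set of the k lexicographically first points of {0,1}^n, which is down-closed, the Hilbert function at degree d equals the number of its points of Hamming weight at most d, and randomly chosen sets of the same size never attain a smaller value. The Hilbert function is computed directly from the definition, as the rank over _2 of the matrix of evaluations of the multilinear monomials of degree at most d on S.

import random
from itertools import combinations, product

def monomials_le(n, d):
    # all multilinear monomials of degree <= d, encoded as tuples of variable indices
    return [m for r in range(d + 1) for m in combinations(range(n), r)]

def eval_mon(m, x):
    # evaluate the monomial prod_{i in m} X_i at the point x in {0,1}^n
    return int(all(x[i] for i in m))

def rank_f2(rows):
    # rank over F_2 of a 0/1 matrix given as a list of rows (Gaussian elimination)
    rows, rank = [r[:] for r in rows], 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def hilb(S, d, n):
    # the affine Hilbert function of S over F_2 at degree d
    mons = monomials_le(n, d)
    return rank_f2([[eval_mon(m, x) for m in mons] for x in S])

n, d, k = 6, 2, 13
cube = list(product([0, 1], repeat=n))       # itertools.product yields {0,1}^n in lexicographic order
M_k = cube[:k]                               # the k lexicographically first points (a down-closed set)
target = sum(1 for x in M_k if sum(x) <= d)  # the number of its points of Hamming weight <= d
assert hilb(M_k, d, n) == target             # the lexicographically first set attains this value

rng = random.Random(1)
for _ in range(200):                         # random sets of size k never attain a smaller value
    S = rng.sample(cube, k)
    assert hilb(S, d, n) >= target
print("minimum Hilbert function value over sets of size", k, "is", target)

The assertions reflect the q=2 case of the results above and of the theorems proved later in this section; the specific values n=6, d=2, k=13 are arbitrary small choices.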
The next lemma states that every down-closed subset T⊆ℳ can be realized as the set of standard monomials of the set σ_A(T)⊆ A. Let T be a down-closed subset of ℳ. Then SM(σ_A(T))=T. Let I be the ideal of [X_1,…,X_n] generated by the set of polynomials {∏_i=1^n∏_j=1^_X_i(m) (X_i-a_i,j): m∈ℳ∖ T }∪{∏_j=1^r_1 (X_1-a_1,j),…, ∏_j=1^r_n (X_n-a_n,j)} . Let S=V(I), so that I⊆ I(S). As ∏_j=1^r_1 (X_1-a_1,j),…, ∏_j=1^r_n (X_n-a_n,j)∈ I, we have S⊆ V(∏_j=1^r_1 (X_1-a_1,j),…, ∏_j=1^r_n (X_n-a_n,j))=A . Next, we show that (S)⊆ T. Consider m∈(S). Suppose _X_i(m)≥ r_i for some i∈ [n]. Then m is the leading monomial of (m/X_i^r_i)·∏_j=1^r_i (X_i-a_i,j), and the latter is in I(A)⊆ I(S). This contradicts the assumption that m∈(S)=(I(S)). Therefore, in the following we assume that for all i∈ [n], _X_i(m)<r_i , or equivalently, m∈ℳ. Now suppose m∉T. Then by the definition of I, the polynomial ∏_i=1^n∏_j=1^_X_i(m) (X_i-a_i,j) is in I⊆ I(S), and its leading monomial is m. This again contradicts the assumption that m∈(S)=(I(S)). This shows that every m∈(S) also belongs to T. Therefore, (S)⊆ T. Consider arbitrary m_0=∏_i=1^n X_i^e_i∈ T and let a=σ_A(m_0)=(a_1, e_1+1,…, a_n, e_n+1). Consider arbitrary m∈ℳ∖ T. As T is down-closed, there exists k∈ [n] such that _X_k(m)>_X_k(m_0)=e_k. The polynomial ∏_i=1^n∏_j=1^_X_i(m) (X_i-a_i,j) then contains the factor X_k-a_k, e_k+1, so it vanishes at a. As m∈ℳ∖ T is arbitrary and ∏_j=1^r_1 (X_1-a_1,j),…, ∏_j=1^r_n (X_n-a_n,j) also vanish at a, this shows a∈ V(I)=S by the definition of I. As m_0∈ T is arbitrary, we see that σ_A(T)⊆ S. It follows that |T|=|σ_A(T)|≤ |S|=|SM(S)| where the first equality follows from the injectivity of σ_A and the last equality holds by <ref> (a). As (S)⊆ T, we must have SM(S)=T and |σ_A(T)|=|S|. The latter equality together with σ_A(T)⊆ S yields σ_A(T)=S. So SM(σ_A(T))=SM(S)=T. Let k,d∈ℕ such that k≤ |A|. Then min_S⊆ A: |S|=k_S(d, )=min_ T⊆ℳ: |T|=k |{m∈ T: (m)≤ d}| . Let S⊆ A be a set of size k such that _S(d,) achieves the minimum. Let T=(S), which is down-closed. Moreover, as S⊆ A, we have T=(S)⊆(A)=ℳ. Note that |T|=|(S)|=|S|=k by <ref> (a). And |{m∈ T: (m)≤ d}|=|_≤ d(S)|=_S(d,) by <ref> (b). So LHS ≥ RHS. Conversely, let T be a down-closed subset of ℳ of size k such that |{m∈ T: (m)≤ d}| achieves the minimum. Let S=σ_A(T). Then SM(S)=T by <ref>. We have |S|=|(S)|=|T|=k by <ref> (a) and _S(d,)=|_≤ d(S)|=|{m∈ T: (m)≤ d}| by <ref> (b). So LHS ≤ RHS. This finishes the proof of the lemma. Let F=∏_i=1^n {0,1,…,r_i-1}. Let ϕ: ℳ→ F be the bijection ϕ: ∏_i=1^n X_i^e_i↦ (e_1,…,e_n). <ref> can now be reformulated as follows. Let k,d∈ℕ such that k≤ |A|. Then min_S⊆ A: |S|=k_S(d,)=ℋ_F(d,k) . Recall that by <ref>, ℋ_F(d,k)=min_ S⊆ F: |S|=k |S_≤ d| . Identify ℳ with F via the bijection ϕ. The claim follows from <ref> by noting that T⊆ℳ is down-closed iff ϕ(T)⊆ F is down-closed, and that for m∈ℳ, (m)≤ d iff ϕ(m)∈ F_≤ d. For the special case of a finite field =_q, and r_1=…=r_n=q-1, we have A=_q^n, and the right-hand side of <ref> becomes ℋ_q^n(d,k) from <ref>. This leads us to the following corollary. For every n,k,d∈ where k≤ q^n, a prime power q, and every set S⊆_q^n of size |S|=k, we have that _S(d,_q)≥ℋ_q^n(d,k) . Finally, we state the following lemma, which will be used in <ref>. Its proof reuses ideas from the previous proofs in this section. Let n,d∈. Let σ_A: ℳ→ A and ϕ: ℳ→ F be the bijections (<ref>) and (<ref>) respectively. Let S⊆ A such that T:=σ_A^-1(S)⊆ℳ is down-closed. Let T'=ϕ(T)⊆ F. Then _S(d,)=T'_≤ d. By definition, we have σ_A(T)=S. 
So SM(S)=SM(σ_A(T))=T by <ref>. Then we have _S(d,)=|_≤ d(S)|=|{m∈(S): (m)≤ d}|=|{m∈ T: (m)≤ d}|=|T'_≤ d|. where the first inequality holds by <ref> (b), the third one holds by the fact that (S)=T, and the last one holds since ϕ maps {m∈ T: (m)≤ d} bijectively to T'_≤ d. § NUMBER OF POINTS WITH LOW HAMMING WEIGHT IN DOWN-CLOSED SETS In this section, we will find the exact values of all ℋ_q^n(d,k) which, by <ref>, will give us tight lower bounds on the Hilbert function of sets of size k. For every n,k,q where k≤ q^n, we define M^n_q(k) as the set of the first k elements of {0,…,q-1}^n in lexicographic order. The main result of this section is the following theorem. For every n,k,d, q∈ where k≤ q^n, ℋ^n_q(d,k)= |M^n_q(k)_≤ d| . Combining <ref> and <ref>, we obtain the following bounds on the Hilbert function. For every prime power q, and n,k,d∈ where k≤ q^n, we have min_S⊆_q^n: |S|=k_S(d,_q)= |M^n_q(k)_≤ d| . In particular, setting q=2, for every n,k,d∈ where k≤2^n, and every S⊆_2^n of size |S|=k, _S(d,_2)≥log(k)≤ d . We will use the following notation: For t∈{0,1,…,n}, define 𝒟_q^n(t) to be the set of x∈{0,…,q-1}^n whose first n-t coordinates are zero. Note that for every q,k, and n, 𝒟_q^n(⌊log_q k⌋) ⊆M^n_q(k) ⊆𝒟_q^n(⌈log_q k⌉) . When n and q are clear from the context, we omit the superscript n and the subscript q from M^n_q(k), 𝒟^n_q(t), and ℋ^n_q(d,k). §.§ The Boolean Case, q=2 For a set S⊆{0,1}^n, let min(S) and max(S) be respectively the smallest and the largest strings in S in lexicographic order. We say a set S⊆{0,1}^n is a contiguous k-set if |S|=k and S consists of all x such that min(S)≼ x ≼max(S). We first show that M(k) has the largest number of low Hamming weight strings among all contiguous k-sets. Let n,k,d ∈ be integers such that k ≤ 2^n. Let S^k ⊆{0,1}^n be a contiguous k-set. Then |M(k)_≤ d|≥ |S^k_≤ d|. We prove this lemma by induction on k and max(S^k). The base cases of k≤ 1 hold trivially, as in this cases |M(k)_≤ d|=k. For each k, the base case of minimal max(S^k) holds trivially as in this case S^k=M(k) and |S^k_≤ d|=|M(k)_≤ d|. Assume for the induction hypothesis that the above statement is true for smaller contiguous sets (i.e. |M(t)_≤ d| ≥ |S^t_≤ d| for all contiguous sets S^t of size t ≤ k-1) as well as for contiguous sets A^k of size k having max(A^k) ≺max(S^k). Let m be the smallest integer such that S^k ⊆𝒟(m). If d ≥ m, then |S^k_≤ d|= k. Also, since S^k is a contiguous set such that S^k ⊆𝒟(m), we have M(k) ⊆𝒟(m), implying |M(k)_≤ d|=|M(k)|=k. Thus for the remainder of the proof we assume d<m. If T=S^k∩M(k)≠∅, then the claim follows by applying the induction hypothesis to the contiguous (k-|T|)-set S^k-T. Thus, we may assume from now that S^k∩M(k)=∅. Case 1: Assume x_n-m+1=1 for all x ∈ S^k. Define the contiguous k-set B^k= {x - e_n-m+1: x ∈ S^k} , where e_i denotes the element in {0,1}^n whose ith coordinate is one, and all other coordinates are zeros. Since max(B^k) ≺max(S^k), by the induction hypothesis, |M(k)_≤ d| ≥ |B^k_≤ d|. Also, |B^k_≤ d| ≥ |S^k_≤ d| by definition, thereby implying |M(k)_≤ d| ≥ |S^k_≤ d|. Case 2: Let k^*>0 be the number of elements x in S^k with x_n-m+1=0. Define S^k^*= {x ∈ S^k: x_n-m+1=0} and S^k-k^*= S^k\ S^k^* . Note that in this case, we are guaranteed that S^k-k* and S^k^* are both non-empty contiguous sets. Since k-k^*<k and S^k-k^* is a contiguous (k-k^*)-set, the induction hypothesis gives us |M(k-k^*)_≤ d| ≥ |S^k-k^*_≤ d| . It suffices to show that |T_≤ d| ≥ |S^k^*_≤ d| where T= M(k)\M(k-k^*). 
For U ⊆{0,1}^n and r ≤ n, define the set γ(U,r)={0^n-r1^r-x: x ∈ U}, where 0^n-r1^r-x denotes coordinate-wise subtraction. Since max(S^k^*)=0^n-m+11^m-1, S^k^* consists of the k^* lexicographically largest elements ≼ 0^n-m+11^m-1. Consequently, γ(S^k^*,m-1)=M(k^*). Note that γ(T,m-1) is a contiguous k^*-set, hence by the induction hypothesis, we get |γ(S^k^*, m-1)_≤ m-1-d| ≥ |γ(T, m-1)_≤ m-1-d| , where we used the fact that d≤ m-1. This implies |T_≤ d| ≥ |S^k^*_≤ d|, as |S^k^*_≤ d|=k^*-|γ(S^k^*, m-1)_≤ m-1-d| and |T_≤ d| = k^*- |γ(T, m-1)_≤ m-1-d|, where we used the fact that max(T)≼ 0^n-m+11^m-1 as M(k)∩ S^k =∅. We now use <ref> to prove that if a contiguous k-set S^k that does not contain any of the first k strings in lexicographic order, then the result of <ref> |S^k_≤ d|≤ |M(k)_≤d| can be strengthened to |S^k_≤ d|≤ |M(k)_≤d-1|. Let n,k,d ∈ be integers such that k ≤ 2^n. Let S^k ⊆{0,1}^n be a contiguous k-set. If S^k∩M(k)=∅, then |M(k)_≤d-1|≥ |S^k_≤ d|. We prove this lemma by induction on k. The base cases of k≤ 1 hold trivially. Suppose the statement is true for contiguous (≤ k-1)-sets. Let S^k be a contiguous k-set and let m be the smallest integer such that S^k ⊆𝒟(m). Case 1: If x_n-m+1=1 for all x ∈ S^k, consider B^k={x - e_n-m+1: x ∈ S^k}. <ref> shows |M(k)_≤ d-1| ≥ |B^k_≤ d-1|= |S^k_≤ d|. Case 2: Define the contiguous k^*-set S^k^*= {x ∈ S^k: x_n-m+1=0}. We denote S^k-k^*=S^k \ S^k^* and T=M(k) \M(k-k^*). * If d ≥ m-1, the claim is trivial since |M(k)_≤ d-1|=|M(k)|=k, as M(k)⊆𝒟(m-1) due to our assumption of S^k∩M(k)=∅. * If d ≤ m-2, using the analysis in Case 1, we get |M(k-k^*)_≤ d-1| ≥ |S^k-k^*_≤ d|. It now suffices to show that |T_≤ d-1| ≥ |S^k^*_≤ d|, where T= M(k)\M(k-k^*). Analogous to the argument made in Case 2 in the proof of <ref>, one can show that M(k^*)= γ(S^k^*, m-1). Since S^k^*∩(M(k)\M(k^*)) = ∅, we also have γ(S^k^*,m-1) ∩γ(M(k)\M(k^*),m-1)= ∅ and by the induction hypothesis, we get |M(k^*)_≤ m-d-1| ≥ |γ(T,m-1)_≤ m-d|. Now, note that |S^k^*_≤ d|= k^*-|M(k^*)_≤ m-d-1|, and since T⊆𝒟(m-1), we have |T_≤ d-1|= k^*-|γ(T,m-1)_≤ m-d|. Combining these, we get |T_≤ d-1| ≥ |S^k^*_≤ d|, as desired. We are finally ready to prove the Boolean case of <ref>. Let S⊆{0,1}^n be a down-closed set of size k. We prove this theorem by a simultaneous induction on k,d≥ 0. For the base cases, we consider pairs (k,d) such that d=0 or k≤ 2^d. The case of d=0 is trivial. For the case where k≤ 2^d, a down-closed set S of size k cannot have strings of Hamming weight > d, thereby showing |S_≤ d|=k. Also, by construction, M(k) is a down-closed set of size k, implying ℋ(d,k)=|M(k)_≤ d|=k in this case. Given d≥ 1 and k> 2^d, assume that the theorem is true for all (k',d') such that either k'<k, or k'=k and d' <d. Suppose S is a down-closed set of size k and let m be the smallest integer such that S ⊆𝒟(m). Define S^0 {x∈ S : x_n-m+1=0} , S^1 {x-e_n-m+1 : x∈ S and x_n-m+1=1} . Since S is down-closed, we have S^1 ⊆ S^0. Moreover, |S_≤ d| = |S^0_≤ d| + |S^1_≤ d-1| . Applying the induction hypothesis for k'=|S^0|<k and d, we get |M(|S^0|)_≤ d| ≤ |S^0_≤ d|. Let T=M(k)\M(|S^0|). Since |S^1| ≤ |S^0|, we have M(|S^1|) ∩ T= ∅, and we may apply <ref> to get |T_≤ d|≤ |M(|S^1|)_≤ d-1|. Now applying the induction hypothesis for k'=|S^1| and d'=d-1, we get |M(|S^1|)_≤ d-1| ≤ |S^1_≤ d-1|. Combining these observations, we get |M(k)_≤ d| =|M(|S^0|)_≤ d|+|T_≤ d| ≤ |S^0_≤ d|+|M(|S^1|)_≤ d-1| ≤ |S^0_≤ d|+|S^1_≤ d-1| =|S_≤ d|. 
This concludes the induction, and shows that for every k,d≥ 0, and down-closed set S of size k, |M(k)_≤ d| ≤ |S_≤ d|. §.§ The General Case of Finite Grids We prove <ref> in this subsection. In fact, we prove the theorem in a more general setting, described as follows. Let F=∏_i=1^n {0,1,…,r_i-1} where r_1≤ r_2≤…≤ r_n. Let d∈ℕ. We introduce the following notations: For S⊆ F, define ∇(S):={a∈ F: b≤_P a for some b∈ S}, i.e., ∇(S) is the up-closure of S. For k∈{0,…,|F|}, denote by M(k) the set of the smallest k elements of F in lexicographic order. And for r∈{0,…,|F_≤ d|}, denote by L_≤ d(r) the set of the largest r elements of F_≤ d in lexicographic order. The main result of this subsection is the following generalization of <ref>. For every k∈ℕ such that k≤ |F|, ℋ_F(d,k)=|M(k)_≤ d| . We derive <ref> from a combinatorial result of Beelen and Datta <cit.>, which generalizes the earlier work of Wei <cit.> and Heijnen–Pellikaan <cit.>. Let S⊆ F_≤ d and r=|S|. Then |∇(L_≤ d(r))|≤ |∇(S)|.[In <cit.>, L_≤ d(r) is denoted by M(r), while we use M(r) to denote the set of the smallest r elements of F in lexicographic order.] Define Δ(S):=F∖∇(S) for S⊆ F. The next lemma gives a characterization of Δ(S). Let T⊆ F_≤ d be down-closed and S=F_≤ d∖ T. Then Δ(S) is the unique maximal set with respect to inclusion among all down-closed subsets U of F satisfying U_≤ d=T. Suppose ∇(S)∩ T contains an element a. Then b≤_P a for some b∈ S. As T is down-closed and a∈ T, we have b∈ T and hence b∈ S∩ T, contradicting the fact that S=F_≤ d∖ T. So ∇(S)∩ T=∅. Therefore, T⊆ F_≤ d∖∇(S). On the other hand, we have F_≤ d∖∇(S)⊆ F_≤ d∖ S⊆ T. So T=F_≤ d∖∇(S). By definition, ∇(S) is up-closed. So Δ(S)=F∖∇(S) is down-closed. And Δ(S)_≤ d=F_≤ d∖∇(S)=T by definition. Finally, consider an arbitrary down-closed set U⊆ F satisfying U_≤ d=T. Then S∩ U=∅. Applying the first part of the proof to U in place of T then shows that ∇(S)∩ U=∅. So U⊆ F∖∇(S)=Δ(S). This proves the unique maximality of Δ(S) with respect to inclusion. Let r∈{0,…,|F_≤ d|} and k=|Δ(L_≤ d(r))|. Then Δ(L_≤ d(r))=M(k). If r=0, then Δ(L_≤ d(r))=Δ(∅)=F∖∇(∅)=F, and the lemma trivially holds. So assume r>0. Let x=(x_1,…,x_n) be the smallest element in L_≤ d(r). Let U be the subset of elements in F smaller than x in lexicographic order. Then U∩∇(L_≤ d(r))=∅ and hence U⊆Δ(L_≤ d(r)). It suffices to show that U=Δ(L_≤ d(r)). Consider arbitrary y=(y_1,…,y_n)∈ F∖ U. Then y is at least x in lexicographic order. We claim that y∈∇(L_≤ d(r)). If y=x, then it is in L_≤ d(r)⊆∇(L_≤ d(r)), so the claim holds in this case. Now suppose y is greater than x in lexicographic order. Let i∈ [n] be the smallest integer such that y_i≠ x_i. We have y_i>x_i. Let x'=(x_1,…,x_i-1,x_i+1,0,…,0). Suppose x_i+1,…,x_n are not all zero. Then the generalized Hamming weight of x' is bounded by that of x. So x'∈L_≤ d(r). As x'≤_P y, we have y∈∇(L_≤ d(r)). Now suppose x_i+1=…=x_n=0. Then x≤_P y. Again, this implies y∈∇(L_≤ d(r)). So the claim y∈∇(L_≤ d(r)) holds in all cases. Therefore, y∉Δ(L_≤ d(r)). As y∈ F∖ U is arbitrary, this shows U⊇Δ(L_≤ d(r)) and hence U=Δ(L_≤ d(r)). Now we are ready to prove <ref>. For k∈{0,1,…, |F|}, define f(k)=|M(k)_≤ d|. Then f(0)=0, f(|F|)=|F_≤ d|, and f(k)≤ f(k+1)≤ f(k)+1 for k∈{0,…,|F|-1}. For s∈{0,1,…,|F_≤ d|}, let g(s) be the largest integer in {0,1,…, |F|} such that f(g(s))=s. Note that by (<ref>), g(·) is a well-defined, strictly monotonically increasing function and g(|F_≤ d|)=|F|. Fix k≤ |F|. 
By the definition of ℋ_F(d,k), we want to show that the smallest possible value of |T_≤ d| over all down-closed sets T⊆ F of size k is |M(k)_≤ d|. Obviously, this value is attained by choosing T=M(k). Now assume to the contrary that there exists a down-closed set T⊆ F of size k such that |T_≤ d|<|M(k)_≤ d|, i.e., |T_≤ d|<f(k). Let s be the smallest integer in {0,1,…,|F_≤ d|} such that g(s)≥ k. Such s exists since g(|F_≤ d|)=|F|. Then |T_≤ d|<f(k)≤ f(g(s))=s where the second step uses the monotonicity of f(·). We must have f(k)>s-1, since otherwise we would have s≥ 1 and f(k)=s-1, which implies g(s-1)≥ k by the definition of g(·). But this contradicts the choice of s. Therefore, f(k)=s. Let S=F_≤ d∖ T. Let s'=|T_≤ d|<s and r=|S|=|F_≤ d|-s'. Note that T_≤ d is down-closed since T and F_≤ d are. Applying <ref> to T_≤ d shows that Δ(S) is the largest down-closed subset U of F satisfying U_≤ d=T_≤ d. As T is another set satisfying this property, we have |Δ(S)|≥ |T|=k . By <ref>, |∇(L_≤ d(r))|≤ |∇(S)|. So |Δ(L_≤ d(r))|≥ |Δ(S)|. Combining this with (<ref>) gives Δ(L_≤ d(r))≥ k . Let k'=|Δ(L_≤ d(r))|≥ k. By <ref>, Δ(L_≤ d(r))=M(k'). Then f(k')=|M(k')_≤ d|=|Δ(L_≤ d(r))_≤ d|. And Δ(L_≤ d(r))_≤ d=F_≤ d∖L_≤ d(r) by <ref>. So f(k')=|F_≤ d∖L_≤ d(r)|=|F_≤ d|-r=s'. In summary, we have s'<s, k'≥ k, f(k)=s, and f(k')=s', contradicting the monotonicity of f(·) as stated by (<ref>). § A TIGHT BOUND ON THE SIZE OF DEGREE-D CLOSURES OF SETS For n,d,δ∈, denote by N(n,d,δ) the number of monomials X_1^e_1⋯ X_n^e_n with e_1,…,e_n≤δ and e_1+…+e_n≤ d. For example, N(n,d,1)=n≤ d and N(n,d,δ)=n+dd for d≤δ. __q^n(d, _q)=N(n,d,q-1). By multivariate Lagrange interpolation, every function on _q^n can be uniquely written as a linear combination of X_1^e_1⋯ X_n^e_n|__q^n with e_1,…,e_n≤ q-1 over _q. So the set {X_1^e_1⋯ X_n^e_n|__q^n: e_1,…,e_n≤ q-1, e_1+…+e_n≤ d} is a basis of Γ__q^n(d) of size N(n,d,q-1). In particular, <ref>, which was proved by Nie and Wang <cit.>, can be restated as |_d(T)| ≤q^n/__q^n(d, _q)· |T| = q^n/N(n,d,q-1)· |T| . We now give the following tight bound on the size of the degree-d closure of a set T⊆_q^n, improving (<ref>). Let n,d,m∈. Let T⊆_q^n be a set of size m. Then |_d(T)|≤max_0≤ k≤ q^n: |M^n_q(k)_≤ d|≤ m k = max_0≤ k≤ q^n: |M^n_q(k)_≤ d|=m k if m≤ N(n,d,q-1), q^n otherwise. Note that |M^n_q(q^n)_≤ d|=N(n,d,q-1) and that |M^n_q(k-1)_≤ d|≤ |M^n_q(k)_≤ d|≤ |M^n_q(k-1)_≤ d|+1 for 1≤ k≤ q^n. It follows that max_0≤ k≤ q^n:|M^n_q(k)_≤ d|≤ m k = max_0≤ k≤ q^n: |M^n_q(k)_≤ d|=m k if m≤ N(n,d,q-1), q^n otherwise. This is because if |M^n_q(k)_≤ d|<m and k<q^n, we may always increase k by one, which increases |M^n_q(k)_≤ d| by at most one. Let T⊆_q^n be a set of size m. Let U=_d(T) and k_U=|U|. We want to prove k_U≤max_0≤ k≤ q^n:|M^n_q(k)_≤ d|≤ m k. Assume to the contrary that this is not true. Then |M^n_q(k_U)_≤ d|>m. So we have _U(d)≥ |M^n_q(k_U)_≤ d|>m=|T|≥_d(T) , where the first inequality holds by <ref>. On the other hand, the fact that U=_d(T) implies that _U(d)=_T(d), contradicting (<ref>). So |_d(T)|= k_U≤max_0≤ k≤ q^n:|M^n_q(k)_≤ d|≤ m k. The next theorem states that the bound in <ref> is tight and explicitly constructs sets that meet this bound. Let σ_A: ℳ→ A and ϕ: ℳ→ F be the bijections (<ref>) and (<ref>) respectively, where A=_q^n, F={0,1,…,q-1}^n, and ℳ={∏_i=1^n X_i^e_i: 0≤ e_1,…,e_n≤ q-1}. Let m be any integer such that 0≤ m≤ q^n. Choose the maximum k≤ q^n such that |M^n_q(k)_≤ d|≤ m. Let T_0=(σ_A∘ϕ^-1)(M_q^n(k)_≤ d)⊆ A=_q^n. If |T_0|≥ m, let T=T_0. 
Otherwise, let T be an arbitrary set obtained by adding m-|T_0| elements from _q^n∖ T_0 to T_0. Then T is a set of size m that attains the equality in (<ref>). Let S=(σ_A∘ϕ^-1)(M_q^n(k))⊇ T_0. Then ϕ(σ_A^-1(S))=M_q^n(k). By the definitions of M_q^n(k) and ϕ, the set σ_A^-1(S)=ϕ^-1(M_q^n(k))⊆ℳ is down-closed. So by <ref>, we have _S(d)=|ϕ(σ_A^-1(S))_≤ d|=|M_q^n(k)_≤ d| . Similarly, by the definitions of M_q^n(k)_≤ d and ϕ, the set σ_A^-1(T_0)=ϕ^-1(M_q^n(k)_≤ d)⊆ℳ is down-closed. So by <ref>, we have _T_0(d)=|ϕ(σ_A^-1(T_0))_≤ d|=|(M_q^n(k)_≤ d)_≤ d|=|M_q^n(k)_≤ d| . It follows that _S(d)=_T_0(d). So we have S⊆_d(T_0). Therefore, |_d(T)|≥ |_d(T_0)|≥ |S|=|M_q^n(k)|=k= max_0≤ k'≤ q^n: |M^n_q(k')_≤ d|≤ m k' , where the last equality holds by the choice of k. Suppose m>N(n,d,q-1). Then k=q^n. So |T_0|=|M_q^n(k)_≤ d|=N(n,d,q-1)<m. Then |T|=m by definition. As |_d(T)|≥ k=q^n, we must have |_d(T)|=q^n. So T attains the equality in (<ref>). Now suppose m≤ N(n,d,q-1). Then |T_0|=|M^n_q(k)_≤ d|=m by (<ref>) and the maximality of k. So T=T_0 and |T|=m. Combining (<ref>) and (<ref>) shows that (<ref>) holds with equality. § LOW-DEGREE DISPERSERS In this section, we will show how to use <ref> to conclude the existence of low-degree dispersers for various families of sources. In <ref>, we will use <ref> to show that for every family of at most 2^O(k^d) sources of min-entropy k, there exists a degree-d disperser. In particular, this will imply dispersers for local sources and bounded-depth decision forest sources. In <ref>, we will extend this result to large families of sources, including polynomial and circuit sources. §.§ Dispersers for Small Families of Sources In <ref>, we use the bound of <ref> on the values of Hilbert functions to bound the probability that a random polynomial takes a fixed value on an arbitrary subset of _2^n. Let n,d≥1, S⊆_2^n be an arbitrary nonempty set, and f:S→_2 be a function. Then, _p∈_u 2n,d[p|_S ≡ f] ≤ 2^-_S(d,_2)≤ 2^- ⌊log_2 |S|⌋≤ d . Given a function f, let V_f be the subset of 2n,d consisting of polynomials p such that p|_S≡ f. Note that V_0 is a subspace of 2n,d, and for every f:S→_2, V_f is either empty or is a coset of the subspace V_0. Thus by definition, for every f, _p∈_u 2n,d[p|_S ≡ f]≤|V_0|/| 2n,d|=2^-_S(d,_2) . The second inequality in <ref> follows from <ref>. We will now use <ref> to prove the existence of low-degree dispersers for every small family of sources. Let n,d,k≥1, and 𝒳 be a family of distributions of min-entropy ≥ k over {0,1}^n. Then a uniformly random polynomial p∈2n,d is a disperser for 𝒳 with probability at least 1-|𝒳|· 2^1-k≤ d . Let 𝐗 be a distribution from 𝒳. Since (𝐗)≥ k, we have that |support(𝐗)|≥ 2^k. By <ref>, _p∈_u 2n,d[p|_support(𝐗) is constant] ≤ 2^1-k≤ d . The corollary follows by applying the union bound over all |𝒳| sources in 𝒳. We will demonstrate two immediate applications of <ref> for the families of local and decision forests sources. Let 1≤ℓ≤ d≤ n be integers. There exists p∈2n,d that is a disperser * for the family of ℓ-local sources on {0,1}^n with min-entropy k> d(2^ℓ n + 2ℓ n log n)^1/d. * for the family of depth-ℓ decision forest sources on {0,1}^n with min-entropy k> d((ℓ+log n) 2^ℓ+1 n)^1/d. Let 𝒳_ℓ denote the family of ℓ-local sources on {0,1}^n. By <ref>, |𝒳_ℓ|≤ 2^2^ℓ n + 2ℓ nlog n . Thus, by <ref>, as long as k≤ d>2^ℓ n+2ℓ nlog n+1, there exists a degree-d disperser for ℓ-local sources of min-entropy k. For k> d(2^ℓ n + 2ℓ n log n)^1/d, we have that k≤ d≥kd+1 ≥ (k/d)^d+1 > 2^ℓ n+2ℓ nlog n+1 . 
For depth-ℓ decision forest sources, by <ref>, the number of such sources on {0,1}^n is at most 2^(ℓ+log n) 2^ℓ+1 n . By <ref>, as long as k≤ d>(ℓ+log n) 2^ℓ+1 n+1, there exists a degree-d disperser for depth-ℓ decision forest sources of min-entropy k. For k> d((ℓ+log n) 2^ℓ+1 n)^1/d, we have that k≤ d≥kd+1 ≥ (k/d)^d+1 > (ℓ+log n) 2^ℓ+1 n+1 . The recent result of <cit.> uses further properties of local sources to prove the existence of low-degree dispersers for local sources with min-entropy k≥ c ℓ^3 d· (nlog n)^1/d for a constant c>0. Noting that every depth-ℓ decision forest source is also a (2^ℓ-1)-local source, the disperser of <cit.> for local sources implies a result similar to the above. §.§ Dispersers for Polynomial and Circuit Sources In this section, we will extend the results of the previous section to prove the existence of low-degree dispersers for powerful families of sources including polynomial-size circuits and low-degree polynomial sources. Unlike the previous examples such as local sources, the sources considered here may non-trivially depend on an arbitrary number of inputs. For example, even a degree-1 (i.e. affine) source defined by an affine map f:_2^m →_2^n can depend on an arbitrary number m≫ n of input bits. We get around this by restricting the map f:{0,1}^m→{0,1}^n defining the source to a low-dimensional affine subspace. Specifically, we will use the input-reduction procedure from <cit.>, where it was used to prove that random (not necessarily bounded degree) maps extract from low-degree sources. Let m,n,k∈, k>1, and f_2^m→_2^n be a function. If (f(𝐔_m))≥ k, then there exists an affine map L_2^11k→_2^m such that (f(L(𝐔_11k)))≥ k-1 . Equipped with <ref>, we are ready to construct dispersers for low-degree sources. Let 1≤ℓ< d≤ n be integers. There exists p∈2n,d that is a disperser for the family of degree-ℓ sources on {0,1}^n with min-entropy k≥ (12^ℓ· d^d· n)^1/d-ℓ+1. In particular, for every ℓ∈, there is a degree-(ℓ+2) disperser for degree-ℓ sources on {0,1}^n with min-entropy Ω(√(n)). Suppose 𝐗=f(𝐔_m) is a source of min-entropy (𝐗)≥ k defined by a degree-ℓ map f:_2^m →_2^n. We first reduce the number of inputs of the polynomial f. Specifically, we construct another degree-ℓ polynomial g with r=11k inputs such that every disperser for g is also a disperser for f. We will then conclude the proof by showing that there exists a degree-d disperser for the class of sources defined by polynomials with r inputs. Let L_2^r→_2^m be the affine map from <ref>. Consider the new source 𝐘=f(L(U_r)) over {0,1}^n. By <ref>, (𝐘)≥ k-1, and the map f∘ L defining 𝐘 is also a degree-ℓ polynomial with only r input variables. Since Support(𝐘)⊆Support(𝐗), a disperser for 𝐘 is also a disperser for 𝐗. Therefore, it is now sufficient to prove that there exists a degree-d polynomial p that is a disperser for all degree-ℓ sources 𝐘 with (𝐘)≥ k-1 defined by polynomials with r inputs. By <ref>, the number of such sources is bounded from above by 2^n·r≤ℓ. Now, <ref> guarantees the existence of a degree-d disperser as long as k-1≤ d -1 > n·r≤ℓ = n·11k≤ℓ . For k≥ (12^ℓ· d^d· n)^1/d-ℓ+1, we have that k-1≤ d -1 > k-1d ≥((k-1)/d)^d ≥ (k-1)^ℓ· (k-1)^d-ℓ/d^d ≥ (k-1)^ℓ· (12^ℓ· d^d· n) /d^d =n(12(k-1))^ℓ ≥ n((11k)^ℓ+1) ≥ n ·11k≤ℓ , which finishes the proof of the theorem. Let ℓ≥1 and n≥ d≥ 2ℓ+2 be integers. There exists p∈2n,d that is a disperser for the family of n^ℓ-size circuit sources on {0,1}^n with min-entropy k≥ (30^2· d^d· n^2ℓ)^1/d-2+1. 
Let 𝐗=C(𝐔_m) be a source of min-entropy (𝐗)≥ k defined by an circuit, C:_2^m →_2^n of size n^ℓ. Similarly to the proof of <ref>, we first reduce the number of inputs of the circuit C. Let L_2^r→_2^m be the affine map from <ref> for r=11k. Consider the circuit source 𝐘=C(L(𝐔_r)). By <ref>, (𝐘)≥ k-1, and since Support(𝐘)⊆Support(𝐗), a disperser for 𝐘 is also a disperser for 𝐗. We will now show that there is an circuit C'_2^r→_2^n of (C')≤ rn^ℓ computing C'(y)=C(L(y)) for every y=(y_1,…,y_r)∈_2^r. The main difference between the circuits C and C' is that we need to replace each input x_i for i∈[m] of the circuit C by an affine form L_i(y_1,…,y_r) of the inputs of C' given by L. The naive way of implementing this modification would require m≫ n^ℓ additional gates which we cannot afford. Instead, we will simulate this modification by updating the input wires of each gate in C' accordingly (and introducing at most r additional gates for each gate in C). Every gate of C fed by inputs x_i_1,…,x_i_t will now be replaced by an gate in C' computing L_i_1(y_1,…,y_r)⊕…⊕ L_i_t(y_1,…,y_r). Note that this operation only modifies the wires in the circuit and does not introduce new gates. Every gate of C fed by inputs x_i_1,…,x_i_t will now be replaced by a gate computing of at most r linearly independent forms from L_i_1,…,L_i_t. This operation introduces at most r additional gates for each gate of the original circuit C. We handle gates in a similar fashion. Finally, the wires between non-input gates of the circuit C stay unchanged in the circuit C'. We constructed a circuit of size rn^ℓ with r inputs that defines the source 𝐘. It remains to show the existence of a degree-d polynomial p that is a disperser for all sources 𝐘 with (𝐘)≥ k-1 defined by rn^ℓ-size circuits with r inputs. By <ref>, the number of such sources is bounded from above by 2^4rn^ℓ(rn^ℓ+r) = 2^4r^2n^ℓ(n^ℓ+1) = 2^4(11k)^2 n^ℓ(n^ℓ+1)≤ 2^(27k)^2 n^2ℓ . Now, <ref> guarantees the existence of a degree-d disperser as long as k-1≤ d -1 > (27k)^2 n^2ℓ . For k≥ (30^2· d^d· n^2ℓ)^1/d-2+1, we have that k-1≤ d -1 > k-1d ≥((k-1)/d)^d ≥ (k-1)^2· (k-1)^d-2 / d^d ≥ (k-1)^2 · (30^2· d^d· n^2ℓ) / d^d ≥ (30(k-1))^2 n^2ℓ ≥ (27k)^2 n^2ℓ , which finishes the proof. <ref> construct low-degree dispersers for sources generated by constant-degree polynomials and polynomial-size circuits. These two classes of sources are incomparable. Indeed, computes (x_1,…,x_m) which is not a constant-degree polynomial, while constant-degree polynomials compute polynomials in m inputs which do not admit circuits of size polynomial in n. We remark that the techniques of <ref> can be used to conclude the same result for a class of sources that generalizes both and constant-degree polynomials. This is the class of polynomial-size circuits which extends with gates computing arbitrary polynomials in m inputs of a fixed constant degree. For ease of exposition, we present only the results for more natural sources in <ref>. § RANDOM LOW-DEGREE POLYNOMIALS EXTRACT FROM FIXED SOURCES In this section, we use our bounds on the values of Hilbert functions to prove the existence of a low-degree extractor for a fixed high min-entropy source. Specifically, in <ref> we show that for every source 𝐗 of high min-entropy, a random low-degree polynomial p has bias ≤, i.e., _x∈_u X[f(x)=1]∈ 1/2± with high probability. One special case of interest is the case of k-flat sources 𝐗 which are uniform distributions over sets of size 2^k. 
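As a quick numerical illustration of this statement (ours, not part of the paper; the parameters n=14, k=8, d=3 below are arbitrary small choices), the following Python sketch samples a random k-flat source, i.e., the uniform distribution over a uniformly random support of size 2^k, and reports the largest bias observed among a few uniformly random degree-d polynomials. For these parameters the observed bias should be small, roughly on the order of |S|^-1/2.

import random
from itertools import combinations

def random_degree_d_poly(n, d, rng):
    # a uniformly random F_2-polynomial of degree <= d in n variables, returned as an evaluator
    mons = [m for r in range(d + 1) for m in combinations(range(n), r)]
    coeffs = [rng.randrange(2) for _ in mons]
    def p(x):  # x is a tuple in {0,1}^n
        return sum(c for c, m in zip(coeffs, mons) if c and all(x[i] for i in m)) % 2
    return p

rng = random.Random(0)
n, k, d = 14, 8, 3
# a random k-flat source: uniform over a uniformly random support of size 2^k
support = [tuple((x >> i) & 1 for i in range(n)) for x in rng.sample(range(2 ** n), 2 ** k)]
worst = 0.0
for _ in range(20):
    p = random_degree_d_poly(n, d, rng)
    bias = abs(sum(p(x) for x in support) / len(support) - 0.5)
    worst = max(worst, bias)
print("largest bias of 20 random degree-3 polynomials on the flat source:", worst)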
In <ref>, we will use <ref> to prove the existence of low-degree extractors for various expressive families of sources. We start this section by using our bounds on the degree-d closure of sets in order to lower-bound the probability that a random somewhat large subset T of a set S has “full Hilbert dimension”, i.e., _T(d, _2)=|T|. We then use this to prove <ref> which states that for a large enough set S⊆{0,1}^n, a random subset T⊆ S of full Hilbert dimension will contain each element x∈ S with almost the same probability. Finally, we present a proof of <ref> which crucially relies on <ref>. Let 1≤ d≤ n, d≤ℓ, and S⊆{0,1}^n. Let T be a uniformly random subset of S of size ℓ≤ d. Then _T[_T(d, _2)=|T|=ℓ≤ d] ≥ 1-ℓ≤ d·2^ℓ/|S| . For a random subset T={a_1,a_2,…,a_ℓ≤ d} and j∈[ℓ≤ d], let T^j={a_1,…,a_j}. We prove that with high probability, _T^j+1(d, _2)=_T^j(d, _2)+1 holds for every j. Indeed, for every j<ℓ≤ d, we have that [_T^j+1(d, _2)=_T^j(d,_2)+1] =[a_j+1∉_d(T^j)] =1-|_d(T^j)|/|S| ≥ 1-2^ℓ/|S| , where the last inequality uses <ref>. The claim now follows by the union bound over all j. Let 1≤ d≤ n, d≤ℓ, and S⊆{0,1}^n. Let T be a uniformly random subset of S of size ℓ≤ d. Then for every x∈ S, (1-δ)·ℓ≤ d/|S|≤_T[x∈ T |_T(d,_2)=|T|] ≤1/(1-δ)·ℓ≤ d/|S| , where δ=ℓ≤ d·2^ℓ/|S|. By the Bayes rule, we have _T[x∈ T |_T(d,_2)=|T|] =[_T(d,_2)=|T| | x∈ T]·[x∈ T]/[_T(d,_2)=|T|] =[_T(d,_2)=|T| | x∈ T]/[_T(d,_2)=|T|]·ℓ≤ d/|S| . For the upper bound, by <ref>, [_T(d,_2)=|T|]≥ 1- ℓ≤ d·2^ℓ/|S|=1-δ. Thus, [x∈ T | _T(d,_2)=|T|]≤1/(1-δ)·ℓ≤ d/|S| . For the lower bound, first include x in T, and then randomly pick the remaining ℓ≤ d-1 elements in T. Analogous to the analysis in <ref>, we have _T[_T(d,_2)=|T| | x∈ T]≥ 1- ℓ≤ d·2^ℓ/|S|=1-δ. Thus, _T[x∈ T | _T(d,_2)=|T|] ≥ (1-δ)·ℓ≤ d/|S| , which concludes the proof. Equipped with <ref>, we are ready to present the proof of <ref>. Let n,d,k≥1, and >0 be a real. Then for every distribution 𝐗 over {0,1}^n with (𝐗)≥ k, a uniformly random degree-d polynomial f is an -extractor for 𝐗, _x∼𝐗[f(x)=1] = 1/2± with probability at least 1-e^3n-^2ℓ≤ d/(Cn^2) where ℓ=k/2-log(32n/) and C=7·(32)^2. We will assume that ℓ k/2-log(32n/)>0 (and, in particular, that ≥2^-n) as otherwise the theorem statement holds trivially. Similarly, we will assume that 3n-^2ℓ≤ d/(Cn^2)<0, and, in particular, that 2^n+1· e^-(/(32n))^2 ·ℓ≤ d/7<1. Let 𝐗 be a distribution over {0,1}^n with (𝐗)≥ k, and let S=support(𝐗)⊆{0,1}^n. For x∈ S, let p_x=[𝐗=x], so that p_x≤ 2^-k and ∑_x∈ Sp_x = 1. For every i≥ k, define S_i{x∈ S : 2^-i-1< p_x≤ 2^-i} , and note that |S_i|≤ 2^i+1. We will show that with high probability a random degree-d polynomial f satisfies ∑_i≥ k|∑_x∈ S_i p_x f(x)-1/2∑_x∈ S_i p_x| ≤ . Given <ref>, we finish the proof of the theorem as follows. |_x∼ X[f(x)=1]-1/2| = |∑_x∈ S p_x f(x)-1/2| =|∑_x∈ S p_x f(x)-1/2∑_x∈ S p_x| ≤∑_i≥ k|∑_x∈ S_i p_x f(x)-1/2∑_x∈ S_i p_x| ≤ . It remains to prove <ref>, and we will do this by analyzing the terms for different values of i separately. Large i. Let S_L be the union of all S_i with i≥ 2n+1. Note that ∑_x∈ S_L p_x ≤ 2^n · 2^-2n-1≤/2 , where we used that ≥ 2^-n. Thus, in the rest of the proof we will only deal with k≤ i≤ 2n. Specifically, we will show that for each i in this interval, |∑_x∈ S_i p_x f(x)-1/2∑_x∈ S_i p_x|≤ 8' for '/32n . We will assume that ≤ 1/2 (as the theorem statement is trivial otherwise), and, thus, '≤ 1/64. Small S_i. If |S_i|≤ 2^2ℓ+1/', then ∑_x∈ S_ip_x ≤ 2^2ℓ+1-i/' ≤ 2^2ℓ+1-k/'≤' , where we used that i≥ k and ℓ= k/2-log(32n/)=k/2-log(1/'). 
Small i and large S_i. Let I be the set of i≤ 2n such that |S_i|> 2^2ℓ+1/'. Fix an i∈ I and let t=|S_i|. We independently and uniformly at random pick subsets T_1,…, T_t of S_i of size |T_j|=ℓ≤ d satisfying _T_j(d,_2)=|T_j|. For any x∈ S_i, let n_x denote the number of sets T_j containing x: n_x=|{j∈[t] x∈ T_j}|. Note that by <ref>, (1-δ)·ℓ≤ d≤[n_x] ≤1/(1-δ)·ℓ≤ d , where δ=ℓ≤ d·2^ℓ/|S_i|. Since |S_i|> 2^2ℓ+1/', we have that δ≤'/2≤ 1/2, and [n_x] = (1±')·ℓ≤ d . Applying the Chernoff bound for the concentration of n_x and the union bound over all t choices of x∈ S_i, we have that _T_1,…, T_t[ ∃ x∈ S_i, n_x∉ (1± 2')·ℓ≤ d]≤ 2 t · e^-'^2 ·ℓ≤ d/7≤ 2^n+1· e^-(/(32n))^2 ·ℓ≤ d/7 <1 . In particular, there exists a choice of T_1,…,T_t such that _T_j(d,_2)=|T_j| for all j∈[t], and each n_x=(1± 2')·ℓ≤ d. Let us fix this choice of sets T_1,…,T_t. For a fixed set T_j such that _T_j(d,_2)=|T_j|, a random degree-d polynomial f satisfies _f∈_u 2n,d[| ∑_x∈ T_j p_x f(x) - 1/2∑_x∈ T_j p_x |≥' ℓ≤ d/2^i]≤ 2e^-2'^2 ℓ≤ d . Indeed, _T_j(d,_2)=|T_j| implies that a random degree-d polynomial f induces a random map f|_T_j T_j→{0,1} (see, e.g., the proof of <ref>). <ref> now follows from the Hoeffding bound, as the random variables 2^i p_xf(x) are ℓ≤ d independent [0,1]-valued random variables with mean 2^i-1 p_x. Taking the union bound over all j∈[t], we have that _f∈_u 2n,d[∀ j∈[t], | ∑_x∈ T_j p_x f(x) - 1/2∑_x∈ T_j p_x |≤' ℓ≤ d/2^i]≥ 1-2te^-2'^2 ℓ≤ d . Therefore, using t≤ 2^n, with probability at least 1-2^n+1· e^-2'^2 ·ℓ≤ d , we simultaneously have that (i) for every x∈ S_i, n_x=(1± 2')·ℓ≤ d, and (ii) for every j∈[t], | ∑_x∈ T_j p_x f(x) - 1/2∑_x∈ T_j p_x |≤' ℓ≤ d/2^i . Conditioning on this good event, we get ∑_x∈ S_i p_x f(x) = ∑_x∈ S_in_x/n_x p_x f(x) = 1/(1± 2')·ℓ≤ d·∑_j∈ [t]∑_x∈ T_j p_x f(x) = 1/(1± 2')·ℓ≤ d·∑_j∈ [t]((1/2∑_x∈ T_j p_x) ±' ℓ≤ d/2^i) = 1/(1± 2')·ℓ≤ d·(1/2∑_x∈ S_in_xp_x)±' t/(1± 2') ·2^i = (1± 2')/(1± 2')·(1/2∑_x∈ S_in_xp_x)±2'/(1± 2') = (1/2∑_x∈ S_ip_x )± 8', where the penultimate equality uses t=|S_i|≤ 2^i+1. Finally, we apply the union bound over all i∈ I. Since |I|≤ 2n, combining the contributions from small S_i's and large i's, we conclude that | 1/2∑_x∈ S_ip_x - ∑_x∈ S_i p_x f(x)| ≤/2+ 8'·|I|≤ with probability at least 1-n· 2^n+2· e^-2'^2 ·ℓ≤ d≥ 1 - e^3n-^2 ·ℓ≤ d/(512n^2)≥ 1 - e^3n-^2 ·ℓ≤ d/(Cn^2) . § LOW-DEGREE EXTRACTORS In this section, we extend the results of <ref> to the setting of extractors. We start with the extractors version of <ref> in <ref>, where we show that low-degree polynomials extract from small families of sources. Then, in <ref>, we use <ref> to prove the existence of low-degree extractors for a number of families of sources. Finally, in <ref>, we prove the existence of low-degree extractors with multi-bit outputs. Let 𝒳 be a family of distributions of min-entropy k≥5logn over {0,1}^n for large enough n. Let 𝒴 be a family of distributions each of which is '-close to a convex combination of distributions from 𝒳. Then for every d≥6, a uniformly random polynomial p∈2n,d is an -extractor for 𝒴 with probability at least 1-|𝒳|· e^3n-30k^d/2/n^2 for =(2d/k^1/4)^d+'. Let δ=-'=(2d/k^1/4)^d. We first bound the probability that a random polynomial is a δ-extractor for a fixed source 𝐗∈𝒳, and then apply the union bound over all sources from 𝒳. Let ℓ=k/2-log(32n/δ)≥ k/3 for all large enough n. Let C=7·(32)^2. 
By <ref>, we have that the probability that a random degree-d polynomial is not a δ-extractor for a fixed source 𝐗∈𝒳 is at most e^3n-δ^2ℓ≤ d/(Cn^2) ≤ e^3n - (2d)^2d· k^-d/2·(ℓ/d)^d/(Cn^2) ≤ e^3n-30k^d/2/n^2 . Now the union bound over all sources from 𝒳 gives us that a random degree-d polynomial is a δ-extractor for every source in 𝒳 with probability at least 1-|𝒳|· e^3n-30k^d/2/n^2 . Finally, each polynomial that δ-extracts from 𝒳 is also an -extractor for all sources in 𝒴. We will use the following input-reduction result from <cit.>. Let m,n,k∈, k>1, and f_2^m→_2^n be a function. If (f(𝐔_m))≥ k, then there exist affine maps L_1,…,L_t_2^11k→_2^m such that the distribution f(𝐔_m) is 2^-k-close to a convex combination of distributions f(L_i(𝐔_11k)). Moreover, for each i∈[t], (f(L_i(𝐔_11k)))≥ k-1 . We are now ready to prove that low-degree polynomials extract from many sources of interest. For all ℓ,d≥1, and all large enough n, there exists p∈2n,d that is an -extractor for the following families of sources over {0,1}^n of min-entropy k≥5logn for =2(2d/k^1/4)^d. * ℓ-local sources for k≥ (2^ℓ n^3logn)^2/d. * depth-ℓ decision forest sources for k≥ (2^ℓ n^3(logn+ℓ))^2/d. * degree-ℓ sources for k≥(3^ℓ n)^6/d-2ℓ. * n^ℓ-size circuit sources for k≥ 3n^4(ℓ+1)/d-4. By <ref>, the number of ℓ-local sources over {0,1}^n is at most 2^2^ℓ n + 2ℓ nlog n, and the number of depth-ℓ decision forest sources is at most 2^(ℓ+log n)2^ℓ+1 n. Now <ref> implies the result for these classes of sources. Let 𝒴 be the family of degree-ℓ sources. We first apply <ref>, and obtain the family 𝒳 of degree-ℓ sources such that each source in 𝒴 is 2^-k-close to a convex combination of sources in 𝒳. Moreover, each source in 𝒳 is a degree-ℓ polynomial in 11k variables and has entropy ≥ k-1. By <ref>, the number of such sources is bounded from above by |𝒳|≤ 2^n·11k≤ℓ. Now, <ref> guarantees the existence of a degree-d '-extractor for 𝒴 for k≥(3^ℓ n)^6/d-2ℓ and '=(2d/k^1/4)^d+2^-k≤, where the last inequality uses d≤ k^1/4. When 𝒴 is the family of n^ℓ-size circuit sources, using <ref> and the argument from <ref>, we obtain the family 𝒳 of (11k n^ℓ)-size circuits with 11k inputs and entropy k-1. Moreover, each source from 𝒴 is 2^-k-close to a convex combination of sources from 𝒳. By <ref>, |𝒳|≤ 2^(27k)^2 n^2ℓ. Now <ref> guarantees the existence of a degree-d '-extractor for 𝒴 for k≥ 3n^4(ℓ+1)/d-4 and '=(2d/k^1/4)^d+2^-k≤. §.§ Extractors Outputting Multiple Bits In <ref>, we show how to extend our single-bit extractors for small families of sources to the multi-bit setting, which combined with input-reduction lemma, will extend all our single-bit extractors from <ref> to O(k)-bit extractors. Let 𝒳 be a family of distributions of min-entropy k≥5logn over {0,1}^n for large enough n. Let 𝒴 be a family of distributions each of which is '-close to a convex combination of distributions from 𝒳. Then for every d≥ 6 and t<k, let p_1,…, p_t ∈2n,d be independent and uniformly random polynomials. Then p=(p_1,…, p_t) is a t-extractor for 𝒴 with probability at least 1-|𝒳|· e^3n+t+1-30(k-2t)^d/2/n^2 for =(2d/k^1/4)^d+', assuming ≤ 1/4. Define 𝒳_i to be the family of sources resulting by conditioning sources 𝐗∈𝒳 on (p_1,…, p_i)= (b_1,…, b_i)∈_2^n for any (b_1,…, b_i)∈{0,1}^i, so that 𝒳_0=𝒳, and |𝒳_i|=2^i |𝒳|. Let E_i be the event that p_i is an -extractor for 𝒳_i, and let E_≤ i=E_1∧…∧ E_i. Note that, conditioned on E_≤ i, every source in 𝒳_i+1 consists of ≤ 2^i+1|𝒳| sources of min-entropy ≥ k-2i. This can be shown by an induction. 
The base case is true since 𝒳_1=𝒳, and 𝒳 has min-entropy ≥ k. For the inductive step, let 𝐗'∈𝒳_i+1 be obtained by conditioning 𝐗∈𝒳_i on p_i=b for some b∈{0,1}. Now by the Bayes rule, we have for every x in the support of 𝐗' [𝐗'=x] = [𝐗=x | p_i(X)=b] = [p_i(𝐗)=b|𝐗=x]·[𝐗=x]/[p_i(𝐗)=b]≤2^-(k-2(i-1))/1/2-ϵ≤ 2^-(k - 2i), as long as ≤1/4, which shows that 𝐗' has min-entropy at least k-2i as desired. Thus, by <ref>, [E_i+1| E_≤ i] ≥ 1-|𝒳_i+1|· e^3n-30(k-2t)^d/2/n^2≥ 1-2^i+1|𝒳|· e^3n-30(k-2t)^d/2/n^2 . Thus by the chain rule, we have [E_1∧⋯∧ E_t] = ∏_i=0^t-1[E_i+1| E_≤i] ≥∏_i=0^t-1(1-2^i+1|𝒳|· e^3n-30(k-2t)^d/2/n^2) ≥ 1- ∑_i=0^t-1(2^i+1|𝒳|· e^3n-30(k-2t)^d/2/n^2) ≥ 1- |𝒳|· e^t+1+3n-30(k-2t)^d/2/n^2 . Next, we analyze the error of the extractor conditioned on the above event E_≤ t. Let 𝐗∈𝒳 be a fixed source. Let (u_1,u_2,…, u_t) be uniformly distributed over {0,1}^t independent of 𝐗 and p_1,…, p_t, and define random variables D_0, …, D_t as follows. For every i, define D_i=(p_1(𝐗), p_2(𝐗),…, p_i(𝐗), u_i+1, …, u_t). Note that D_0 is the uniform distribution and D_t=p(𝐗) is the output of the random degree-d polynomial conditioned on E_≤ t. By the triangle inequality, we have Δ(D_0, D_t) ≤∑_i=0^t-1Δ(D_i, D_i+1). Thus it suffices to bound each Δ(D_i, D_i+1) by ϵ, which can be obtained by noting that conditioning 𝐗 on any value of f_1,…, f_i results in a source that belongs to 𝐗_i for which f_i+1 is an ϵ-extractor. Since Δ(D_i, D_i+1) is a convex combination of the bias of f_i+1 for such fixings, we obtain Δ(D_i, D_i+1)≤ϵ as desired. below is archive For input x∈ S, let random polynomials f_1,…,f_t output y_1,…,t_t, where y_i=f_i(x,y_1,…,y_i-1). Denote S_(b_1,…,b_i) as a set such that S_(b_1,…,b_i)⊆ S and f_j(x,y_1,…,y_j-1)=b_j for all x∈ S_(b_1,…,b_i) and j∈[i]. Denote B_i=(b_1,…,b_i). For simplicity, let α_i=ℓ≤ d2^ℓ-(k-2i)/2, α'_i=6α_i+ε+12α_iε. Let β'_i=12((k+1-2i)ln2+ln1/β)/α_i^2· e^-2ε^2 ℓ≤ d/3. For set S with size 2^k, with probability 1-∑_j=0^i-1β'_j, |S_B_i|≥ 2^k-2i It is proved by induction. Denote S_B_0=S. It is trivial for i=0. Conditioned on [|S_B_i|≥ 2^k-2i]≥ 1-∑_j=0^i-1β'_j, let S'=S_B_i, by <ref>, [|S'_b_i+1|≥ |S'|/4]≥ 1-β'_i. Then this claim is concluded by union bound. For any string b_1,…,b_t, with probability 1-2∑_i=1^tβ'_i-tβ, we have [(y_1,…,y_t)=(b_1,…,b_t)]∈ [∏_i=1^t(1/2-α'_i),∏_i=1^t(1/2+α'_i)]. Let Y_i=(y_1,…,y_i) and B_i=(b_1,…,b_i). For each f_i, where i∈[t], there exists a polynomial f'_i with degree at most id, such that f_i(x,Y_i-1)=f'_i(x). Conditioned on f_1,…,f_t has error less than α'_1,…,α_t', and based on <ref> and <ref>, with probability 1-2∑_i=1^tβ'_i-tβ, we have [ [Y_t=B_t] =[Y_t-1=B_t-1]·[y_t=b_t|Y_t-1=B_t-1]; =[Y_t-1=B_t-1]·[f_t(x,B_t-1)=b_t|x∈ S_B_t-1]; ∈[Y_t-1=B_t-1]·[1/2-α',1/2+α']; ∈ [∏_i=1^t(1/2-α'_i),∏_i=1^t(1/2+α'_i)] ] If for any string b_1,…,b_t, we have [(y_1,…,y_t)=(b_1,…,b_t)]∈ [∏_i=1^t(1/2-α'_i),∏_i=1^t(1/2+α'_i)], then SD(Y,U)≤ 2tα'_t. [ SD(Y,U) =1/2·∑_B_t∈{0,1}^t|[Y_t=B_t]-2^-t|; ≤1/2· ((1/2+α'_t)^t-1/2^t)· 2^t; =1/2·((1+2α'_t)^t-1); ≤ 1/2· (e^2tα'_t-1); ≤ 2tα'_t ] alpha
http://arxiv.org/abs/2405.10250v1
20240516165506
IntelliExplain: Enhancing Interactive Code Generation through Natural Language Explanations for Non-Professional Programmers
[ "Hao Yan", "Thomas D. Latoza", "Ziyu Yao" ]
cs.HC
[ "cs.HC" ]
: Enhancing Interactive Code Gen through NL Explanations for Non-Professional Programmers]: Enhancing Interactive Code Generation through Natural Language Explanations for Non-Professional Programmers George Mason University Fairfax VA USA 22030 hyan5@gmu.edu George Mason University Fairfax VA USA 22030 tlatoza@gmu.edu George Mason University Fairfax VA USA 22030 ziyuyao@gmu.edu Large language models (LLMs) have exhibited a strong promise in automatically generating executable code from natural language descriptions, particularly with interactive features that allow users to engage in the code-generation process by instructing the LLM with iterative feedback. However, existing interaction paradigms often assume that users have expert knowledge to debug source code and are not optimized for non-professional programmers' use. This raises challenges in making interactive code generation more accessible for individuals with varying levels of programming expertise. To tackle these challenges, we present , which offers a novel human-LLM interaction paradigm to enhance non-professional programmers' experience by enabling them to interact with source code via natural language explanations. Users interact with by providing natural language corrective feedback on errors they identify from the explanations. Feedback is used by the system to revise the code, until the user is satisfied with explanations by the system of the code. Our user study demonstrates that users with achieve a significantly higher success rate 11.6% and 25.3% better than with vanilla GPT-3.5, while also requiring 39.0% and 15.6% less time in Text-to-SQL and Python code generation tasks, respectively. <ccs2012> <concept> <concept_id>10003120.10003121.10003124.10010870</concept_id> <concept_desc>Human-centered computing Natural language interfaces</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10003120.10003123.10010860.10010858</concept_id> <concept_desc>Human-centered computing User interface design</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Human-centered computing Natural language interfaces [500]Human-centered computing User interface design < g r a p h i c s > enables non-professional programmers to write code in natural language without requiring direct interaction with code. The user starts with a question in natural language (NL), accompanied by relevant context (top). then generates source code and confirms its understanding of the question by presenting an NL explanation (in ) to the user. When this understanding is incorrect, the user can provide corrective feedback in NL and instruct the system for error correction. 20 February 2007 [revised]12 March 2009 [accepted]5 June 2009 [ Ziyu Yao ============ § INTRODUCTION The field of AI-powered code generation has witnessed a significant paradigm shift with the emergence of Large Language Models (LLMs) such as Codex <cit.>, GPT-3.5&4 <cit.>, Code Llama <cit.>, StarCoder <cit.>, and CodeT5 <cit.>. Unlike prior approaches that required training task-specific code generation models and often involved labor-intensive data collection and annotation efforts, LLMs can learn directly from a few shots of task demonstrations fed in their prompt context (called “few-shot, in-context learning”) <cit.>. 
Additionally, they seamlessly interpret and generate code based on contextual descriptions provided in natural language (NL), offering the potential for dramatically improving the efficiency and accessibility of code generation tasks. This proficiency is further amplified through their instruction-following features <cit.>, allowing users to actively participate in the decision-making process by providing guidance across multiple turns. As users engage in the multi-turn interaction with LLMs, LLMs can iteratively refine the generated code and ensure it aligns more closely with users' intentions and requirements. As such, LLMs have shown promise in mitigating the challenges associated with learning unfamiliar programming languages, reducing development time, and offering real-time assistance to programmers. However, how to leverage LLMs' interactive features in assisting non-professional programmers to write code remains a challenge. Non-professional programmers are individuals who have basic knowledge of computation (e.g. mathematical operation, linear algebra, etc.) but much less than a computer science major or professional engineer. While they may not have or have only limited introductory programming experience, they recognize the potential of programming to enhance productivity in their own work. Prior work has dominantly focused on the human-LLM interaction for experienced programmers. Barke et al. <cit.> categorized user interactions with GitHub Copilot <cit.> into acceleration or exploration modes based on user behaviors on how to seek help from Copilot. Vaithilingam et al. <cit.> systematically explored design principles of the user interface for inline code suggestions. However, their explorations in this interaction paradigm presuppose that the user possesses sufficient programming experience to comprehend the model-generated technical content and debug source code themselves. In addition to code auto-completion, an even more straightforward interaction paradigm is for users to conversationally request code solutions from LLMs. For example, a user can typically interact with ChatGPT <cit.> by posing programming questions and optionally providing input-output samples to specify requirements. When errors are identified or the generated code fails to meet the specified criteria, users often follow up with feedback prompting ChatGPT to refine the code solution. This process can run iteratively until the users obtain a desired solution. Despite its simplicity, the difficulty in accurately pinpointing and articulating errors in a generated code makes it challenging for users to provide meaningful corrective feedback. To address the problem, prior research has conducted extensive exploration on the formats of user feedback <cit.>, but there has not been a validated solution particularly for non-professional programmers. In this work, we present a novel and effective interaction system, dubbed , which is designed to assist non-professional programmers in writing and debugging code. Specifically, is built upon a novel human-LLM interaction paradigm, where the LLM explains its generated source code in plain language, prompts users to identify problems and provide NL feedback based on the explanation, and then refines the code solution based on the user feedback. As such, non-professional programmers can easily write code using NL, without needing professional knowledge about programming or directly interacting with the source code (Figure <ref>). 
The key insight of lies in the use of NL explanation which offers a more accessible version of the source code. It presents to users the logic and the reasoning process for the LLM to solve the given problem. One recent work relevant to us is that of Chen et al. <cit.>, which utilizes NL explanations generated by an LLM to “self debug” its code generation. However, their explanations often 1) tend to be overly lengthy, potentially causing disinterest among users, and 2) contain technical terminologies, which are not understandable by non-professional programmers. To address these shortcomings, we first design prompts that convert the model-generated code into a more concise, easily understandable, yet logically precise NL explanation for non-professional programmers. Subsequently, we also devise prompts for LLMs to effectively refine their code answer based on user feedback. To assess the effectiveness of , we conducted a user study involving 20 non-professional programmers. Participants were assigned two coding tasks: SQL programming, based on the Spider dataset <cit.>, and Python programming, based on the MBPP dataset <cit.>. We evaluate whether can assist these non-professional programmers in writing code correctly. Results from our user study indicate that participants using achieved significantly higher success rates in writing correct code compared to those relying on vanilla ChatGPT <cit.>. Participants who did not use faced challenges in effectively interacting with ChatGPT and spent more time on average attempting to complete the tasks. Even participants with no prior programming experience were able to write and debug code solely by relying on the designed NL explanation. To summarize: * We introduce , a programming-assistive system based on a novel interaction paradigm that incorporates NL explanation and feedback for interactive code generation. is particularly designed for non-professional programmers and can be applied to different LLMs across varying coding tasks. * We design a concise and straightforward NL explanation for code generation to aid non-professional programmers in understanding and debugging source code without directly interacting with the source code. * Our user study with 20 non-professional programmers demonstrates that can assist non-professional programmers, even those with no prior programming experience, in more effectively writing and debugging code. * We include an in-depth analysis of the user study results, particularly discussing the promise and challenges for future researchers pursuing the research of interactive code generation for non-professional programmers. The source code and data of our project will be released at <https://hyan5.github.io/IntelliExplain/>. § MOTIVATING EXAMPLE Amid a heated election season, the campaign headquarters of a determined candidate buzzed with activity. Emily, a strategist with no programming background, works hard to understand the election situation and plan their next steps. She recognizes the crucial importance of securing support from the area with the highest number of voters. She wants to retrieve the information from a public database. However, she does not have prior experience in writing SQL. Emily initially turns to the widely-used AI-powered code generation model, GPT-3.5 (ChatGPT). Unfortunately, despite her efforts, her interactions with GPT-3.5 did not yield the correct answers as the code returned nothing by executing it. 
Without the ability to understand the generated code, responses generated by GPT-3.5 contain too many technical terms, which makes it hard for her to find errors inside the code or confirm the correctness of the code. Even though she requested the model to explain its code, the explanations were long and not easily understandable by her. In the meanwhile, she notices and decides to use it to generate the SQL. When Emily opens , a login page is displayed to facilitate the storage and retrieval of personal usage history (Figure <ref> A). After successful login to , the user interface appears and contains two tabs for different code generation tasks (Figure <ref> B). Emily selects text-to-SQL as she wants to write an SQL query. She downloaded the SQL file from the official website and uploaded it to . She then entered the question “What is the area code in which the most voters voted?”. processes the SQL file and shows an overview of all tables and three sample records for each table (Figure <ref> C). , as shown in Figure <ref>, first predicts an initial code in the backend and converts it to an understandable explanation. After that, presents the explanation to Emily in the interface and requests validation on its understanding of the input question. Emily checked the explanation and found she wanted the “area code” but in the explanation, understands it as “state”. She then provided feedback “Return an area code, not the state.” In the second round, tried to refine its answer based on the feedback from Emily and generated a different code and explanation. Emily confirmed the understanding was correct. then provided the source code for the latest one as output. § RELATED WORK §.§ Automatic Code Generation Automatic code generation aims to automate the process of creating executable code from high-level specifications, natural language descriptions, input-output examples, or partial implementations. Within this landscape, our interest lies in code generation models, particularly those harnessing the power of LLMs. These models autonomously grasp program concepts from vast code repositories without being constrained by specific languages or task domains. Recently, the application of LLMs as programming assistants has gained substantial interest in both Natural Language Processing (NLP) and Human-Computer Interaction (HCI) communities. McNutt et al. <cit.> delved into the integration of LLM-based code assistants within notebooks, exploring their adaptability and potential impact. <cit.> explored the impact of input content and model parameters on the overall performance of the final output code. Despite their strong capabilities in code generation, these LLM-powered applications <cit.> often assume users possess specific knowledge for effective code generation, posing challenges for non-professional programmers. Recognizing these limitations, our work is dedicated to designing an interaction paradigm that is not only more effective but also easier to use, aiming to better assist non-professional programmers in their coding tasks. §.§ Interactive Code Generation The research of interactive code generation has become an important topic even before the popularization of LLMs <cit.>. An interactive code generation system typically consists of three components: a code generation model, a mechanism to identify and request user feedback on errors in the predicted code, and an error correction model to refine the code based on user feedback. 
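Concretely, these three components can be viewed as a simple loop. The sketch below is our own schematic rendering of that architecture (the function names and the LLM-call placeholder are hypothetical, not an implementation of any cited system):

    def interactive_codegen(question, context, llm, get_user_feedback, max_turns=5):
        """Schematic loop: generate code, solicit feedback, refine until accepted."""
        code = llm(f"Write code for: {question}\nContext: {context}")   # code generation model
        for _ in range(max_turns):
            feedback = get_user_feedback(code)                          # feedback mechanism
            if feedback is None:                                        # user accepts the answer
                return code
            code = llm(                                                 # error correction model
                f"Question: {question}\nCurrent code: {code}\n"
                f"User feedback: {feedback}\nRevise the code accordingly."
            )
        return code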
Previous studies looked into interactive code generation through a method called programming-by-examples (PBE), where users define how a program should behave by giving input-output examples. Zhang et al. <cit.> offered methods to augment input-output examples and help refine the user’s true intent. Drosos et al. <cit.> proposed Wrex which creates a programming-by-example environment within a computational notebook. Lahiri et al. <cit.> tried to generate test suggestions that align with human intent. However, even proficient programmers can not provide representative examples that cover as many as possible practical situations. Consequently, while the provided or augmented examples may match the expected output, they may deviate from the user's intended behavior when subjected to unseen cases. In addition to interaction through examples, prior works have utilized multiple-choice feedback mechanisms. Studies conducted by <cit.> and <cit.> approached this by explaining components in a generated SQL code, and if the logic is wrong, users were prompted to select the correct components as a form of feedback. Another approach, proposed by <cit.>, identified uncertain tokens in the user's NL commands and sought user choices for paraphrases to enhance clarity. However, the traditional multi-choice feedback paradigm, while effective in error correction for text-to-SQL generation, exhibited limitations in terms of user-friendliness, efficiency, and generalizability to more complex programming languages. In particular, users could only passively respond to system-presented choices, posing challenges in facilitating a more dynamic and user-centric interaction. To address this, free-form NL feedback has been introduced <cit.>. <cit.> demonstrated the effectiveness and promise in correcting parsing errors via NL feedback and annotated SPLASH dataset. <cit.> proposed an algorithm for improving a code generation model from NL feedback. However, those methods require training or fine-tuning additional models, which becomes impractical with LLMs having an extremely large amount of parameters. As LLMs enhance their ability to follow instructions, interactive code generation has gained increased attention and is being explored extensively in current research efforts <cit.>. Despite these advancements, a remaining challenge is how to seamlessly integrate code generation models into interfaces, make them understandable, and give users a sense of control. This challenge prompts the need for further exploration and refinement to enhance the overall user experience in interactive code generation. <cit.> demonstrated the efficacy of a conversational programming assistant powered by an advanced LLM in answering queries, generating context-specific code, and facilitating follow-up questions. <cit.> proposed GenLine, an IDE-based LLM-powered code assistant for web development, leveraging NL and command-like human-LLM interaction. However, it is important to note that these tools are primarily designed for experienced programmers and may pose challenges for non-professional programmers, especially those unfamiliar with the IDE environment. In our work, we tackle this issue by introducing NL explanation aimed at providing a simple and easy way for non-professional programmers to interact with code generation models powered by LLMs. This is a step forward in closing the gap for new programmers, making interactive code generation more accessible and user-friendly. 
§.§ LLM Interpretability and Explanability With the use of NL explanations, our work is also related to existing efforts in interpreting and explaining LLMs. Among others, <cit.> presented one of the first works on post-hoc explanations of LLMs for code generation. <cit.> explored the integration of code explanations generated by LLMs into an interactive e-book on web software development and assessed student engagement and utility across different explanation types. <cit.> examined the abilities of LLMs in producing programming exercises and code explanations, finding that the majority of generated content is both novel and coherent, with potential applications in educational settings. <cit.> evaluated how different types of explanations, instructions, and controls affect zero- and few-shot performance of LLMs. <cit.> and <cit.> demonstrated that the generation of reasoning steps contributes to the production of accurate final answers. <cit.> explored the self-debug ability of LLMs when prompting the models to generate NL explanations to their own predictions. However, none of the prior works explored the possibility of using succinct explanations in interactive code generation for non-professional programmers. § INTERACTIVE CODE GENERATION FOR NON-PROFESSIONAL PROGRAMMERS BASED ON NL EXPLANATIONS §.§ Design Goals We aim to design an effective and easy-to-use interaction paradigm for non-professional programmers. In this paper, we use the term non-professional programmers to encompass both novice and end-user programmers. Novice programmers are individuals who are new to programming and have limited experience or knowledge in writing code. In contrast, end-user programmers are individuals who may not have formal training in computer science or software engineering but use programming interfaces to automate tasks, develop scripts, or modify existing software applications for personal or professional use. Our concept of non-professional programmers bridges these categories, addressing the needs of both those new to programming and those seeking to utilize programming tools for specific tasks without extensive expertise. With this definition in mind, we propose a novel interaction paradigm, where the LLM-powered system repeatedly explains its generated code solution, requests user feedback on the explanation, and refines its generation based on the feedback, until the user is satisfied with the code solution. Our approach to designing such an interaction paradigm for non-professional programmers was shaped by several key design goals. (1) Enable non-professional programmers to understand the model-generated code and identify errors without directly interacting with the code. This means making explanations simple, avoiding technical jargon, and maintaining clarity and conciseness. Beyond understanding the model-generated code, it is essential that users can effectively provide feedback on any errors identified within the explanation. This requires the design of explanations capable of accurately capturing errors in the source code. To achieve this, the NL explanations should clarify the code's functionality and be structured logically to guide users through the thought process behind the generated code. By presenting the information logically, users can easily understand the reasoning behind each code element, thereby boosting their confidence and capability to engage with the system. 
This iterative feedback loop, where users can readily identify and articulate issues within the explanation, is crucial for refining the model's output and improving the overall effectiveness of the interaction paradigm. This goal raises the research questions (RQ): * RQ1: Can the designed NL explanation accurately describe the source code? * RQ2: Can users provide effective feedback based on the NL explanation? (2) Incorporate user feedback for error correction. The interaction paradigm must seamlessly integrate user feedback to refine the generated code. In our design, users interact with explanations without seeing the source code. This thus leads to a potential discrepancy, i.e., the user feedback targets errors presented in the explanation, rather than directly the source code. It results in a research question: * RQ3: Can user feedback based on the explanation be successfully applied to the source code for error correction? In the remaining section, we will first give details of our explanation in Section <ref> and then present our designed interaction paradigm in Section <ref>. §.§ Natural Language Explanation The most straightforward prompt is to ask the LLM to explain their prediction. However, this vanilla approach often results in explanations that are lengthy and too technical to be read by non-professional programmers. To address this limitation, we propose two distinct styles for program explanations: §.§.§ Question Restatement from Source Code In our preliminary experiments, we observed that a significant portion of LLM errors in text-to-SQL stemmed from a misunderstanding of concepts within the original question. However, such mistakes can hardly be captured from a vanilla explanation of the SQL code, which is often full of technical jargon distracting users from identifying concepts involved in the code (Figure <ref>, right). Observing this challenge, we instead propose to use “restated question” from the source code as an explanation for text-to-SQL programming (Table <ref>). A restated question is an NL question generated by the LLM to describe the intent of a model-generated code. Prior work <cit.> found that, by prompting users to compare the restated question with the original one, they can easily spot any mismatched concepts. To non-professional programmers, such explanations are very concise and do not involve any technical terminologies, rendering them not only effective but also easy to understand. Our investigation revealed that error identification is more straightforward when two NL questions share a similar linguistic structure. To align the restated question with the style of the source question, we additionally include the user's initial question in the prompt and explicitly instruct the LLM to produce a restated question following a similar language style. Together, these design considerations result in our prompt for question restatement from the source code in Table <ref>. Specifically, we prompt the LLM to generate the restated question given the source code along with task instruction “Translate the following SQL into question. The question should be consistent with the SQL and follow a similar style as the original question.” To prompt an LLM to generate restated questions following this design, we manually wrote 13 triplets of <SQL, Original Question, Restated Question> as few-shot demonstrations. 
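One way such a prompt could be assembled programmatically is sketched below; the demonstration triplet and helper names here are hypothetical placeholders standing in for the 13 manually written demonstrations, not the exact prompt text used by the system.

    INSTRUCTION = ("Translate the following SQL into question. The question should be "
                   "consistent with the SQL and follow a similar style as the original question.")

    # hypothetical demonstration triplet; the system uses 13 manually written ones
    DEMOS = [{
        "sql": "SELECT name FROM singer WHERE age > 30",
        "original": "Show the names of singers older than 30.",
        "restated": "What are the names of the singers whose age is above 30?",
    }]

    def restatement_prompt(predicted_sql, original_question):
        parts = [INSTRUCTION]
        for demo in DEMOS:
            parts.append(f"SQL: {demo['sql']}\nOriginal Question: {demo['original']}\n"
                         f"Restated Question: {demo['restated']}")
        parts.append(f"SQL: {predicted_sql}\nOriginal Question: {original_question}\n"
                     f"Restated Question:")
        return "\n\n".join(parts)

The LLM's completion of the final "Restated Question:" slot then serves as the explanation shown to the user.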
The SQL queries were selected to encompass a broad range of syntax that may appear in the given programming language, such as keywords “SELECT”, “WHERE”, “DISTINCT”, etc. §.§.§ Concise Description of Source Code In our exploration, we observed that question restatement proves to be more effective for short code snippets, like SQL queries, and can effectively address conceptual misunderstanding errors. However, it cannot capture the inner logical errors in scenarios involving lengthy generated code and intricate coding tasks, especially when the LLM makes an inaccurate generation despite correctly understanding the input question. This raises a need for a more fine-grained exploration of the inner logic. We achieve this by proposing a concise explanation of the source code, striking a balance between the succinct question restatement and the technical and lengthy line-by-line explanation. To this end, we first randomly select 8 examples and annotate each with a human-written description. This description consists of a brief introduction summarizing the predicted code at an abstract level. Following this introduction is a detailed breakdown of the logic behind the code, illustrating how LLM solves the question. In contrast to a line-by-line explanation, our concise description offers a more readable and relatively shorter presentation of the code while ensuring the inclusion of crucial details. The annotated examples then serve as the few-shot demonstrations when we prompt an LLM to generate the explanation. A sample prompt is shown in Table <ref>. §.§ Interaction Paradigm To utilize our designed explanation for assisting non-expert programmers in coding tasks, we introduce an interaction paradigm as shown in Figure <ref>. We included sample prompt templates for each stage used in our paradigm in Appendix <ref>. §.§.§ Code Generation The user first asks a coding question and provides related context (e.g., database schema for text-to-SQL and test cases for Python code generation) for code generation through . The backend LLM then tries to generate an initial answer code. This initial code generation is achieved via few-shot in-context learning, where we feed the LLM with a few examples demonstrating the code generation task. The specific prompts we adopt are adapted from that of <cit.> for text-to-SQL and <cit.> for Python code generation. §.§.§ Explanation Generation After the initial code is generated, the backend LLM then generates the NL explanation to explain the code. As introduced in Section <ref>, we adopt question restatement as explanations for text-to-SQL and concise description as explanations for Python code generation. §.§.§ User Feedback Request then presents the NL explanation and the execution results by running the generated code to the user, and seeks feedback on whether their question is correctly addressed. We employ different strategies for text-to-SQL and Python code generation. In text-to-SQL, we request user feedback by asking users “Here is what I understand based on your question: [restated_question]. If you think that my understanding is correct, you can mark this question as “complete”. Current prediction will be saved as final answer.” For Python code generation, we integrate an external Python code interpreter to evaluate whether the generated code passes all tests. If the model-generated code fails any tests, users will be prompted with, “The code I generated did not pass all test cases. Could you identify what is wrong with it? 
Here is a description of the code: [code_description]. The execution results are displayed on the top right.” Following the instructions, users can then write their feedback in NL, pointing out errors or identifying any missing requirements found in the explanation. §.§.§ Error Correction After receiving the user feedback, the backend LLM performs error correction by taking the initial code, its NL explanation, and the user feedback as input, and then generating a new source code as the corrected solution. This correction task is similarly formulated as few-shot in-context learning, for which we include 4 human-annotated error correction examples as demonstrations in the prompt to guide the LLM on how to perform this task. After this error correction, the backend LLM generates a new explanation for the revised code solution. seeks further user feedback, and repeats the process until the user cannot find more issues with the code solution. § USER STUDY We evaluate by conducting a user study involving 20 participants. The purpose of the study was to gain insight into how non-professional programmers perceive and interact with an NL explanation of code in programming and debugging tasks. §.§ Participants To recruit non-professional programmers, we targeted undergraduate students from a range of majors through recruitment flyers and advertising emails. Each applicant completed a demographic survey with items on their programming background and experience level. We specifically selected participants who were beginners in programming, such as first-year computer science students who had only completed an introductory programming course with minimal practical experience, as well as individuals from non-computer science majors, who had no prior programming experience but were familiar with basic mathematical logic and could benefit from programming in their respective works. From 50 applicants, we recruited 22 participants in total. 20 participants completed the study. Of the two who did not, one participant encountered difficulties during the warm-up tasks, struggling to understand the basic concepts described in the materials. As a result, the participant opted to withdraw from the study. A second participant could not understand the basic concepts of database querying and only completed the test questions for Python code generation. For the remaining 20 participants who completed the full user study, none of them had significant experience with both programming languages. 18 out of 20 participants reported that they did not have any experience in database and SQL queries, while the remaining 2 took database classes at university but had no practical experience. 1 out of 20 participants reported that they had no prior experience in Python, while the remaining 19 had taken an introductory Python class. Additionally, 3 out of 20 participants reported that they had attempted to use ChatGPT <cit.> for coding. §.§ Setup §.§.§ Backend LLM and Testing Question Selection We use GPT-3.5 (version: turbo-0613) as our backend LLM and include 10 questions for each task from the Spider <cit.> and the MBPP <cit.> dataset, respectively, where GPT-3.5 demonstrated errors in its initial code generation. For the user study results to be reliable across programming questions with various complexities, we chose the questions carefully based on their difficulty levels. 
For text-to-SQL, we follow the same criteria proposed in Spider <cit.> and define the question difficulty based on the number of clauses and conditions involved in the ground-truth SQL query (e.g., queries that contain more SQL keywords are considered to be harder). For Python code generation, we define the difficulty level by the syntax complexity and readability (e.g., a longer code containing nested function calls is considered harder). Specifically, we randomly sampled 20 questions for each task, manually assessed their difficulty level, and then retained 10 of them that were evenly distributed across different difficulty levels. Finally, participants were provided with the same set of test questions for each task for consistency. §.§.§ Baseline System For comparison, we consider vanilla GPT-3.5 as the baseline. For a fair comparison, we prompted the baseline system with the same few-shot examples in initial code generation, which resulted in model-generated code containing exactly the same errors as 's in the first round. Then, users were free to interact with GPT-3.5, and no additional prompts were employed in the following interactions. The baseline system shared the same UI and test questions as . The only difference is that we removed the execution results (component B in Figure <ref>) from the UI of vanilla GPT-3.5 since it is hard to extract pure code from its generation. Participants were then randomly split into two groups (10 for each), interacting with either or the vanilla GPT-3.5. §.§ User Interface We designed a User Interface (UI) (Figure <ref>) for the proposed interaction paradigm with Gradio[https://www.gradio.apphttps://www.gradio.app.] and used it to conduct the user study. In text-to-SQL, the UI includes a database, containing all tables and their attributes along with three sample records; execution results, showing the return values by running the predicted code against the database; and Chatbot, showing the conversation history between the user and LLM. For Python code generation, our UI includes a panel showing the test cases and their expected outputs. The execution results show the values returned by running the predicted code against the provided test cases. In both interfaces, a text box is provided as an entry point for users to interact with the LLM. A “Complete” button is provided if the user thinks the current prediction is correct and wants to proceed to the next question. Two “Skip” buttons are provided for cases where participants struggle to understand the given question or when the LLM fails to generate the correct answer after a series of interactions. In the baseline UI with vanilla GPT-3.5, execution results are omitted. This is because GPT-3.5 does not consistently provide pure code as a response. As a result, it becomes challenging to distinguish the code from its responses and execute it. Currently, the UI primarily serves the purpose of conducting user studies. To ensure consistency and comparability across all participants, users are restricted from posing their own coding questions. Instead, all questions are predetermined, and users are not required to provide any relevant context themselves. In practical use, can be extended to accept arbitrary user input and offers a function button for users to upload files relevant to their specific needs. §.§ Study Procedure The user study comprised three phases: warm-up tasks, formal study, and a post-task interview. 
Recognizing that all participants entered the study with limited experience in the specified tasks and were unfamiliar with our UI, we initiated the study with a warm-up session. An experimenter provided an overview of the two tasks and introduced participants to the various functionalities embedded within our UI. Participants then actively engaged with the UI, tackling two warm-up questions for each task to foster familiarity and proficiency. Throughout this warm-up session, participants were encouraged to pose any questions related to the tasks or coding process, fostering a collaborative and informative environment. The warm-up session was essential for getting participants familiar with the tasks and helping them become comfortable using the UI. Participants could ask questions about the tasks or coding during the study, providing them with more clarity. A fail-safe was included, so if participants found the training questions challenging or faced issues with the UI, they were guided to end their participation. Upon successful completion of the warm-up tasks, qualified participants then proceeded to the formal study, where they were tasked with independently solving a set of challenging test questions. During this evaluative phase, the experimenter assumed a passive role, intervening solely to address clarification questions or resolve any technical issues encountered by the participants. This intentional shift allowed for an authentic assessment of the system. To encourage an efficient completion of the user study and to prevent participant fatigue, we set a 5-minute time limit for each question. Participants were instructed to skip any question they could not solve within this time frame. After the thorough formal study, participants took part in a reflective interview to share their thoughts. This interview provided the experimenter with a chance to explore participants' experiences and gather detailed feedback on using the system. The interview questions covered various aspects, from overall experiences to specific components that assisted them in solving coding tasks or challenges they encountered during the formal study, giving participants a platform to express their thoughts fully. By employing this careful three-phase approach, our user study aimed to objectively evaluate the performance of , collect qualitative insights regarding user experiences, and facilitate a comprehensive exploration of potential challenges. § RESULTS §.§ Evaluation For evaluation, we report the Success Rate of whether user can obtain the correct answer code using each system on both tasks. For text-to-SQL, we adopt the official execution accuracy metric from Spider <cit.>, which compares the execution results between the LLM-generated code and the ground-truth code against the database. For Python code generation, we execute the generated code against the given test cases to see whether the code can pass all tests. In addition, we also report the average time spent per question (denoted as “Avg. Time/Question”) to measure how efficient of users using each system in coding tasks. For each reported metric, we report the mean and the standard deviation. An independent samples t-Test with an alpha level of 0.05 was used to determine whether there was statistical evidence that the associated population means between the two conditions were significantly different. 
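As an illustration of this evaluation protocol, the success rate and the independent-samples t-Test could be computed as sketched below. This is our own sketch: the per-participant numbers are hypothetical, and the unsandboxed exec-based test check is a simplification, not the evaluation code used in the study.

    from scipy import stats

    def passes_all_tests(code, test_cases):
        # Python task: a prediction counts as correct only if it passes every test case.
        # Simplified check: no sandboxing or timeouts.
        env = {}
        try:
            exec(code, env)
            return all(eval(t, env) for t in test_cases)   # e.g. "func(2, 3) == 5"
        except Exception:
            return False

    def success_rate(per_question_correct):
        return sum(per_question_correct) / len(per_question_correct)

    # per-participant success rates for the two conditions (hypothetical numbers)
    system_group  = [0.9, 0.8, 0.7, 0.9, 0.8, 0.7, 0.9, 0.8, 0.9, 0.7]
    vanilla_group = [0.6, 0.7, 0.5, 0.8, 0.6, 0.7, 0.6, 0.5, 0.7, 0.6]
    t, p = stats.ttest_ind(system_group, vanilla_group)    # independent samples, alpha = 0.05
    print(f"t = {t:.3f}, p = {p:.3f}, significant = {p < 0.05}")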
In this section, we first present an overview of the overall performance achieved with our designed interaction paradigm, NL explanations, and feedback in the user study. We then examine each part more closely for a detailed analysis. §.§ Overall Performance Table <ref> presents the average success rate on the test questions across all participants for the two systems. The results (as well as our follow-up analysis) exclude test samples that participants skipped because they could not understand the initial question ("Cannot understand the question"): in text-to-SQL, 6 such samples across 4 distinct questions were skipped out of the 100 total samples (10 participants by 10 test questions); in Python code generation, 15 samples across 7 distinct questions were similarly skipped. All remaining questions were kept in the analysis, including those skipped due to "Cannot get the correct answer": even when users believed IntelliExplain could not correct the errors based on their feedback, they still identified errors in the explanation and attempted to correct them. IntelliExplain enables users to achieve success rates 11.6% and 25.3% higher than the vanilla GPT-3.5 group in text-to-SQL and Python code generation, respectively. The t-Test showed that the differences between the two groups are statistically significant in success rate (SQL: t=1.935, p=0.043; Python: t=2.361, p=0.021) and in time spent per question for text-to-SQL (t=-2.611, p=0.014), but not in time spent per question for Python code generation (t=-1.374, p=0.101). This last observation could be attributed to the more complex task and more fine-grained explanations in Python code generation: participants in both groups needed more time to navigate and comprehend the task. This substantial performance improvement serves as a quantitative validation of the effectiveness of IntelliExplain and aligns with the qualitative feedback gathered during post-task interviews: "This system provided me with an amazing experience that I had never had before"; "Your system is really helpful for me in programming. I'm not very skilled at programming, but with your system, I find it easy to understand and write code using natural language."; "Your system is particularly useful because of its natural language explanations, allowing me to understand and debug code without needing to inspect the source code directly"; etc. Participants consistently reported a more satisfactory experience when using IntelliExplain, highlighting its effectiveness in enhancing the overall user-LLM interaction in code generation. Moreover, the advantages of IntelliExplain extend beyond success rates: it also reduces users' time by 60 seconds per question in text-to-SQL and 25 seconds per question in Python code generation. This shows that our designed interaction paradigm not only improves task performance but is also more practical and efficient. We then look into what exactly makes IntelliExplain work better and quicker. We attach one example from our user study for Python code generation in Figure <ref> (an example of SQL code generation has been presented in Figure <ref>).
In this example, the participant successfully composed the correct Python code using just 2 interactions with , whereas users relying on the baseline GPT-3.5 required significantly more interactions to understand the generated code and validate whether it is correct. Finally, participants using vanilla GPT-3.5 failed to identify errors in the generated code, which resulted in an incorrect answer without the user's awareness. Moreover, we noticed that in the first turn of interaction, did not make corrections accordingly based on user feedback. This reveals room for future improvement of LLMs in incorporating human feedback with diverse styles. In the second round, LLM successfully refined its answer guided by the rephrased feedback. Furthermore, the post-task interview allows us to gather more comprehensive feedback from our participants, reaffirming the significant role our proposed explanations play in assisting participants throughout the code generation process. In fact, an overwhelming majority of participants, a notable 9 out of 10 participants who are using in the user study, expressed their appreciation for the explanations, highlighting that it significantly contributes to their understanding of the source code. These participants specifically noted a preference for explanations over the raw code itself, indicating the explanatory content's perceived value. On the other hand, one participant offered feedback that “Occasionally the explanations fell short, particularly when the logic within them was unclear.” This feedback underscores the need for future improvement in ensuring the clarity and effectiveness of the explanations and addressing any potential challenges that participants might encounter in comprehending the generated code. All participants who used vanilla GPT-3.5 expressed uncertainty about debugging the code through the explanation generated by vanilla GPT-3.5. In subsequent sections, we present further insights surrounding the research questions (RQs) we outlined in Section <ref>, and then discuss a comparison with the vanilla GPT-3.5 in Section <ref>. §.§ RQ1: Can the Designed NL Explanation Accurately Describe the Source Code? To illustrate the effectiveness of our proposed NL explanation, determining its accuracy in describing the model-generated code and its ability to capture errors that exist in the source code, we manually examined all explanations of generated code for every test question. As depicted in Table <ref>, we found that our explanations generally align precisely with the generated code; however, they do not always help find code errors. This limitation arises from the challenge of encapsulating intricate inner logic into concise explanations. This highlights the challenge of striking a balance between presenting concise and easy-to-understand NL explanations and presenting more fine-grained inner logic of the code. To better understand this phenomenon, we performed a systematic investigation independent of the user study. Specifically, we run GPT-3.5 to generate the answer code for the entire Spider-dev set (1,034 test examples) and the MBPP-test set (500 test examples). Eventually, we collected 214 error predictions for text-to-SQL from Spider-dev and 140 error predictions for Python code generation from MBPP-test. Then, we prompted GPT-3.5 to generate the explanation for these predictions following our method in Section <ref>. For each task, we randomly selected 30 cases and manually inspected their quality. 
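A sketch of this error-collection step is given below (illustrative only; the dataset-loading helpers and function names are placeholders, not released code):

    import random

    def collect_error_predictions(examples, generate_code, is_correct):
        """Run the model over a benchmark split and keep only the wrong predictions."""
        errors = []
        for ex in examples:                      # e.g. Spider-dev or MBPP-test examples
            pred = generate_code(ex["question"], ex.get("context"))
            if not is_correct(pred, ex):         # execution accuracy / test-case check
                errors.append({"example": ex, "prediction": pred})
        return errors

    # errors = collect_error_predictions(spider_dev, gpt35_generate_sql, execution_match)
    # sample = random.sample(errors, 30)         # cases for manual inspection of explanations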
We found that only one explanation was inconsistent with the source code in text-to-SQL and all explanations precisely describe the Python code. Among those precise explanations, 51.7% for text-to-SQL and 66.7% for Python code generation allows for easily capturing code errors. The results thus confirmed our observation from the small samples in the user study. We attached one example for each task to show the preciseness of our designed explanation in Table <ref>. In practice, participants were able to recognize code errors from the NL explanations and had at least one turn of interaction with for 53% of the time in text-to-SQL and 83% of the time in Python code generation. This indicates that participants can understand the explanations and potentially locate errors in the explanations. In the post-task interview, we gathered feedback from all participants using specifically focusing on their experiences with our designed NL explanation. For text-to-SQL, all participants thought that the explanation could help them understand the meaning and logic behind the source code of SQL queries. We noticed two challenging questions that most of the participants could not realize any errors from the explanations. A subsequent manual investigation pointed to the inherent ambiguity within the original questions, which was likely due to the original questions being unclear and lacking specific conditions such as requesting sorted order without specifying how to sort it (descending or ascending). For Python code generation, most participants appreciated the explanation for aiding in understanding and debugging the source code. However, one participant reported the explanation was less useful and asked if we could provide both the explanation and source code. We inquired about the reason and collected “The explanation does help, but it lacks step-by-step logic on how the source code performs. From the explanation, I thought the code was correct, but it could not pass all tests.” This feedback suggests that, while our NL explanations were generally effective, there were some variations in user preferences and the need for improving the explanation to include more fine-grained logic. This insight suggests that future work should tailor the NL explanation to be more “personalized” for meeting individual users' needs. §.§ RQ2: Can Users Provide Effective Feedback based on the NL Explanation? Throughout the user study, we observed a variety of feedback from participants using , which can be broadly classified into three categories as shown in Table <ref>: * Instructions for Error Correction. Users can spot errors in the explanation and suggest how to fix them. We found that 57.5% of text-to-SQL users and 71.0% of Python code-generation users gave this feedback type. This also implies the effectiveness of our explanations in aiding non-professional programmers in code understanding and debugging. * Question Rephrasing. This type of feedback suggests that users perceive errors in the explanation, attributing them to the underspecified intent of the original question. In our observations, participants are more likely (39.2%) to provide this feedback type in text-to-SQL compared to Python code generation (2.1%). This discrepancy is influenced by the distinct explanation methods used in these tasks. 
In text-to-SQL, the restated question motivated participants to compare the intents presented in their initial question and the restated one; when they identified inconsistency, they might naturally think about providing a complete question with clearer intent. In addition, there was no clear pattern indicating whether participants chose to provide this feedback based on their experience or the question difficulty, as most participants lacked experience with SQL, and this feedback occurred evenly across all difficulty levels. Conversely, in Python code generation, participants interact with concise descriptions of the source code, which include more logic behind the source code and result in less possibility of rephrasing the original question. * Step-by-Step Instruction. Users offer detailed step-by-step instructions to guide the model in solving the problem based on their understanding. We observed that Python code generation participants (26.9%) are more inclined to provide this feedback type compared to text-to-SQL participants (3.3%). This is likely caused by both the distinct explanation methods and the task complexities. Moreover, users tended to provide this feedback if they felt confident in solving the question themselves, particularly among users who had taken introductory programming courses. Overall, regardless of the different types of feedback received from the user study, it is evident that participants recognized errors in our explanations and tried to provide guidance for error correction. The substantial proportion of feedback pinpointing errors in the explanation indicates the preciseness and utility of our explanations in describing the source code in practical scenarios. Finally, an intriguing question is, when users provided feedback, did they benefit more from the NL explanation or the code's execution results? With the inclusion of execution results in our UI, participants have the option to provide feedback not only based on the explanations but also on the actual execution results. This introduces a valuable avenue for assessing the effectiveness of our explanations. During the post-task interview, participants are queried about the number of instances in which they chose to provide feedback primarily influenced by the observed execution results rather than relying solely on the explanations. All 10 participants said they only use the explanations to fix errors in text-to-SQL. Unlike text-to-SQL, test cases and expected outputs are provided in Python code generation. Thus, 7 out of 10 participants started by looking at the execution results. If the results were wrong, they knew there might be errors in the explanation, so they spent more time checking it. The other 3 participants indicated that they began by reviewing the explanation. If they identified errors, they immediately provided feedback without considering the execution results. This demonstrates the indispensable role of our explanation. §.§ RQ3: Can User Feedback based on the Explanation be Successfully Applied to the Source Code for Error Correction? In Table <ref>, the success rates of different feedback types illustrate 's efficacy in integrating human feedback for error correction. Specifically, achieved a success rate of 36.0% for text-to-SQL and 65.6% for Python code generation when users provided feedback of type “Instructions for Error Correction”. 
This indicates a reasonable level of success in addressing user-provided feedback, given that this type of feedback was the most commonly provided by participants in both tasks. On the other hand, it showcases the practical utility and responsiveness of our designed NL explanation and interaction paradigm. The 25.9% success rate for "Question Rephrasing" feedback in text-to-SQL also demonstrates the effectiveness of in incorporating human feedback. People typically elaborate their questions to use more precise words and include a few more details. With the rephrased question, can generate code that is better aligned with human intent. This exposes the limitations of LLMs in handling confusing or uncertain inputs and leaves room for improvement. It's worth noting that "Question Rephrasing" feedback occurred only twice in Python code generation, potentially explaining its higher success rate. In Python code generation, user feedback in the form of "Step-by-Step Instruction" ranks second. We investigated user behavior regarding this type of feedback. Through post-task interviews, we found that despite participants having limited Python programming knowledge, they were able to learn from our explanations and incorporate their own solutions into the feedback. This underscores the potential utility of our explanations in educating novice programmers, particularly in introductory programming classes. Despite the achievement in incorporating human feedback in , we observed a notable gap in the success rates from the user study, especially in SQL. We queried each participant about their experiences with the successful application of their feedback in the error correction phase. 7 out of 10 participants using reported that their feedback was successfully applied. However, the remaining participants noted instances where their feedback did not yield the desired results. A closer examination of these cases revealed various contributing factors. Firstly, user feedback was sometimes too vague or abstract and lacked the specificity needed for precise corrections. Secondly, misaligned reasoning between participants' mental models and the model's reasoning led to suggestions based on flawed assumptions. Lastly, for complex code, small changes in the explanation were challenging for users to track, emphasizing the importance of highlighting those changes in explanations. In essence, narrowing communication gaps through improved explainability, alignment, and transparency is essential for the effective application of user feedback. §.§ Comparison with the Baseline System (Vanilla GPT-3.5) Participants using the vanilla GPT-3.5 without our designed explanations consistently reported frustration since the raw code generated by the model was too challenging to understand, even when it was actually correct. Lacking professional programming experience, they struggled to understand the logic, structure, and meaning behind the code. This hindered their ability to identify errors or provide meaningful feedback to the system. As shown in Figures <ref> and <ref>, participants using vanilla GPT-3.5 needed to understand each technical function in the source code before they could debug it. When they prompted GPT-3.5 to explain its reasoning process, the length and complexity of GPT-3.5's verbose technical descriptions posed an impenetrable barrier. Participants were overwhelmed by unfamiliar terminologies and concepts irrelevant to comprehending the core logic.
Without the capacity to parse these abstruse explanations, users could neither efficiently evaluate the model's thought process nor supply useful debugging feedback. In contrast, participants highlighted that the concise yet informative NL explanations provided by elucidated the model's code generation reasoning clearly and in an accessible manner. By distilling complex technical concepts into easy-to-understand language, the explanations unlocked comprehension and debuggability for non-professional programmers. Additionally, by pinpointing specific misunderstandings in localized areas of the explanations, users could provide meaningful feedback to correct errors. Overall, participants strongly affirmed that the NL translation of code logic, coupled with targeted debugging via the explanation-and-feedback loop, enhanced their ability to achieve successful code generation despite limited technical skills. § PERFORMANCE OF WITH GPT-4 AS BACKBONE LLM Our main investigation has been based on GPT-3.5, rather than the state-of-the-art GPT-4, as the backbone LLM. A natural question is thus whether the effectiveness we have shown with in this study, as well as the findings we have discovered, still applies when switching to the more powerful GPT-4. Limited by the available resources and budget, we could not re-conduct the full user study. However, to gain some preliminary insights, we performed a pilot study with one participant who had no prior experience in SQL and Python programming. Given GPT-4's enhanced code generation capabilities, some questions that we used in the main user study were no longer suitable. Specifically, we observed that 3 out of 10 questions in text-to-SQL and 2 out of 10 questions in Python code generation could be accurately answered by GPT-4 without any specific interaction design. Consequently, we excluded these questions and randomly selected additional ones to keep the same number of questions. The results are presented in Table <ref>. The results show an improved success rate in text-to-SQL and a comparable success rate in Python code generation compared to GPT-3.5's (Table <ref>), which indicates that a stronger LLM could potentially yield even more effective human-LLM interaction for code generation. The results also demonstrate that our designed prompts and interaction paradigm can work with a more powerful LLM. To gain deeper insights into the differences between GPT-4 and GPT-3.5, we conducted an analysis focusing on the quality of their generated NL explanations. As expected, we observed that GPT-4 could produce explanations as precise as GPT-3.5's. Beyond preciseness, we found that in two cases, the explanations generated by GPT-4 showed even higher quality in terms of comprehensibility, as exemplified in Table <ref>. This enhanced comprehensibility could improve the user experience by providing clearer insights into the generated code. However, the observation that only two explanations revealed this enhanced comprehensibility indicates that there is still room for improvement even with a more advanced LLM. We also examined whether the participant could provide effective feedback for error correction. In the pilot study, the participant mainly provided feedback of type "Instruction for Error Correction", except for one "Question Rephrasing" feedback for text-to-SQL. The observation reaffirms that with our designed explanations, users can find errors and provide feedback without directly interacting with the source code.
However, its low success rate underscores the need for future exploration on this topic. § DISCUSSION Our innovative NL explanation and interaction paradigm introduces several noteworthy advantages that significantly enhance the user-friendly nature of interactive code generation, particularly for non-professional programmers. However, some limitations exist. Firstly, the inherent ambiguity in human language can result in LLMs generating incorrect answers. Addressing this challenge requires enhancing the model's ability to distinguish unclear concepts in the question and request user clarification on ambiguous terms. Such improvements would foster a more natural and user-centric interaction between humans and LLMs. In the future, researchers are encouraged to extend the interaction paradigm we presented in this work with this richer user-system interaction. Additionally, when dealing with more intricate code, the explanations fell short of fully capturing complex logical errors. Striking the optimal balance between detailedness and brevity in explanations remains an ongoing challenge. While the success rates affirm the effectiveness of user feedback for error correction, there is still potential for improvement. Future efforts focusing on comprehending and incorporating diverse user input could significantly enhance the interactive refinement process. As we discussed at the end of Section <ref>, users could benefit from “personalized” interactive code generation systems, where the way how an LLM explains a code can be customized to better fit their needs and preferences. § CONCLUSION In this work, we introduce a novel human-LLM interaction paradigm utilizing natural language explanations to enhance the user-friendliness of interactive code generation for non-professional programmers. The results from the user study validate the advantages of our approach. Results show the explanations aid comprehension and debugging, with users providing meaningful corrective feedback from the explanation. The interactive feedback cycle also successfully handles user feedback to refine the code. Together this leads to higher success rates and better overall experience than its counterpart vanilla LLM. § ACKNOWLEDGMENTS This project was sponsored by NSF SHF 2311468, GMU College of Computing and Engineering, and GMU Department of Computer Science. We appreciate the Office of Research Integrity and Assurance at GMU for their work in reviewing and approving our Institutional Review Board (IRB) application. We also appreciate comments from students in GMU NLP and SE labs. ACM-Reference-Format § PROMPTS USED IN §.§ Few-shot Code Generation §.§.§ Text-to-SQL Sample prompt template of few-shot code generation for text-to-SQL. 
CREATE TABLE department ( department_id number , name text , creation text , ranking number , budget_in_billions number , num_employees number , primary key ( department_id ) )
insert into department (department_id, name, creation, ranking, budget_in_billions, num_employees) values (1,'State','1789','1',9.9600000000000008526,30265.999999999999999);
CREATE TABLE head ( head_id number , name text , born_state text , age number , primary key ( head_id ) )
insert into head (head_id, name, born_state, age) values (1,'Tiger Woods','Alabama',66.999999999999999998) ;
CREATE TABLE management ( department_id number , head_id number , temporary_acting text , primary key ( department_id ) , foreign key ( head_id ) references head ( head_id ) , foreign key ( department_id ) references department ( department_id ) )
insert into management (department_id, head_id, temporary_acting) values (2,5,'Yes') ;
Translate the following question into SQL.
Question: In which year were most departments established?
SQL: SELECT creation FROM department GROUP BY creation ORDER BY COUNT(*) DESC LIMIT 1
[more demonstrations ...]
<input question and database schema>
§.§.§ Python Code Generation Sample prompt template of few-shot code generation for Python code generation.
You are an expert Python programmer, and here is your task: Write a python function to identify non-prime numbers. Your code should pass these tests:
assert is_not_prime(2) == False
assert is_not_prime(10) == True
assert is_not_prime(35) == True
Code:
import math
def is_not_prime(n):
    result = False
    for i in range(2, int(math.sqrt(n)) + 1):
        if n % i == 0:
            result = True
    return result
[more demonstrations ...]
<input question and tests>
§.§ Explanation Generation §.§.§ Explanation for Text-to-SQL Sample prompt template of explanation generation for text-to-SQL.
Translate the following SQL into question. The question should be consistent with the SQL and follow a similar style as the original question.
SQL: select t1.name from person as t1 join personfriend as t2 on t1.name = t2.name where t2.friend in (select name from person where age > 40) except select t1.name from person as t1 join personfriend as t2 on t1.name = t2.name where t2.friend in (select name from person where age < 30)
Original Question: Find the name of the person who has friends with age above 40 but not under age 30?
Explanation: What is the name of the person who has friends older than 40 but does not have friends younger than 30?
SQL: select status_code from bookings group by status_code order by count(*) desc limit 1
Original Question: What is the most frequent status of bookings?
Explanation: Which status code appears most often in bookings?
[more demonstrations ...]
<input SQL query and corresponding original question>
§.§.§ Explanation for Python Code Generation Sample prompt template of explanation generation for Python code.
You are an expert Python programmer. Your task is to write a description for the following Python program. The description should be accurate, concise, and easily understood by non-programmers.
def similar_elements(test_tup1, test_tup2):
    res = tuple(set(test_tup1) & set(test_tup2))
    return (res)
Explanation: This program takes two lists as input and returns a new list containing the common elements between the two input lists.
import math
def is_not_prime(n):
    result = False
    for i in range(2, int(math.sqrt(n)) + 1):
        if n % i == 0:
            result = True
    return result
Explanation: This program checks if a given number is not a prime number. It does this by iterating through all numbers from 2 to the square root of the given number and checking if any of them divide the number evenly. If a divisor is found, the program returns True, indicating that the number is not prime. Otherwise, it returns False, indicating that the number is prime.
def count_ways(n):
    A = [0] * (n + 1)
    B = [0] * (n + 1)
    A[0] = 1
    A[1] = 0
    B[0] = 0
    B[1] = 1
    for i in range(2, n + 1):
        A[i] = A[i - 2] + 2 * B[i - 1]
        B[i] = A[i - 1] + B[i - 2]
    return A[n]
Explanation: This program calculates the number of ways to climb a staircase with n steps using two different types of steps. The program uses dynamic programming to store the number of ways at each step and then returns the total number of ways to reach the top step.
[more demonstrations ...]
<input Python code>
§.§ Feedback-driven Error Correction §.§.§ Text-to-SQL Sample prompt template of error correction for text-to-SQL.
CREATE TABLE department ( department_id number , name text , creation text , ranking number , budget_in_billions number , num_employees number , primary key ( department_id ) )
insert into department (department_id, name, creation, ranking, budget_in_billions, num_employees) values (1,'State','1789','1',9.9600000000000008526,30265.999999999999999);
CREATE TABLE head ( head_id number , name text , born_state text , age number , primary key ( head_id ) )
insert into head (head_id, name, born_state, age) values (...) ;
[all remaining database schemas in the same format ...]
Translate the following question into SQL. For a multi-turn scenario, the user can provide feedback to correct errors in the restated question, which is generated from the SQL.
Question: In which year were most departments established?
SELECT Creation, COUNT(*) AS num_departments FROM department GROUP BY Creation ORDER BY num_departments DESC LIMIT 1
Restated Question: In which year were the most departments established, and how many departments were established in that year?
Feedback: Do not need to count the number of departments.
Refined SQL: SELECT Creation AS num_departments FROM department GROUP BY Creation ORDER BY num_departments DESC LIMIT 1
[more demonstrations ...]
<input question, database schema, restated question, feedback>
§.§.§ Python Code Generation Sample prompt template of error correction for Python code.
You are an expert Python programmer, and your task is to correct the errors in the Original Python code based on User Feedback to answer the question. The user feedback is collected based on the Original Python Code Description.
Question: Write a python function to merge the elements in the same column separately in a list of lists. Your code should pass these tests:
assert merge([['x', 'y'], ['a', 'b'], ['m', 'n']]) == [['x', 'a', 'm'], ['y', 'b', 'n']]
assert merge([[1, 2], [3, 4], [5, 6], [7, 8]]) == [[1, 3, 5, 7], [2, 4, 6, 8]]
assert merge([['x', 'y', 'z'], ['a', 'b', 'c'], ['m', 'n', 'o']]) == [['x', 'a', 'm'], ['y', 'b', 'n'], ['z', 'c', 'o']]
def merge(last):
    first_elements = [sublist[0] for sublist in last]
    last_elements = [sublist[-1] for sublist in last]
    merged_list = [first_elements, last_elements]
    return merged_list
Description: This program takes a list of sublists as input and merges them into a new list.
It first extracts the first elements from each sublist and creates a new list with these elements. Then, it extracts the last elements from each sublist and adds them to the new list. Finally, it returns the merged list containing the first and last elements from each sublist.
Feedback: You should merge all elements at the same location.
Refined Python Code:
def merge(last):
    return [list(ele) for ele in list(zip(*last))]
[more demonstrations ...]
<input question, tests, description, feedback>
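To make the structure of these templates concrete, the sketch below shows one way such a feedback-driven error-correction prompt could be assembled programmatically. This is an illustrative reconstruction only: the helper names and the generic chat_completion() stub are assumptions, not part of the system described in this paper.
# Illustrative sketch of assembling the error-correction prompt shown above.
# The function names and the chat_completion() stub are assumptions.
def build_correction_prompt(demonstrations, question, tests, code, description, feedback):
    # Compose a few-shot, feedback-driven error-correction prompt for Python code.
    header = ("You are an expert Python programmer, and your task is to correct "
              "the errors in the Original Python code based on User Feedback to answer the question.")
    shots = "\n\n".join(demonstrations)  # worked examples, as in the template above
    query = (f"Question: {question}\nYour code should pass these tests:\n{tests}\n"
             f"{code}\nDescription: {description}\nFeedback: {feedback}\nRefined Python Code:")
    return f"{header}\n\n{shots}\n\n{query}"

def chat_completion(prompt):
    # Placeholder for a call to the backbone LLM (e.g., GPT-3.5); assumed, not shown in the paper.
    raise NotImplementedError

# Usage: refined_code = chat_completion(build_correction_prompt(demos, q, tests, code, desc, fb))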
http://arxiv.org/abs/2405.09955v2
20240516100116
Dual-band feature selection for maturity classification of specialty crops by hyperspectral imaging
[ "Usman A. Zahidi", "Krystian Łukasik", "Grzegorz Cielniak" ]
cs.CV
[ "cs.CV" ]
Usman A. Zahidi, Krystian Łukasik, Grzegorz Cielniak (Lincoln Institute of Agri-food Technology, University of Lincoln, Riseholme Park, Lincoln LN2 2BJ, United Kingdom) The maturity classification of specialty crops such as strawberries and tomatoes is an essential agricultural downstream activity for selective harvesting and quality control (QC) at production and packaging sites. Recent advancements in Deep Learning (DL) have produced encouraging results on color images for maturity classification applications. However, hyperspectral imaging (HSI) outperforms methods based on color vision as it captures, through spectral variation, changes in biological attributes such as the abundance of pigments (anthocyanin and lycopene) and chlorophyll catabolism during the maturity process. Multivariate analysis methods and Convolutional Neural Networks (CNN) deliver promising results; however, a large amount of input data and the associated preprocessing requirements hinder practical application. Conventionally, the reflectance intensity in a given electromagnetic spectrum is employed in estimating fruit maturity. We present a feature extraction method to empirically demonstrate that the peak reflectance in subbands such as 500-670 nm (pigment band) and the wavelength of the peak position, and, contrarily, the trough reflectance and its corresponding wavelength within 671–790 nm (chlorophyll band), are convenient-to-compute yet distinctive features for maturity classification. The proposed feature selection method is beneficial because preprocessing, such as dimensionality reduction, is avoided before every prediction, and the amount of data required for model training is smaller than for State-Of-The-Art (SOTA) methods, which eases deployment. The feature set is designed to capture these traits. The best SOTA methods, among 3D-CNN, 1D-CNN, and SVM, achieve at most 90.0 % accuracy for strawberries and 92.0 % for tomatoes on our dataset. Results show that the proposed method outperforms the SOTA as it yields an accuracy above 98.0 % in strawberry and 96.0 % in tomato classification. A comparative analysis of the time efficiency of these methods is also conducted, which shows that the proposed method performs prediction at 13 Frames Per Second (FPS) compared to the maximum 1.16 FPS attained by the full-spectrum SVM classifier. Keywords: Robotic harvesting, Strawberry, Tomato, Maturity estimation, Hyperspectral imaging. § INTRODUCTION Strawberry is a high-value, nutritious fruit of considerable economic importance. It is non-climacteric, which implies that it does not continue to ripen after harvest. Moreover, its shelf life is limited. Therefore, it is vital to ascertain the right time for harvesting. Conversely, tomatoes are medium-value, climacteric fruits with larger production and consumption scales. Moreover, the QC procedures at distributors and supermarkets ensure that anomalous fruits do not reach the end-user. Thus, maturity stage estimation in both fruits is essential. Due to the sensitivity of the task, expert human harvesters and QC personnel are employed worldwide. However, many key production areas (e.g., UK, US, NL, ES, JP) are now facing severe labor shortages; therefore, developing alternative robotic harvesting and packaging solutions is inevitable. Identifying the correct maturity stage of strawberries and tomatoes is crucial.
The maturity classification algorithms have been the topic of interest in the research community for decades <cit.> due to their application in selective harvesting <cit.> and quality control procedures <cit.>. Many deep learning and feature-based approaches address this problem by employing color and hyperspectral vision. §.§ Color Vision Contemporary state-of-the-art DL is employed to estimate the maturity stages of specialty crops with feature-based approaches using color images. The application in strawberry maturity classification includes several research contributions such as <cit.> applied YOLOv3 model for classifying eight maturity levels with a mean Average Precision (mAP) of 0.89. <cit.> compared feature-based and CNN classification and reported that CNNs better classify unripe strawberries. However, they found that penalized multinomial regression has an accuracy of 86.4 %. Similarly, <cit.> compared several feature-based approaches with CNN and reported the supremacy of CNNs with 88.0 % accuracy.<cit.> used a dark channel enhancement algorithm to preprocess strawberry images taken at night and finally achieved a ripeness recognition accuracy of over 90.0 % on YOLOv5. <cit.> assessed maturity classification in ten levels by several pre-trained CNNs and reported EfficientNetB2 as the best classifier with 73.0 % accuracy. Similarly, <cit.> applied SE-YOLOv3-MobileNetV1 network to classify tomato maturity with higher speed and mAP of 0.97. <cit.> employed Mask-RCNN-based instance segmentation on strawberry with Resnet-50 backbone on 100 images that achieved an average detection precision rate of 95.8 %, a recall rate of 95.4 %, and a mean intersection over union (mIoU) rate of 89.8 %. Similarly, in the domain of tomato classification, many DL models are applied, such as <cit.> proposed a convolutional transformer for tomato maturity classification on color images surpassing the state-of-the-art on common benchmark datasets. <cit.> employed Yolov5m and Yolov5m combined with ResNet-50, ResNet-101, and EfficientNet-B0, respectively, for classifying tomato fruit on the vine into three classes: ripe, immature, and damaged with an accuracy of 97 %. <cit.> developed a Faster R-CNN model named MatDet for tomato maturity detection, which uses ResNet-50 as the backbone and RoIAlign to obtain more precise bounding boxes and a Path Aggregation Network to address the difficulty of detecting tomato maturity in complex scenarios, their results report mAP of 96.0 %.<cit.> employed VGG, Inception, and ResNet after transfer learning with their dataset for tomato maturity estimation and reported 97.0 % classification accuracy. Following a feature-based approach <cit.> applied fuzzy classification architecture on the RGB color model with descriptors to achieve the classification result with MSE of 0.537 × 10^-3. A few classical feature-based classification approaches are also employed, such as <cit.> utilized L^* and a^* features to estimate six stages of ripening and two stages of storage for their model in TomatoScan, which was also able to determine the ripening stage of tomatoes with an overall accuracy of 75.0 %. <cit.> proposes a Fuzzy Rule-Based Classification approach to estimate the ripeness of tomatoes based on color, which achieved approximately 94.0 % accuracy in classifying six USDA standardized classes. 
<cit.> applied Principal Components Analysis (PCA) in addition to Support Vector Machine (SVM) and Linear Discriminant Analysis algorithms for feature extraction and classification, respectively, and reported 84.0 % classification accuracy. §.§ Hyperspectral Vision Although deep learning models produce excellent results with accuracy approaching up to 90.0 % in RGB images, the annotation process is tedious and time-consuming as strawberries have partial regions of varying maturity stages, which may cause contradictive data in instance classification. Furthermore, researchers typically develop their empirical maturity classes with differing numbers. Therefore, a comparative analysis could not be drawn directly. Like computer vision, Deep learning is applied for crop maturity classification in hyperspectral images. For example, <cit.> developed 1D and 3D residual networks for strawberry hyperspectral data. They reported classification accuracy above 84.0 % in both models. <cit.> first established distinctive wavelengths for strawberry ripeness classification and then applied an AlexNet-based deep learning model on their empirical maturity classes with an accuracy of 98.6 %. Several alternative approaches, such as SVM, PLS, and PCA, are proven to get promising results for strawberry ripeness classification. <cit.> investigated several traits of strawberries, such as water content, solid soluble content, firmness, and ripeness, on data acquired from a set of 43 strawberries. They obtained data through a spectro-radiometer within 300 nm to 2500 nm. Ripeness classification was performed in full-spectrum by SVM and achieved up to 98.0 % accuracy. <cit.> evaluated strawberry ripeness by HSI systems having focus within two spectral windows of 380 nm-1030 and 874 nm–1734 nm. They defined three classes (ripe, mid-ripe, and unripe), employed PCA for optimal band selection, eventually classified combined windows data by SVM, and reported 85.0 % accuracy of their method. <cit.> established multispectral indices to estimate strawberry's maturity. <cit.> used PLS and SVM to assess strawberry ripeness with an accuracy of 96.7 %. In tomato maturity classification <cit.> developed a support vector classifier model to determine tomato maturity and demonstrated the classification accuracy using the characteristic wavelength to achieve an accuracy of 95.8 %. <cit.> developed a semi-supervised algorithm based on Laplacian score and spectral information divergence and the sparse representation model based on class probability for classification with an accuracy of up to 97.0 %. <cit.> employed random forest, PLS, and recurrent neural networks (RNN) to develop models for predicting the maturity level. Results showed that the RNN model had a classification accuracy of 40.0 % higher than random forest and 17.0 % higher than PLS. In the prediction of quality parameters, RNN models had the highest R^2 value greater than 0.87, followed by PLS and random forest models. A major drawback of hyperspectral imaging-based approaches is the lack of a standard benchmark for evaluation and comparison. Moreover, the reported dataset's sample size is also smaller, making it infeasible for DL approaches to compete with multivariate alternatives. §.§ Proposed Approach The HSI's rich spectroscopic data helps us understand the biological basis of the fruit maturity process. Numerous biochemical changes occur during fruit ripening. Over 50 polypeptides show prominent changes at different stages of fruit development. 
Several specific enzymes associated with membranes, anthocyanin synthesis, and sucrose metabolism have been shown to increase in the strawberry during ripening along with chlorophyll decomposition <cit.>. Similarly, lycopene synthesis and chlorophyll catabolism are related to tomato ripening <cit.>. Levels of both sugars and acids vary significantly in ripe fruit, depending on cultivars and developmental conditions; therefore, these traits are not distinctive for maturity classification. Chlorophyll breakdown is generally an essential catabolic process of leaf senescence and fruit ripening <cit.>. The physiological activities during the strawberry and tomato ripening process indicate increased pigments such as anthocyanin and lycopene and decreased chlorophyll with ripening. The pigments have strong reflectance within their bands; therefore, extremum reflectance intensities and their position features imply abundance. We empirically demonstrate reflectance intensities at extremum points in the sub-spectrum of the VNIR range, and their respective positional information in wavelength is essential for classification based on the pigment and chlorophyll abundance. Therefore, we construct a set of statistical features and measure them in varying bandwidths in an iterative feature extraction algorithm that discovers the best feature set and locates their corresponding bands. On the contrary, the SOTA methods rely entirely on reflectance intensities and ignore the extremum reflectance's positional information. The specific contributions of our research are: * We propose a feature extraction technique that employs the peak reflectance, particularly its position (wavelength) within the pigment band (510 – 670 nm), to estimate the change in pigment abundance in strawberries. Contrarily, the minimum reflectance and its position within the chlorophyll band (671 – 790 nm) correlate with the degree of chlorophyll decomposition. Therefore, these features are sufficient to achieve high-quality strawberry maturity classification. * Similarly, for tomatoes, we propose the peak and trough positions within a relatively small pigment band (510 – 650 nm), peak and trough reflectance together with their positions within a narrower chlorophyll band (651 – 770 nm) correlate with chlorophyll catabolism. We also show that reflectance data within the visible spectrum, i.e., 450-780 nm, is sufficient for the maturity classification of both fruits. Moreover, it is demonstrated that once the features are selected from the proposed method, any non-linear classifier can achieve equally good results. A comparative analysis of the proposed method with SVM and CNN models concludes that the proposed method is superior to the SOTA in accuracy and Cohen-Kappa (κ). A comparative time performance analysis of prediction speed in FPS is also performed with SOTA methods. * A dataset comprising more than 620 and 540 annotated VNIR (450–850 nm) hyperspectral images of strawberries and tomatoes is made public to enable direct comparisons and benchmarking for further research on this topic. To the best of our knowledge, no public hyperspectral dataset of this size is available for maturity estimation research. The code and data are available from <cit.> and <cit.>, respectively. § MATERIALS AND METHODS The proposed method selects predefined features from a combination of variable bandwidth in hyperspectral reflectance data. These predefined features comprise statistical measures of the reflectance signature within variable subbands. 
These features are designed for experimental analysis after understanding the underlying biological phenomena, such as the dependence of the ripening process on pigments and chlorophyll. For benchmarking on our dataset, we also tested SOTA methods based on CNN and SVM classifiers that take full-spectrum voxel data for classification. More than 1100 fruit samples were imaged and annotated by harvesting experts. The subsequent section describes the dataset creation, data analysis, preprocessing, distributions, and similarity analysis. Section <ref> describes the feature extraction method and the model architecture used in this process. Eventually, we include details about the baseline CNN models and their architectures used for comparative analyses. §.§ Image Acquisition and Processing The images were taken by a Visible and Near-Infrared line-scan hyperspectral imaging system shown in Fig. <ref> (c). This system consisted of a spectrograph from LGL AB, Sweden, a charge-coupled device (CCD) camera from Basler, four halogen lamps, a linear actuator, and a computer interface. The camera has a slit sampling of 368 pixels with a spectral resolution of 2.5 nm in dual binning. We constructed images by scanning 120 lines, so each image has a spatial resolution of 120×368 px and a spectral resolution of 400 bands between 450 and 850 nm. The images were spectrally cropped to avoid sensor noise due to the lower spectral response at the extreme edges of the band range. The exposure time was set to 150 ms, and the distance between the lens and the translating platform was 22 cm. Because of the non-uniform luminance distribution and the camera's dark current, the hyperspectral images required calibration before spectral reflectance extraction. The raw images were corrected using Eq. <ref>, R=(I_raw-I_d)/(I_w-I_d), where R is the spectral reflectance, I_raw is the raw image intensity, and I_w and I_d are the white and dark references, respectively. The light scattering correction was applied using Multiplicative Scattering Correction (MSC) <cit.>, and the Savitzky–Golay filter smoothed the reflectance. The imaging system is connected to a personal computer with an Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz, an eight-core machine equipped with 64 GB memory, and an Nvidia RTX-4090 GPU with 12 GB GDDR memory. The system runs Ubuntu 22.04 OS and hosts Nvidia CUDA 11.4. The model prediction FPS is measured on this machine for all models. §.§ Dataset Preparation and Analysis There are no official standards for the classification of the various maturity stages of strawberries. Therefore, the classification varies from grower to grower; furthermore, supermarkets follow their own QC specifications for identifying unripe fruits. The traditional method of judging strawberry maturity stages manually assesses the strawberries' appearance, color, texture, flavor, and firmness, which is time-consuming <cit.>. Based on expert harvesters' opinions, we define seven maturity classes for strawberries ranging from Green to Overripe. The details of individual classes, their hyperspectral signatures, example class instance images, and their distribution in our dataset are given in Fig. <ref>. Our dataset comprises 624 Driscoll's strawberries of the Katrina and Zara types, harvested during fall 2021 and summer 2023 and grown in the research strawberry farms at the University of Lincoln, as shown in Fig. <ref> (a). The types and distribution of our dataset are given in Tab. <ref>.
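Returning to the calibration pipeline described above (Eq. <ref>, MSC, and Savitzky–Golay smoothing), a minimal NumPy/SciPy sketch is given below. It is illustrative only; the array shapes and filter parameters are assumptions rather than the exact settings used for our data.
import numpy as np
from scipy.signal import savgol_filter

def calibrate_reflectance(raw, white, dark):
    # R = (I_raw - I_d) / (I_w - I_d); inputs assumed to be (H, W, B) arrays
    return (raw - dark) / (white - dark + 1e-8)

def msc(spectra):
    # Multiplicative Scattering Correction against the mean spectrum (standard formulation);
    # spectra assumed to have shape (N, B)
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, deg=1)  # fit s ~ slope * ref + intercept
        corrected[i] = (s - intercept) / slope
    return corrected

def smooth(spectra, window=11, poly=3):
    # Savitzky-Golay smoothing along the spectral axis (window and order are assumed values)
    return savgol_filter(spectra, window_length=window, polyorder=poly, axis=-1)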
Contrarily, the United States Department of Agriculture (USDA) officially established maturity classification standards for tomatoes in <cit.>. Following this standard, there are six classes, from Green to Red, in our dataset. The tomato dataset comprises 544 images. The ground truth (GT) of maturity classes for both strawberry and tomato samples was annotated manually by the expert harvesters on an instance basis. GT construction of the tomato images was performed according to the USDA specifications described in the document to ensure conformity. Tomatoes were sourced from Glasshouse at FlavourFresh Salad growers, as shown in Fig. <ref> (b). The images were taken in a lab environment, and the background and leaves were removed by adaptive thresholding of Normalized Difference Vegetation Index NDVI <cit.>. The mean spectra for the fruit region in images are labeled as individual fruit's GT spectra after removing the leaves, as shown in Fig. <ref>. The mean reflectance of all the fruit spectra belonging to a particular class is called class GT spectra, as shown in Fig. <ref>. These class spectra show a pattern of peak shift in pigment bands and reflectance hike in the chlorophyll band of both fruits. The number of both fruit samples is cumulatively above 1100. In the strawberry dataset, the instances for all classes have a decent sample size; however, the White, Pink, and Overripe are relatively lower. Similarly, for the tomato dataset, the frequency is relatively low in the Breaker and Turning classes, as shown in Fig. <ref>. The band-to-band covariance of hyperspectral data highlights the hotspot band regions of higher information, shown in Fig. <ref> for instance-level spectra of strawberries and tomatoes. The description of the band covariance method in HSI can be found at <cit.>. The value of covariance is similar in both fruits. The relative distribution of band covariance shows some similarity in bands where higher covariance appears between 520 and 600 nm. The tomato dataset also shows high covariance between 630 and 730 nm, which is relatively lower in strawberries. Both fruits are similar in color; however, there are a few differences, such as the strawberry has more textural information due to variable maturity regions, the presence of achenes, and a relatively diffuse Bi-directional Reflection Distribution Function (BRDF), <cit.>. The tomato possesses a relatively specular BRDF and has smoother and relatively uniform color distribution on its skin. The clustering of higher covariance in two regions gives clues about the changes in pigment and chlorophyll in dataset samples. §.§ Method The essential characteristic of the proposed method is feature selection and extraction within various reflectance subbands. Ideally, the hyperspectral unmixing of pigment, chlorophyll endmembers, and their abundance map by employing <cit.>,<cit.>, and <cit.> could give the required abundance information for maturity classification. However, this is computationally expensive and practically infeasible for deployment. Our feature extraction process finds minimal features correlated with the pigment and chlorophyll abundance, making it suitable for deployment. §.§.§ Feature Extraction A hyperspectral image ℐ∈ℝ^H× W× B consists of d=H×W spectra s∈ℝ^d × B represented as vectors of length B, where H,W and B represent height, width and bands, respectively. A physical scene renders its texture due to the variability of constituent materials called endmembers e. 
The Linear Mixing Model (LMM) represents an image by the pixel-wise spatial distribution of abundances a corresponding to the spectral set of e. The pixel reflectance s_i in the LMM is given in Eq. <ref>, s_i=∑_j=1^J a_ij e_j + n, where n represents additive white noise, i indexes each voxel of the HSI image, and J is the number of endmember spectra indexed by j. The correlation of fruit maturity M_F with pigment (anthocyanin and lycopene) abundance a_p and chlorophyll abundance a_c is established in the biological literature, <cit.> and <cit.>, as referred to earlier. Hyperspectral unmixing could be applied to extract these abundances for the corresponding pigment and chlorophyll endmembers. However, in our case, it requires the full VNIR spectral resolution between 450 and 850 nm. We demonstrate that the abundances a_p and a_c correlate with fewer statistical measures within bands b such that b ⊆ B. Let R_F be the spectral reflectance of the fruit; then the feature vector ℱ_M is defined in Eq. <ref>. The feature vector comprises the reflectance extremum values, which capture the peak and trough reflectances, and argmax and argmin, which determine the wavelengths of the extrema, given in μm to keep the values between 0 and 1. The remaining statistical measures are the mean, median, and area under R_F within the band range B, together with the higher-order statistics, skewness and kurtosis. ℱ_M(R_F,B)=[max(R_F), min(R_F), argmax(R_F), argmin(R_F), mean(R_F), median(R_F), area(R_F), skewness(R_F), kurtosis(R_F)]^T In our experiments, B=400 nm is divided into twenty equal bands, each with a width of b_w=20 nm, as shown in Fig. <ref>. The selection of a fixed bandwidth is based on a trade-off between accuracy and computational cost; the larger the bandwidth, the less time the search consumes. However, a smaller bandwidth would give a more precise window size, and larger ones may include some unnecessary wavelengths. We define b^L as a variable-length union of successive bands such that the superscript L=n × b_w, where 1 ≤ n ≤ 20. We also define a binary mask ℳ over all possible combinations of the elements of ℱ_M, and D as a collection of d, i.e., D=<d>, so that D_L is the collection of variable datasets of fruit spectra for bands b^L, as defined in Eq. <ref>, D_L(ℳ,R_F,b^L)=ℳ×ℱ_M(R_F,b^L). We construct a fully connected neural network (FCN) whose architecture is shown in Fig. <ref> (b). This network uses the categorical cross-entropy loss function ℒ for k(M_F), or conventionally k, classes, as given in Eq. <ref>, ℒ(D_L)=-∑_i=1^k t_i log(p_i), where t_i is the ground truth label and p_i is the Softmax probability for the i^th class, and the input vector of p belongs to D_L. The feature extraction method seeks the best subset of features given in Eq. <ref> and the subband ranges b^L. The feature search is implemented by applying all possible combination masks denoted as ℳ. The search starts from the first band and increments the width by b_w in each iteration, such that the bandwidth is b^L_i for the i^th iteration, until it reaches the maximum band of the spectrum, as shown in Fig. <ref> (a). The spectral dimension of dataset d is sliced according to b^L_i, and the feature vector ℱ_Mi of the iteration is computed and multiplied with the iteration mask ℳ_i before it is passed to the FCN model for classification and evaluation. A list of the iteration attributes, such as bandwidth, mask, test accuracy, and loss, is maintained as shown in Fig. <ref>.
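As a minimal sketch, the per-subband feature vector ℱ_M and the binary masking that forms D_L could be computed as follows; the conversion of wavelengths to micrometres follows the description above, while the helper names and array handling are our own assumptions.
import numpy as np
from scipy.stats import skew, kurtosis

def feature_vector(reflectance, wavelengths_nm):
    # F_M for one spectrum restricted to a subband b^L; wavelengths are given in micrometres
    r = np.asarray(reflectance)
    wl_um = np.asarray(wavelengths_nm) / 1000.0
    i_max, i_min = np.argmax(r), np.argmin(r)
    return np.array([
        r[i_max],            # max
        r[i_min],            # min
        wl_um[i_max],        # argmax (wavelength of the peak)
        wl_um[i_min],        # argmin (wavelength of the trough)
        r.mean(),            # mean
        np.median(r),        # median
        np.trapz(r, wl_um),  # area under the reflectance curve
        skew(r),             # skewness
        kurtosis(r),         # kurtosis
    ])

def masked_features(reflectance, wavelengths_nm, mask):
    # apply one of the 2^9 - 1 binary masks M to select a feature subset
    f = feature_vector(reflectance, wavelengths_nm)
    return f[np.asarray(mask, dtype=bool)]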
An exhaustive search is performed, and the cost function is minimized for all the datasets belonging to D_L; the feature set with maximum accuracy is selected, and its list is sorted in ascending order. Top q (default=10), features are selected from the list as per user configuration. The second pass picks all mutually exclusive or nominally overlapping in terms of b^L and the highest accuracy. Finally, datasets with minimal ℒ(D_L) are chosen from the list, and their mask's nonzero features are concatenated before model training in the next step. An example of two-band concatenation and their respective masks and accuracies are illustrated by example in Fig. <ref>. It is shown that the pigment band only employs two elements of the feature vector ℱ_M, namely the minimum reflectance of the band and the wavelength of the maximum reflectance. The chlorophyll band, however, requires four features at most: maximum, minimum reflectance, peak, and trough wavelength position. Fig. <ref> gives the feature extraction search flow; the search algorithm takes the full-spectrum data, ranging between 450 and 850 nm. The fixed width b_w=20 nm, so the spectrum has 20 fixed subbands. The search starts by selecting the first (left-most) bands as shown in the entity "Generate Combination of subbands" in Fig. <ref> (a). The feature vector of reflectance in this subband is calculated, and then a mask is applied. A train and test dataset for the feature vectors is created on which classification is performed, and an accuracy list is updated for the iteration. In the next iteration, the bandwidth is incremented by including the next consecutive band, the first pass of search. In the second pass, a similar search starts from the second band, and so on. This search is performed for all (2^9-1=511) feature masks, excluding the zero mask. This search outputs fewer combinations of bands with larger sizes than b_w. These bands are stacked together, and the iteration continues. During the second run with larger bandwidths, which is shown as "Coarse" in Fig. <ref> (a), the difference is that bands are not merged; that is, in the first pass, the features for the first and the second bands are first calculated separately and then concatenated together. The computation could explode as the process has to be performed for masked features of two or more bands. We only selected the best performers during the "Fine" bands search to counter this problem. In our case, the combination of two bands in such a way yielded a performance of above 95.0 % in both soft fruits, which is our stopping criteria. The number of iterations for model training was set to 100 epochs, and the batch size was 32, which was found through hyperparameter optimization. The final output from the algorithm, in our case, is the set of bands with minimum or no overlaps, and together, their feature data produce classification accuracy above the required performance threshold. §.§.§ Baseline Classifiers A comparative analysis of the proposed method is performed with CNN and SVM models. In this section, we describe two CNN models; both incorporate full-spectrum voxel data. The first model reshapes the input data into a 3D cube and then applies 3D convolution to the cube. This reshaping brings changes in the neighborhood such that distant data bands get closer enough to each other and lie inside the convolution kernel stride, making the potentially higher covariant bands more significant after convolution and pooling operations. 
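A minimal sketch of this spectral-to-cube restructuring is shown below; the zero-padding of the 400-band spectrum to 512 values and the (8 × 8 × 8) cube shape follow the description in the next paragraph, while the batch handling is an assumption.
import numpy as np

def pack_spectrum_to_cube(spectrum, cube_shape=(8, 8, 8)):
    # zero-pad the 400-band spectrum to 512 values and reshape it into an 8x8x8 cube,
    # which brings spectrally distant bands into the same 3D convolution neighbourhood
    target = int(np.prod(cube_shape))
    padded = np.zeros(target, dtype=np.asarray(spectrum).dtype)
    padded[:len(spectrum)] = spectrum  # pad after the last band
    return padded.reshape(cube_shape)

# Example: a batch of spectra (N, 400) becomes (N, 8, 8, 8) for the Conv3D layers.
batch = np.random.rand(32, 400)
cubes = np.stack([pack_spectrum_to_cube(s) for s in batch])
assert cubes.shape == (32, 8, 8, 8)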
Our images have a spectral resolution of 400 bands, which is extended to 512 by zero-padding after the last band to facilitate the data restructuring. The 512-length hyperspectral vector is packed into an (8 × 8 × 8) 3D cube, as shown in Fig. <ref> (a). The model comprises a configurable number of convolution (Conv3D) layers before the max pooling (MP3D) layer, after which the output is flattened, and an FCN is applied with a softmax output layer. Similarly, a 1D-CNN architecture is also employed, whose architecture is shown in Fig. <ref> (b). The model takes in the signature vectors and performs convolution through a set of convolutional layers; after batch normalization, max-pooling is applied, and the result is passed through flattening and dropout layers before being fed into the dense layers. The output of the model is in the categorical class format. However, the number of output classes for strawberry and tomato is different, which makes the output size different, as shown in Fig. <ref>. The activation function in both the 1D and 3D models is ReLU, except for the last layer, which has Softmax activation. Finally, an SVM classifier with a polynomial kernel of degree 3 is also employed. Like the CNN models, the SVM classifier also takes the full-spectrum input data. §.§.§ Performance Metrics Accuracy scores are employed as performance metrics for the proposed feature extraction process, shown in Fig. <ref>. In addition to accuracy, which is sensitive to imbalanced class distributions <cit.>, we chose to express the performance through the κ coefficient <cit.>; therefore, the model used in feature extraction based on accuracy is later evaluated by κ. The accuracy is given as Accuracy=(TP+TN)/(TP+TN+FP+FN), where TP is True Positive, FP is False Positive, FN is False Negative, and TN is True Negative. The κ metric accounts for the probability of chance agreement between the classifier and the ground truth: κ measures the agreement between two evaluators, each dividing N items into mutually exclusive categories C. The definition of κ is given as κ=1-(1-p_o)/(1-p_e), where p_o is the relative agreement of observations between the rating systems and p_e is the hypothetical probability of random agreement, calculated from the observed data as the probability of each observer assigning each category by chance. If the raters are in complete agreement, then κ=1; if their agreement is no better than chance, then κ=0. These metrics measure the classification performance for the strawberry and tomato datasets with GT annotations. § RESULTS This section presents the classification results of strawberry and tomato maturity for various feature vector sets and bandwidths. The proposed method is compared with 1D-CNN, 3D-CNN, and SVM classifiers. §.§ Feature Selection A preliminary analysis of the performance of individual features is conducted to ascertain the prominent ones through a combinatoric search over features, their masks, and reflectance subbands. Tab. <ref> lists the features in the order of the classification model's test accuracy for strawberries. Argmax and maximum within the pigment band are the top two performers with 91.7 % and 71.4 % accuracy, respectively. Similarly, minimum and argmin are the best within the chlorophyll band, with an accuracy of 55.0 % and 48.3 %, respectively. The same features perform poorly in the adjacent band ranges. The results show that the features within the first range approximate the change in pigment abundance, while those in the second range track chlorophyll decomposition.
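For completeness, the two metrics defined in the Performance Metrics subsection can be computed from a confusion matrix as in the generic sketch below; this is an illustration, not the evaluation code used for the reported results.
import numpy as np

def accuracy_and_kappa(conf):
    # conf[i, j] counts samples of true class i predicted as class j (k x k matrix)
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    p_o = np.trace(conf) / n                                   # observed agreement (accuracy)
    p_e = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = 1.0 - (1.0 - p_o) / (1.0 - p_e)
    return p_o, kappa

# Example with an illustrative 3-class confusion matrix (numbers are hypothetical)
cm = [[50, 2, 0], [3, 45, 4], [0, 5, 41]]
acc, kappa = accuracy_and_kappa(cm)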
The feature selection steps are similar for both strawberry and tomato datasets. The best individual feature's performance and corresponding band ranges are given in Tab. <ref>, ordered by accuracy. Similar to the strawberry dataset, there are two prominent subband regions; one ranges between 470 and 670 nm, and the other lies between 650 and 830 nm. The results of individual feature performance are given in Tab. <ref>. Argmax overperforms other features within the pigment band with an accuracy of 90.6 %, while the area under the reflectance signature within the Chlorophyll band performs better than others with an accuracy of 58.3 %. The variance in band ranges of the tomatoes is higher than that of the strawberries. The tomato features have a median accuracy of 75.0 % compared to 64.7 % in the pigment band, which implies better individual performance in tomatoes. Similarly, the median accuracy is 55.5 % compared to strawberry's 30.9 %. A combination of features is explored; Tab. <ref> lists the combination of best-performing features and their corresponding bands by accuracy in strawberries. We observe the presence of argmax in all top combinations within the range of 490-670 nm. Maximum and argmax are combined to yield the best-performing 95.2 % within its 510-670 nm subband. Similarly, within the 650–790 nm range, all listed combinations bear argmin and minimum. The maximum accuracy approaches 80.0 % for the combination of minimum and argmin; all other combinations constitute more features and yet underperform compared to it. Like strawberries, the combination of features in tomatoes contains argmax in top performing combination within the pigment band as shown in Tab. <ref>, and the minimum is common in all top performers from the Chlorophyll band. Argmax and argmin are the best feature combinations in the 510–650 nm pigment subband, with an accuracy of 94.3 %. The Chlorophyll band has a set of max, min, argmax, and argmin within the subband 650–770 nm, which achieves 72.2 % accuracy. The next level of feature combination for the strawberry classification yields results listed in Tables <ref> and <ref>. It shows higher dependence on maximum and argmax in the pigment band and various features within the chlorophyll band. The best accuracy of 98.7 % is achieved by combining the maximum and argmax selected from pigment and minimum, maximum, argmax, argmin, mean, median, and area under the curve from the chlorophyll band. The second best-performing combination approaches 98.0 % accuracy, utilizing only two features as above from pigment bands, combined with the minimum and argmin from the chlorophyll band. Due to the minimum number of features and the accuracy of approaching the best one, we select the feature set given in the second row of Tab. <ref> for classification, as it would be cost-effective to build and deploy. The maximum accuracy of 96.8 % is achieved in tomatoes with the argmax and argmin selected from the pigment bands and combined with the maximum, minimum, argmax, and argmin from the chlorophyll band as given in Tab. <ref>. The second combination with the highest accuracy is close to 96.5 % by cumulative features given in the first row of Tab. <ref>. Fig. <ref> and <ref> show the classification results of both fruits in the heat plot for their respective selected features in Tab. <ref> and <ref>. 
In strawberries, the chosen features, such as max and argmax in the pigment band, are constant for the top three results, whereas in tomatoes, the similar common features are argmax and argmin. However, the Chlorophyll bands have more than two features in strawberries and tomatoes. Later, the features for both bands are concatenated together to create a single vector in all cases. The best maturity classification has an accuracy of 98.7 % in strawberries. The nominal error appears due to confusion between Late-Red and Overripe, Red and Late-Red, Pink and Late-Pink, and Green and White classes. White and Red strawberries are classified ideally in this feature vector. The second row of Tab. <ref> lists the next feature vector, which merges pigment features with min, argmin from the Chlorophyll bands and yields an accuracy of 98.0 %. Furthermore, these four attributes are related to two single wavelengths in given bands compared to more aggregated features requiring complete hyperspectral data for computing mean and median. Our selection criteria for the model are based on higher accuracy and the least number of features. Therefore, we select the feature set in the second row of Tab. <ref>. The same feature set is classified with an SVM classifier. It achieved an accuracy of 98.0 %. This is shown in Fig. <ref> (b). The last row comprises max, argmax max, min, argmax, and argmin in the Chlorophyll band and has an accuracy of 97.5 %. Therefore, it is not considered further. Similarly, for tomatoes, the classification results for the selected characteristics in Tab. <ref> are shown in Fig. <ref>. The values of the pigment band features, such as argmax and argmin, are the same for the three top-performing results. Similarly, the chlorophyll band has min and max common in the top three. Fig. <ref> (a) shows the feature vector results, including the chlorophyll band's maximum, minimum, and argmin. The dual-band features are concatenated to create a single vector in all cases. The classification is 96.8 % accurate. Nominal errors appear due to classification errors between the Pink, Light-Red, and Red classes. The poor performance is in the classification of Red tomatoes, where 46.0 % cases are classified as Light-Red and 3.0 % as Pink. Green, Breaker, and Turning have accuracy between 99.0 % and 100.0 %. Fig. <ref> (b) shows results of the same selected feature, but the classification model used is SVM instead of FCN. It achieved an accuracy of 97.7 % as shown in Fig. <ref> (b). Therefore, FCN and SVM perform equally well on selected features, with SVM having a slight edge over the FCN classifier. The second row in Tab. <ref> shows the next characteristic vector, which merges pigment characteristics with minimum argmin from the chlorophyll band and has 96.5 % accuracy. The third row in Tab. <ref> shows higher classification errors in distinguishing between the last three classes. The characteristic vector includes maximum, minimum, argmax, and mean in the chlorophyll band, and its accuracy is 95.1 %. The selected features for both fruits were employed to predict background-subtracted hyperspectral images, as depicted in Fig. <ref>. It is visually demonstrated that pixel-level classification is plausible for future investigations. The class map ranges from one to six or seven ripeness levels, depending on the fruit, with zero being the background. §.§ Baseline Methods A comparison of the proposed method with 1D and 3D CNN models and SVM is included in this section. 
The CNN models used for classification are custom-built; their architectures are covered in Section <ref>. A summary of results according to our performance metric is shown in Tab. <ref>. The prediction FPS for images in our dataset is also included. An estimate of FPS is also computed for Real-time HSI (RT-HSI), i.e., Ultris S5 resolution (290×275×51), which has significantly lower spectral resolution than our camera. The 1D-CNN has a classification accuracy of 89.4 % in strawberries. The largest confusion is between the Pink and White, White and Green, Red and Late-Red, and Overripe and Late-Red classes. The best performance is in the Green and Late-Pink classes. Performance drops slightly for tomato maturity classification; the overall accuracy falls to 88.3 %. The most significant classification errors are between the Turning and Breaker classes and between the Pink and Turning classes. The 3D-CNN model improves strawberry classification to an accuracy of 89.8 % compared to its 1D counterpart. The confused classes are similar to those of the 1D-CNN, except for slight confusion in the Late-Red class. An improvement is also observed for tomatoes; for example, accuracy increases to 90.3 %. Contrary to the 1D-CNN results, the worst score is in the Overripe class, which performs decently in the 1D case. Although the 3D-CNN outperformed its 1D counterpart in strawberries and tomatoes, its accuracy difference from the SVM is slim. The SVM accuracy is 89.3 %. The SVM performs worst in classifying the White and Pink classes but performs well in the ripe classes, i.e., Red and above. The SVM performs the best among the SOTA methods in tomatoes with an accuracy of 91.6 %. The Overripe class has the highest error in the SVM, followed by the White one. The proposed method outperforms all these methods by a significant accuracy margin of 9.0–11.0 % in strawberries and 5.0–6.0 % in tomatoes. Although the classification error of the proposed method is marginal in strawberries, it performs worse at differentiating the Red and Light-Red classes for tomatoes, to the extent that the accuracy of the Red class is 53.0 %, as shown in Fig. <ref>. Across the CNN and SVM classifiers for strawberry and tomato maturity classification, the highest errors are in the White, Pink, and Overripe classes. However, the SVM error is distributed across the White, Pink, Late-Pink, and Red classes in strawberries and concentrates only on the Breaker class in tomatoes.

§ DISCUSSION This paper empirically demonstrates that a subset of features from hyperspectral images is sufficient to establish a maturity classification system for strawberries and tomatoes. A comprehensive feature extraction algorithm was developed, which seeks the best combination of variable bandwidths and the predefined feature vectors associated with them, such as reflectance values at the peak or trough, their corresponding wavelength values, and other statistical measures aggregated across the subband reflectance signature. It is shown that the wavelength position of extrema in different bands is vital information for the maturity classification problem. Moreover, the reflectance intensity at these points is essential supplementary information that improves the model accuracy when combined with the extremum position data. We employed several statistical features in our analysis but observed that higher-order measures such as skewness and kurtosis underperformed.
The most informative measures were the max, min, argmax, and argmin. Fortunately, these prominent features are each related to a single wavelength, making the feature selection more straightforward. It should be noted that although we found two key wavelengths, they vary from pixel to pixel; therefore, hyperspectral data is still required. A comparative analysis was performed against CNN models and an SVM trained on full-spectrum reflectance data. The 1D and 3D CNN models constructed for this comparison take the full-spectrum data as input for training and testing. Despite the rich spectral input to these models, the features extracted by the proposed method significantly outperform them on both fruits. The analysis of the results shows that strawberries have higher confusion between the White and Pink classes in all classification models except the proposed method. For tomatoes, increased classification errors appear in the Breaker class for all models except the proposed one, which instead performs poorly on the Red class. Overall, the results show that the features selected by our method perform equally well with any non-linear classifier. Despite having the same level of accuracy, the SVM with a polynomial kernel has higher computational efficiency than the FCN model employed in our proposed method. Hyperspectral imaging typically entails a relatively low image acquisition speed; therefore, fewer images are available than with conventional color imaging. The proposed method requires a relatively small amount of data for classification compared to SOTA DL methods. The selected features for the given fruits also require less preprocessing than is typically necessary for full-spectrum reflectance input. This is evident in the runtime comparison of the proposed method with the SOTA, given in prediction FPS, where its maximum speedup exceeds a factor of eleven compared to its counterparts. Constructing a real-time and cost-effective solution for maturity classification using the proposed method is therefore plausible.

§ CONCLUSIONS AND FUTURE WORK We developed a search-based feature extraction method for strawberry and tomato maturity classification. A fixed number of masked features is calculated for combinatorially varying bandwidths, and their classification results are recorded in each iteration. The search was performed over the full spectrum and over combinations of sub-spectra. Unlike conventional band selection algorithms, we demonstrate that the positions of extremum points in some subbands provide information that is strongly correlated with the maturity stages. The best-performing features and associated bandwidths were selected for both fruits. A comparative analysis was performed with CNN models and SVM. The proposed method outperformed the other classifiers, reaching an accuracy of up to 98.04 % in strawberries compared to the 1D-CNN model's 87.92 % and the 3D-CNN classifier's 89.86 %. In tomatoes, it reaches 97.78 % compared to the 1D-CNN model's 88.33 % and the 3D-CNN classifier's 90.34 %. The SVM classifier achieved 89.37 % and 91.67 % accuracy in strawberries and tomatoes, respectively. It is observed that classes with smaller sample sizes have lower accuracy. The FCN model used for the feature selection process yields accuracy similar to the SVM, i.e., 98.07 % and 97.78 % for strawberries and tomatoes, respectively. This shows that the features are distinctive and sufficient for good classification by any non-linear model. The computational efficiency of all classifiers was also investigated.
The proposed method with an SVM classifier was the best performer, having approximately 13 FPS compared to the next best 1.16 from full-spectrum SVM. All baseline methods required complete hyperspectral voxel reflectance data as input. The trained model was then applied to background removed images to produce a ripeness map for both fruits and calculate the pixel-based ripeness for classification visualization. Eventually, the proposed method reduces the preprocessing requirement and ensures the simplicity of the prediction model, which will ease the deployment process. This would enable us to construct a high-speed, cost-effective solution for maturity classification problems such as selective harvesting and QC in packaging sites. An interesting future work is to develop a regression model to estimate these peak and trough points from multispectral data. This would make the hardware cost-effective and more acceptable for commercial applications. The number of images employed for one type of fruit was around 600. Therefore, the DL models underperformed compared to the proposed method, so another future investigation would roughly quantify the number of images required for the DL model to compete closer to the proposed method. § ACKNOWLEDGEMENTS This work is partly supported by Innovate UK grant 10057282 funding and Research England Expanding Excellence in England for Lincoln Agri-Robotics (LAR). We want to acknowledge the contributions of the University of Lincoln staff, including Sophie Bowers, who curated fruit samples, and Dr. Robert Lloyd and Andrew Ham, who provided support related to laboratory equipment. elsarticle-num
http://arxiv.org/abs/2405.10099v1
20240516135453
Compositional Value Iteration with Pareto Caching
[ "Kazuki Watanabe", "Marck van der Vegt", "Sebastian Junges", "Ichiro Hasuo" ]
cs.LO
[ "cs.LO" ]
National Institute of Informatics, Japan; The Graduate University for Advanced Studies (SOKENDAI), Japan; Radboud University, Nijmegen, the Netherlands

Kazuki Watanabe (1,2; equal contribution), Marck van der Vegt (3,2), Sebastian Junges (3), Ichiro Hasuo (1,2). May 20, 2024.

K.W. and I.H. are supported by ERATO HASUO Metamathematics for Systems Design Project (No. JPMJER1603) and the ASPIRE grant No. JPMJAP2301, JST. K.W. is supported by the JST grants No. JPMJFS2136 and JPMJAX23CU. S.J. is supported by the NWO Veni ProMiSe (222.147).

The de-facto standard approach in MDP verification is based on value iteration (VI). We propose compositional VI, a framework for model checking compositional MDPs, that addresses efficiency while maintaining soundness. Concretely, compositional MDPs naturally arise from the combination of individual components, and their structure can be expressed using, e.g., string diagrams. Towards efficiency, we observe that compositional VI repeatedly verifies individual components. We propose a technique called Pareto caching that allows reusing verification results, even for previously unseen queries. Towards soundness, we present two stopping criteria: one generalizes the optimistic value iteration paradigm and the other uses Pareto caches in conjunction with recent baseline algorithms. Our experimental evaluation shows the promise of the novel algorithm and its variations, and identifies challenges for future work.

§ INTRODUCTION MDP Model Checking and Value Iteration Markov decision processes (MDPs) are the standard model for sequential decision making in stochastic settings. A standard question in the verification of MDPs is: what is the maximal probability that an error state is reached? MDP model checking is an active topic in the formal verification community. Value iteration (VI) <cit.> is an iterative and approximate method whose performance in MDP model checking is well-established <cit.>. Several extensions with soundness have been proposed; they provide, in addition to under-approximations, also over-approximations with a desired precision <cit.>, so that an approximate answer comes with an error bound. These sound algorithms are implemented in mature model checkers such as Prism <cit.>, Modest <cit.>, and Storm <cit.>. Compositional Model Checking Even with these state-of-the-art algorithms, it is a challenge to model check large MDPs efficiently with high precision. Experiments observe that MDPs with more than 10^8 states are too large for those algorithms <cit.>; they simply do not fit in memory. However, such large MDPs often arise as models of complicated stochastic systems, e.g., in the domains of networking and robotics. Furthermore, even small models may be numerically challenging to solve due to their structure <cit.>. Compositional model checking is a promising approach to tackle this scalability challenge. Given a compositional structure of a target system, compositional model checking executes a divide-and-conquer algorithm that avoids loading the entire state space at once, often solving the above memory problem.
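As a reference point for the discussion above, the following is a minimal sketch of value iteration for maximum reachability on a toy MDP, together with the optimistic guess-and-verify step that underlies sound stopping; the toy model, names, and tolerances are ours, and production tools implement the refinements cited above.

# Max-reachability value iteration on a tiny MDP, plus an OVI-style check.
# P[s][a] is a list of (probability, successor) pairs; "goal" is the target.
P = {
    "s0": {"a": [(0.5, "s1"), (0.5, "s2")], "b": [(1.0, "s1")]},
    "s1": {"a": [(0.7, "goal"), (0.3, "s2")]},
    "s2": {"a": [(1.0, "s2")]},   # a sink that never reaches the goal
    "goal": {},
}

def bellman(x):
    # One Bellman backup for maximum reachability of "goal".
    y = {}
    for s, acts in P.items():
        if s == "goal":
            y[s] = 1.0
        elif not acts:
            y[s] = 0.0
        else:
            y[s] = max(sum(p * x[t] for p, t in succ) for succ in acts.values())
    return y

# Plain VI: a monotonically increasing sequence of under-approximations.
x = {s: 0.0 for s in P}
for _ in range(1000):
    x = bellman(x)

# OVI-style step: guess an over-approximation u = l + eps and verify
# bellman(u) <= u pointwise (Park induction).  If the check succeeds,
# u is a sound upper bound and we may stop with precision eps.
eps = 1e-4
u = {s: min(1.0, v + eps) for s, v in x.items()}
bu = bellman(u)
print(x["s0"], all(bu[s] <= u[s] + 1e-12 for s in P))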
Moreover, reusing the model checking results for components can lead to speed-ups by orders of magnitude. Although finding a suitable compositional structure for a given “monolithic” MDP is still open, many systems come with such an a priori compositional structure. For example, such compositional structures are often assumed in robotics and referred to as hierarchical models <cit.>.

[Figure: open MDPs A and B.]

Recently, string diagrams of MDPs <cit.> were introduced as a syntax for compositional model checking: MDPs are extended with (open) entrances and exits (<ref>), and they naturally capture compositional structures by composing such open MDPs (oMDPs) with the algebraic operations of sequential composition and sum ⊕ (illustrated in <ref>). The current paper adopts this formalism; we recall the recent work <cit.> on string diagrams of MDPs in <ref>. See <ref>, where the right-hand sides are simple juxtapositions of graphs (the wires get connected in sequential composition). This makes the formalism focused on sequential (as opposed to parallel) composition. This restriction eases the design of compositional algorithms; yet, the formalism is rich enough to capture the compositional structures of many system models. By exploiting such given compositional structures, compositional probabilistic approaches provide approximations with certain guarantees under certain conditions, and outperform monolithic approaches in such situations.
Current Work: Compositional Value Iteration In this paper, we present a compositional value iteration (CVI) algorithm that solves reachability probabilities of string diagrams of MDPs, operating in a divide-and-conquer manner along compositional structures. Our approximate VI algorithm comes with soundness—it produces error bounds—and exploits compositionality for efficiency. Specifically, for soundness, we lift the recent paradigm of optimistic value iteration (OVI) <cit.> to the current compositional setting. We use it both for local (component-level) model checking and—in one of the two global VI stopping criteria that we present—for providing a global over-approximation. For efficiency, firstly, we adopt a top-down compositional approach where each component is model-checked repeatedly, each time on a different weight w, in a by-need manner. Secondly, in order to suppress repetitive computation on similar weights, we introduce a novel technique of Pareto caching that allows “approximate reuse” of model checking results. This closely relates to multi-objective probabilistic model checking <cit.>, without the explicit goal of building Pareto curves. Our Pareto caching also leads to another (sound) global VI stopping criterion that is based on the approximate bottom-up approach <cit.>. Our algorithm is approximate (unlike the exact one in <cit.>), and top-down (unlike the bottom-up approximate one in <cit.>). Experimental evaluation demonstrates its performance thanks to the combination of these two features. Contributions and Organization  We start with an overview (<ref>) that presents graphical intuitions. After formalizing the problem setting in <ref>, we move on to describe our technical contributions: * compositional value iteration for string diagrams of MDPs where VI is run in a top-down and thus by-need manner (<ref>), * the Pareto caching technique for reusing results for components (<ref>), * two global stopping criteria that ensure soundness (<ref>). We evaluate and discuss our approach through experiments (<ref>), show related work (<ref>), and conclude this paper (<ref>). Notations For a natural number m, we write [m] for {1, …, m}. For a set X, we write 𝒟(X) for the set of distributions on X. For sets X, Y, we write X⊎ Y for their disjoint union and f X⇀ Y for a partial function f from X to Y. § OVERVIEW This section illustrates our take on CVI with so-called Pareto caches using graphical intuitions. We describe MDPs as string diagrams over so-called open MDPs <cit.>. Open MDPs, such as A, B in <ref>, extend MDPs with open ends (entrances and exits). We use two operations and ⊕; see <ref>. That figure also illustrates the bidirectional nature of the formalism: arrows can point left and right; thus acyclic MDPs can create cycles when combined. String diagrams come from category theory (see <cit.>) and they are used in many fields of computer science <cit.>. 
§.§ Approximate Bottom-Up Model Checking

[Figure: the sequential composition of the open MDPs A and B.]

The first compositional model checking algorithm for string diagrams of MDPs is in <cit.>, which is exact. Subsequently, in <cit.>, an approximate compositional model checking algorithm is proposed. This is the basis of our algorithm and we shall review it here. Consider, for illustration, the sequential composition of A and B in <ref>, where the exit o_3 is the target. The algorithm from <cit.> proceeds in the following bottom-up manner. First Step: Model Checking Each Component Firstly, model checking is conducted for the component oMDPs A and B separately, which amounts to identifying an optimal scheduler for each. At this point, however, it is unclear what constitutes an optimal scheduler: in the MDP A in <ref>, let's say the reachability probabilities (RPr^σ_1(i_1→ o_1), RPr^σ_1(i_1→ o_2)) are (0.2,0.7) under a scheduler σ_1, and (0.6,0.2) under another σ_2. One cannot tell which scheduler (σ_1 or σ_2) is better for the global objective (i.e. reaching o_3 in the composition) since B is a black box. Concretely, the context B of A is unknown. Therefore we have to compute all candidates of optimal schedulers, instead of one. This set is given by, for each component C and its entrance i, { σ a scheduler | (RPr^σ(i→ o))_{o : exit of C} is Pareto optimal }.
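This first step can be mimicked on a toy component: enumerate the deterministic memoryless schedulers, compute the vector of exit-reachability probabilities for each, and keep only the Pareto-optimal ones. The toy component below is ours and is set up so that the two candidate points are exactly the example's (0.2, 0.7) and (0.6, 0.2).

from itertools import product

# A toy open MDP with entrance i1 and exits o1, o2 (plus a dead sink).
# P[s][a] is a list of (probability, successor) pairs; exits are sinks.
P = {
    "i1":   {"a": [(1.0, "s")]},
    "s":    {"u": [(0.2, "o1"), (0.7, "o2"), (0.1, "dead")],
             "v": [(0.6, "o1"), (0.2, "o2"), (0.2, "dead")]},
    "dead": {"loop": [(1.0, "dead")]},
    "o1": {}, "o2": {},
}
EXITS = ["o1", "o2"]

def reach_vector(sched, iters=200):
    # Exit-reachability vector (RPr(i1 -> o))_o under a DM scheduler.
    vec = []
    for exit_ in EXITS:
        x = {s: (1.0 if s == exit_ else 0.0) for s in P}
        for _ in range(iters):
            x = {s: (x[s] if not P[s]
                     else sum(p * x[t] for p, t in P[s][sched[s]]))
                 for s in P}
        vec.append(round(x["i1"], 4))
    return tuple(vec)

def dominates(p, q):
    return all(a >= b for a, b in zip(p, q)) and p != q

# Enumerate all DM schedulers (one action choice per non-exit state).
choices = {s: sorted(acts) for s, acts in P.items() if acts}
schedulers = [dict(zip(choices, combo)) for combo in product(*choices.values())]
points = [reach_vector(sd) for sd in schedulers]

pareto = sorted({p for p in points if not any(dominates(q, p) for q in points)})
print(pareto)   # [(0.2, 0.7), (0.6, 0.2)] -- the candidate optimal points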
[Figure: Pareto-optimal points.]

[Figure: approximations (L_{i_1}, U_{i_1}).]

Here the Pareto optimality is a usual notion from multi-objective model checking (e.g. <cit.>); here, it means that there is no scheduler σ' that dominates σ in the sense that RPr^σ(i→ o) ≤ RPr^σ'(i→ o) holds for each o and < holds for some o. The two points from the example can be plotted, see <ref>. The Pareto curve, that is, the set of points (RPr^σ(i→ o))_o for the Pareto-optimal schedulers σ in <ref>, will look like the dashed blue line in <ref>. The solid blue line is realizable by a convex combination of the schedulers σ_1 and σ_2. It is always below the Pareto curve. The algorithm in <cit.> computes guaranteed under- and over-approximations (L, U) of the Pareto-optimal points <ref> for every open MDP. See <ref>; here the green area indicates the under-approximation, and the red area is the complement of the over-approximation, so that any Pareto-optimal points are guaranteed to be in their gap (white). These approximations are obtained by repeated application of (optimistic) value iteration on the open MDPs, i.e., a standard approach for verifying MDPs, based on <cit.>. We formalize these notions in <ref>. Second Step: Combination along Sequential Composition and Sum The second (inductive) step of the bottom-up algorithm in <cit.> is to combine the results of the first step, namely the approximations as in <ref> and the corresponding (near) optimal schedulers <ref> for each component C, along the operations of sequential composition and sum ⊕ in a string diagram. Here we describe this second step through the example in <ref>. It computes the reachability probabilities

RPr^σ,τ(i_1→ o_3) = RPr^σ(i_1→ o_1) · RPr^τ(i_2→ o_3) + RPr^σ(i_1→ o_2) · RPr^τ(i_3→ o_3)

for each combination of Pareto-optimal schedulers σ (for A) and τ (for B) to find which combinations of σ, τ are Pareto optimal for the composition. The equality <ref>, called the decomposition equality in <cit.>, enables compositional reasoning on Pareto-optimal points and on their approximations: Pareto-optimal schedulers for the composition of A and B can be computed from those for A and B. This compositional reasoning can be exploited for performance. In particular, when the same component A occurs multiple times in a string diagram D, the model checking result of A can be reused multiple times.

§.§ Key Idea I: from Bottom-Up to Top-Down

The bottom-up approaches compute the Pareto curves independent of the context of the open MDP.
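(Before developing Key Idea I, here is a brief mechanical check of the decomposition equality above: every pairing of a candidate scheduler for A with one for B yields a value for reaching o_3 in the composition, and since the global objective is a single exit, the best pairing is simply the maximum. The numbers for A are the example's; those for B are made up for illustration.)

# Candidate Pareto points for component A at entrance i1, as vectors
# (RPr(i1 -> o1), RPr(i1 -> o2)): the two schedulers from the example.
points_A = {"sigma1": (0.2, 0.7), "sigma2": (0.6, 0.2)}

# Candidate values (RPr(i2 -> o3), RPr(i3 -> o3)) for component B under
# each of its candidate schedulers (made-up numbers).
points_B = {"tau1": (0.8, 0.3), "tau2": (0.5, 0.6)}

# Decomposition equality:
#   RPr^{sigma,tau}(i1 -> o3) = RPr^sigma(i1 -> o1) * RPr^tau(i2 -> o3)
#                             + RPr^sigma(i1 -> o2) * RPr^tau(i3 -> o3)
composed = {(s, t): a1 * b1 + a2 * b2
            for s, (a1, a2) in points_A.items()
            for t, (b1, b2) in points_B.items()}

best = max(composed, key=composed.get)
print(composed)                 # every scheduler combination and its value
print(best, composed[best])     # the optimal combination for reaching o3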
One key idea is to move from bottom-up to top-down, a direction followed by other compositional techniques too, see <ref>.

[Figure: the sequential composition of A and B, where B is concretized.]

For illustration, consider the sequential composition of A and B in <ref>; we have concretized B in <ref>. For this B, it follows that RPr(i_2→ o_3)=0.8 and RPr(i_3→ o_3)=0.3. Therefore the equality <ref> boils down to RPr^σ(i_1→ o_3) = 0.8·RPr^σ(i_1→ o_1) + 0.3·RPr^σ(i_1→ o_2). The equation <ref> is a significant simplification compared to <ref>:
* in <ref>, since the weight (RPr^τ(i_2→ o_3), RPr^τ(i_3→ o_3)) is unknown, we must compute multidimensional Pareto curves as in <ref>;
* in <ref>, since the weight is known to be (0.8,0.3), we can solve the equation using standard single-objective model checking.
Exploiting this simplification is our first key idea. We introduce a systematic procedure for deriving weights (such as (0.8,0.3) above) that uses the context of an oMDP, i.e., it goes top-down along the string diagram. The procedure works for bi-directional sequential composition (thus for loops, cf. <ref>), not only for the uni-directional case as in <ref>. In the procedure, we first examine the context of a component C, approximate a weight w for C, and then compute maximum weighted reachability probabilities in C. We formalize the approach in <ref>. The potential performance advantages compared to the bottom-up algorithm in <cit.> should be obvious from <ref>. Specifically, the bottom-up algorithm draws a complete picture of the Pareto-optimal points (such as <ref>) once and for all, but a large part of this complete picture may not be used. In contrast, the top-down one draws the picture in a by-need manner, for a weight w only when that weight w is suggested by the context.
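Key Idea I can be exercised on the running numbers: once the context fixes the weight (0.8, 0.3) on A's exits, the multi-objective question collapses to a single weighted VI over A, sketched below. The toy A is ours; only the weight comes from the example.

# Component A: entrance i1, exits o1, o2 (sinks), one controllable state.
P = {
    "i1":   {"a": [(1.0, "s")]},
    "s":    {"u": [(0.2, "o1"), (0.7, "o2"), (0.1, "dead")],
             "v": [(0.6, "o1"), (0.2, "o2"), (0.2, "dead")]},
    "dead": {"loop": [(1.0, "dead")]},
    "o1": {}, "o2": {},
}
weight = {"o1": 0.8, "o2": 0.3}   # suggested by the concretized context B

# Weighted VI: initialize the exits with their weights instead of 0/1;
# the fixed point at i1 is then  max_sigma sum_o w_o * RPr^sigma(i1 -> o).
x = {s: weight.get(s, 0.0) for s in P}
for _ in range(200):
    x = {s: (x[s] if not P[s]
             else max(sum(p * x[t] for p, t in succ) for succ in P[s].values()))
         for s in P}

print(round(x["i1"], 4))   # 0.8*0.6 + 0.3*0.2 = 0.54, achieved by action v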
[Figure: top-down approximation.]

The top-down approximation of Pareto-optimal points is illustrated in <ref>. Here a weight w is the normal vector of the blue lines; the figure shows the situation after considering two weights.

§.§ Key Idea II: Pareto Caching

Our second key idea (Pareto caching) arises when we try to combine the last idea (top-down compositionality) with the key advantage of the bottom-up approach <cit.>, namely exploiting duplicates. Consider, for motivation, the string diagram in <ref> that sequentially composes A, B, A, D, A, and E, where we designate the multiple occurrences of A by A_1, A_2, A_3 for distinction, from left to right. Let us run the top-down algorithm. The component E suggests the weight (0.8,0.3) for the two exits of A_3, and D suggests the weight (0.2,0.7) for the exits of A_2. Recalling that A_2 and A_3 are identical, the weighted optimization results for these two weights can be combined, leading to a picture like <ref>. Now, in <ref>, we go on to the component B. It suggests the weight (0.75, 0.3).
* In the bottom-up approach <cit.>, performance advantages are brought by exploiting duplicates, that is, by reusing the model checking result of a component C for its multiple occurrences.
* Therefore, also here, we wish to use the previous analysis results for A, obtained for the weights (0.8,0.3) and (0.2,0.7), for the weight (0.75,0.3).
* Intuitively, (0.75,0.3) seems close enough to (0.8,0.3), suggesting that we can use the previously obtained result for (0.8,0.3).
But this raises the following questions: what does it mean for two weights to be “close enough”? Is (0.75,0.3) really closer to (0.8,0.3) than to (0.2,0.7)? Can we bound the errors, much like in <ref>, that arise from this “approximate reuse”?

[Figure: Pareto caching.]

In <ref>, we use the existing theory on Pareto curves in multi-objective model checking from <cit.> to answer these questions.
Intuitively, the previous analysis result (red and green regions) gets queried on a new weight w (the normal vector of the blue lines), as illustrated in <ref>. We call answering weighted reachability based on the Pareto curve Pareto caching. The technique can prevent many invocations of using VI to compute the weighted reachability for w. The distance between the under- and over-approximations computed this way can be big; if so (“cache miss”), we run VI again for the weight w. §.§ Global Stopping Criteria (GSCs) On top of two key ideas, we provide two global stopping criteria (GSCs) in <ref>: one is based on the ideas from OVI <cit.> and the other is a symbiosis of the Pareto caches with the bottom-up approach. Although ensuring the termination of our algorithm in finite steps with our GSCs remains future work, we show that our GSCs are sound, that is, its output satisfies a given precision upon termination. § FORMAL PROBLEM STATEMENT We recall (weighted) reachability in Markov decision processes (MDPs) and formalize string diagrams as their compositional representation. Together, this is the formal basis for our problem statement as already introduced above. §.§ Markov Decision Process (MDP) [MDP] An MDP M = (S, A, P) is a tuple with a finite set S of states, a finite set A of actions, and a probabilistic transition function P S× A ⇀S (which is a partial function, cf. notations in <ref>). A (finite) path (on M) is a finite sequence of states π (π_i)_i∈m. We write M for the set of finite paths on M. A memoryless scheduler σ is a function σ S→A; in this paper, memoryless schedulers suffice <cit.>. We say σ is deterministic memoryless (DM) if for each s∈ S, σ(s) is Dirac. We also write σ S→ A for a DM scheduler σ. The set of all memoryless schedulers on M is M, and the set of all DM schedulers on M is M. For a memoryless scheduler σ and a target state t∈ S, the reachability probability M, σst from a state s is given by M, σst∑_π∈MtMσ,s(π), where (i) the set Mt⊆M is defined by Mt{(π_i)_i∈m∈M|π = t, and π_i ≠ t for i∈m-1}, and (ii) the probability Mσ,s(π) is defined by Mσ,s(π)∏_i∈m-1∑_a∈ A P(π_i, a, π_i+1)·σ(π_i-1)(a) if π_1 = s and Mσ,s(π) 0 otherwise. Towards our compositional approach for a reachability objective, we must generalize the objective to a weighted reachability probability objective: we want to compute the weighted sum—with respect to a certain weight vector w—over reachability probabilities to multiple target states. The standard reachability probability problem is a special case of this weighted reachability problem using a suitable unit vector e as the weight w. [weighted reachability probability] Let M be an MDP, and T be a set of target states. A weight w on T is a vector w (w_t)_t∈ T∈^T. Let s be a state, and σ be a scheduler. The weighted reachability probability M,σ, Tws∈ from s to T over σ with respect to a weight w is defined naturally by a weighted sum, that is, M,σ, Tws∑_t∈ T w_t·M,σst. We write M, Tws for the maximum weighted reachability probability sup_σ M,σ, Tws . (The supremum is realizable; see e.g. <cit.>.) §.§ String Diagram of MDPs [oMDP] An open MDP (oMDP) A = (M, ) is a pair consisting of an MDP M with open ends =(, , , ), where , , , ⊆ S are pairwise disjoint and each of them is totally ordered. The states in ∪ are the entrances, and the states in ∪ are the exits, respectively. We often use superscripts to designate the oMDP A in question, such as ^A and ^A. We write A (m, m)→ (n, n) for the arities of A, where m ||, m ||, n ||, and n ||. 
We assume that every exit s is a sink state, that is, P(s, a) is undefined for any a∈ A. We can naturally lift the definitions of schedulers and weighted reachability probabilities from MDPs to oMDPs: we will be particularly interested in the following instances; 1) the weighted reachability probability A,σwiA,σ, ^Awi from a chosen entrance i to the set ^A of all exits; and 2) the maximum weighted reachability probability Awisup_σA, σwi from i to ^A weighted by w. We define string diagrams of MDPs <cit.> syntactically, as syntactic trees whose leaves are oMDPs and non-leaf nodes are algebraic operations. The latter are syntactic operations and they are yet to be interpreted. [string diagram of MDPs] A string diagram D of MDPs is a term adhering to the grammar D ::= 𝖼_A|DD|D⊕D, where A is a constant designating an oMDP A. The above syntactic operations , ⊕ are interpreted by the semantic operations below. The following definitions explicate the graphical intuition in <ref>. [sequential composition ] Let A, B be oMDPs, A = (m, m) → (l, l), and B = (l, l) → (n, n). Their sequential composition AB is the oMDP (M, ') where ' = (^A, ^B, ^B, ^A), M ((S^A⊎ S^B)∖ (^A⊎^B), A^A⊎ A^B, P) and P is [ P(s, a, s') P^D(s, a, s') if D∈{A, B}, s∈ S^D, a∈ A^D, and s'∈ S^D, P^A(s, a, i^A) if s∈ S^A, a∈ A^A, s' = i^B for some 1 ≤ i ≤l, P^B(s, a, i^B) if s∈ S^B, a∈ A^B, s' = i^A for some 1 ≤ i ≤l, 0 otherwise. ] [sum ⊕] Let A, B be oMDPs. Their sum A⊕B is the oMDP (M, ') where ' = (^A⊎^B, ^A⊎^B, ^A⊎^B, ^A⊎^B), M = (S^A⊎ S^B, A^A⊎ A^B, P), and P is given by P(s, a, s') P^D(s, a, s') if D∈{A, B}, s∈ S^D, a∈ A^D, and s'∈ S^D, and otherwise P(s, a, s') 0. [operational semantics D] Let D be a string diagram of MDPs. The operational semantics D is the oMDP which is inductively defined by Defs <ref> and <ref>, with the base case A=A. Here we assume that every string diagram D has matching arities so that compositions are well-defined. We call ^D and ^D global entrances and global exits of D, respectively. 
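A minimal data-structure sketch of the two operations may help; to stay short it wires only the rightward direction (the definitions above also wire leftward entrances and exits), and the state names and the renaming scheme are ours, not the paper's.

from dataclasses import dataclass

@dataclass
class OpenMDP:
    P: dict                  # P[state][action] = list of (prob, successor)
    right_entrances: list    # ordered open ends (rightward only, for brevity)
    right_exits: list

def seq(A, B):
    # Sequential composition: A's i-th right exit is wired to B's i-th
    # right entrance; the wired exits disappear from the composite.
    wire = dict(zip(A.right_exits, B.right_entrances))
    P = {}
    for M in (A, B):
        for s, acts in M.P.items():
            if s in wire:        # wired exits are removed
                continue
            P[s] = {a: [(p, wire.get(t, t)) for p, t in succ]
                    for a, succ in acts.items()}
    return OpenMDP(P, A.right_entrances, B.right_exits)

def osum(A, B):
    # Sum: juxtaposition with no wiring; the open ends are concatenated.
    return OpenMDP({**A.P, **B.P},
                   A.right_entrances + B.right_entrances,
                   A.right_exits + B.right_exits)

# Tiny example: A's exits a_o1, a_o2 get wired to B's entrances b_i1, b_i2.
A = OpenMDP({"a_i": {"u": [(0.5, "a_o1"), (0.5, "a_o2")]}, "a_o1": {}, "a_o2": {}},
            right_entrances=["a_i"], right_exits=["a_o1", "a_o2"])
B = OpenMDP({"b_i1": {"u": [(1.0, "b_o")]}, "b_i2": {"u": [(1.0, "b_o")]}, "b_o": {}},
            right_entrances=["b_i1", "b_i2"], right_exits=["b_o"])
print(sorted(seq(A, B).P))   # the wired exits a_o1, a_o2 are gone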
[Figure: D in Ex. <ref>.]

For describing the occurrences of oMDPs and their duplicates in a string diagram D, we formally define the nominal components of D and the components of D. The latter are used for the graph-theoretic operations in our compositional VI (CVI) (<ref>), while the former are used for Pareto caching (<ref>). Examples are provided later in Ex. <ref>. [nominal components and components of D] The set of nominal components of D is the set of constants occurring in D (as a term). The set of components of D is defined inductively as follows: for a constant A it is {A}, and for a composite E∗F with ∗∈{sequential composition, ⊕} it is the disjoint union of the components of E and of F; here we count multiplicities, unlike for nominal components. We introduce local open ends of string diagrams, in contrast to the global open ends defined in Def. <ref>. [local entrances and exits of D] The sets of local entrances and local exits of D are given by the disjoint unions, over all components A of D, of the entrances of A and of the exits of A, respectively.
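The component bookkeeping just defined is easy to mirror on the term representation of a string diagram: components are collected with multiplicities, nominal components as a set. The class names below are placeholders.

from dataclasses import dataclass
from typing import Union

# Term syntax of string diagrams:  D ::= Const(name) | Seq(D, D) | Sum(D, D)
@dataclass
class Const:
    name: str

@dataclass
class Seq:
    left: "Term"
    right: "Term"

@dataclass
class Sum:
    left: "Term"
    right: "Term"

Term = Union[Const, Seq, Sum]

def components(D):
    # Components, counting multiplicities (cf. the inductive definition).
    if isinstance(D, Const):
        return [D.name]
    return components(D.left) + components(D.right)

def nominal_components(D):
    # Nominal components: the set of constants occurring in the term.
    return set(components(D))

D = Seq(Seq(Const("A"), Const("A")), Const("B"))   # the sequential composition of A, A, B
print(components(D))            # ['A', 'A', 'B'] -- multiplicities kept
print(nominal_components(D))    # {'A', 'B'}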
Clearly we have D⊆D, D⊆D. Let D=AAB, where A and B are from <ref>. The oMDP D is shown in <ref>. Then D={A,B}, while D={A_1, A_2, B} with subscripts added for distinction. We have D={i^A_1_1} and D={o^A_1_1, o^B_2}, and D= { i^A_1_1, i^A_1_2, i^A_2_1, i^A_2_2, i^B_1} and D= { o^A_1_1, o^A_1_2, o^A_2_1, o^A_2_2, o^B_1, o^B_2}. Note also that D does not suppress exits removed in sequential composition, such as { o^A_1_2, o^A_2_1, o^A_2_2, o^B_1}. Problem: Near-Optimal Weighted Reachability Probability Given a string diagram D, an entrance i∈D, a weight w∈^D over exits, and an error bound ϵ∈, compute an under-approximation l∈ such that l≤Dwi≤ l+ϵ. We remark that as a straightforward extension, we can also extract a scheduler that achieves the under-approximation. § VI IN A COMPOSITIONAL SETTING We recap value iteration (VI) <cit.> and its extension to optimistic value iteration (OVI) <cit.> before presenting our compositional VI (CVI). §.§ Value Iteration (VI) and Optimistic Value Iteration (OVI) VI relies on the characterization of maximum reachability probabilities as a least fixed point (lfp), specifically the lfp μMT of the Bellman operator MT: the Bellman operator MT is an operator on the set ^S that intuitively returns the t+1-step reachability probabilities given the t-step reachability probabilities. Def. <ref> contains a formal treatment. A formal treatment can be found in <cit.>. Then the Kleene sequence ≤MT()≤MT^2()≤⋯ gives a monotonically increasing sequence that converges to the lfp μMT, where is the least element. This also applies to weighted reachability probabilities. While VI gives guaranteed under-approximations, it does not say how close the current approximation is to the solution μMT[The challenge applies to VI in (our) undiscounted setting, where the Bellman operator is not a contraction operator. With discounting, one can easily approximate the gap.]. The capability of providing guaranteed over-approximations as well is called soundness in VI, and many techniques come with soundness <cit.>. Soundness is useful for stopping criteria: one can fix an error bound η∈; VI can terminate when the distance between under- and over-approximations is at most η. Among sound VI techniques, in this paper we focus on optimistic VI (OVI) due to its proven performance record <cit.>. We use OVI in many places, specifically for 1) stopping criteria for local VIs in <ref>, 2) caching heuristics in <ref>, and 3) a stopping criterion for global (compositional) VI in <ref>. The main steps of OVI proceed as follows: 1) a VI iteration produces an under-approximation l for every state; 2) we heuristically pick an over-approximation candidate u, for example by u l+ϵ; and 3) we verify the candidate u by checking if MT(u)≤ u. If the last holds, then by the Park induction principle <cit.>, u is guaranteed to over-approximate the lfp μMT. If it does not, then we refine l,u and try again. See <cit.> for details. §.§ Going Top-Down in Compositional Value Iteration We move on to formalize the story of <ref>. <ref> is a prototype of our proposed algorithm, where compositional VI is run in a top-down manner. It will be combined with Pareto caching (<ref>) and the stopping criteria introduced in <ref>. A high-level view of <ref> is the iteration of the following operations: 1) running local VI in each component oMDP, and 2) propagating its result along sequential composition, from an entrance of a succeeding component, to the corresponding exit of a preceding component. See <ref> for illustration. 
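The two alternating operations just described, local solving and propagation along the wiring, can be sketched as follows on a toy two-component chain. The components, the wiring map, and the use of an exact local weighted VI are ours; the actual algorithm uses OVI locally and the stopping criteria of the later sections.

# Two components of a toy chain: A (entrance i1, exits o1, o2) and
# B (entrances i2, i3, global exit o3).  Wiring: o1 -> i2, o2 -> i3.
A = {"P": {"i1": {"u": [(0.2, "o1"), (0.7, "o2"), (0.1, "i1")],
                  "v": [(0.6, "o1"), (0.2, "o2"), (0.2, "i1")]},
           "o1": {}, "o2": {}},
     "entrances": ["i1"], "exits": ["o1", "o2"]}
B = {"P": {"i2": {"u": [(0.8, "o3"), (0.2, "dead")]},
           "i3": {"u": [(0.3, "o3"), (0.2, "i3"), (0.5, "dead")]},
           "o3": {}, "dead": {}},
     "entrances": ["i2", "i3"], "exits": ["o3"]}
components = [A, B]
wiring = {"o1": "i2", "o2": "i3"}   # removed exit -> matching entrance
global_weight = {"o3": 1.0}         # weight on the single global exit

def local_solve(comp, exit_weight, iters=500):
    # Weighted VI inside one component; returns the values at its entrances.
    x = {s: exit_weight.get(s, 0.0) for s in comp["P"]}
    for _ in range(iters):
        x = {s: (x[s] if not comp["P"][s]
                 else max(sum(p * x[t] for p, t in succ)
                          for succ in comp["P"][s].values()))
             for s in comp["P"]}
    return {i: x[i] for i in comp["entrances"]}

# Compositional VI: alternate local solves and propagation along the wires.
g = {i: 0.0 for c in components for i in c["entrances"]}        # local entrances
h = {o: global_weight.get(o, 0.0)                               # local exits
     for c in components for o in c["exits"]}
for _ in range(20):
    for comp in components:
        g.update(local_solve(comp, {o: h[o] for o in comp["exits"]}))
    for o, i in wiring.items():   # propagate entrance values to wired exits
        h[o] = g[i]

print(round(g["i1"], 4))   # under-approximation of reaching o3 from i1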
We note that g and h can be merged in an implementation: the domain of the merged function would be the union of the local entrances and local exits of D, minus the exits that get removed in sequential composition. This way, we no longer need the propagation step. A weight w on the global exits of D is an input; it is from Def. <ref>. The output is a function that assigns, to each global entrance i of D, a (guaranteed) under-approximation of the optimal weighted reachability probability from i. The algorithm maintains two main constructs: a function g on the local entrances and a function h on the local exits, which assign values to local entrances and exits, respectively. They are analogues of the value function f in (standard) VI (<ref>); g and h get iteratively increased as the algorithm proceeds. This intuition should also explain the initialization (lines <ref>-<ref>), where we assign 0 almost everywhere except at target states. We do not care about the other states since they are addressed in line <ref>. Lines <ref>–<ref> are the main VI loop, where we combine local VI (over each component A) and propagation along sequential composition. The local VI takes the target oMDP A and its “local weight” as arguments; the latter is the restriction h|_{Ex^A} of the function h to the exits of A. Any VI algorithm will do here; we use OVI, as announced in <ref>, since the guaranteed over-approximation given by OVI is useful both for local VI (in line <ref>) and for the global stopping criteria (in line <ref>). Its local use is already described in <ref>; its global use is discussed in <ref>. The result of local VI is a function g_A on the entrances of A, giving values over those entrances. These get patched up to form g in line <ref>. The patched function ∐_A g_A (over the components A of D) is defined by obvious case distinction: it returns g_A(i) for a local entrance i of A. Recall from Def. <ref> that the local entrances of D are the disjoint union of the entrances of its components. In line <ref>, the values at entrances are propagated to the connected exits. Regarding line <ref>, its graphical intuition is in <ref>; here are some details. We first note that the set of local exits is partitioned into 1) the global exits and 2) those local exits that get removed by sequential composition. Indeed, by examining Defs <ref> and <ref>, we see that sequential composition is the only operation that removes local exits, and the local exits that are not removed eventually become global exits. It is also obvious (Def. <ref>) that each local exit o removed in sequential composition has a corresponding local entrance i_o. Using these, we define the function h(g,w) on the local exits as follows: h(o)=w_o if o is a global exit (much like line 2); h(o)=g(i_o) otherwise. <ref> satisfies the following properties:
* (Guaranteed under-approximation) For the output f of <ref>, the value f(i) is at most the optimal weighted reachability probability from i, for each global entrance i of D.
* (Convergence) Assume that the global stopping criterion in line <ref> always evaluates to false. <ref> converges to the optimal value, that is, f converges to the optimal weighted reachability probabilities.
The correctness of the under-approximation of <ref> follows easily from that of (non-compositional, asynchronous) VI. The convergence depends on the fact that line <ref> of <ref> iterates over all components.

§ PARETO CACHING IN COMPOSITIONAL VI

In our formulation of <ref>, there is no explicit notion of Pareto curves. However, in line <ref>, we do (implicitly) compute under-approximations of points on the Pareto curves. Here we recap approximate Pareto curves. We then show how we conduct Pareto caching, the key idea sketched in <ref>.

§.§ Approximating Pareto Curves

We formalize the Pareto curves illustrated in <ref>. For details, see <cit.>.
Model checking oMDPs is a multi-objective problem, that determines different trade-offs between reachability probabilities for the individual exits. [Pareto curve for an oMDP <cit.>] Let A be an oMDP, and i be a (chosen) entrance. Let ,' ∈^^A. The relation ≼ between them is defined by ≼' if (o)≤'(o) for each o∈^A. When ≺' (i.e. ≼' and ≠'), we say ' dominates . Let σ be a scheduler for A. We define the point realized by σ, denoted by ^σ_i, by ^σ_i(o)A, σio, the reachability probability from i to o under σ. The set Aiσ of points achievable by σ is Aiσ{|≼^σ_i}. The set Ai of achievable points is given by Ai⋃_σ∈AAiσ. The Pareto curve 𝖯𝖺𝗋𝖾𝗍𝗈_i⊆^^A is the set of maximal elements in Ai wrt. ≼. We say a scheduler σ is Pareto-optimal if ^σ_i∈𝖯𝖺𝗋𝖾𝗍𝗈_i. The set Ai⊆^^A is convex, downward closed, and finitely generated by DM schedulers; it follows that, for our target problem, Pareto-optimal DM schedulers suffice. This is illustrated in <ref>, where a weight w is the normal vector of blue lines, and the maximum is achieved by a generating point for Ai. The last observations are formally stated as follows. [<cit.>] For any entrance i∈, the set Ai of achievable points is finitely generated by DM schedulers, that is, Ai = AiA . Here, X denotes the downward and convex closed set generated by X⊆^n, and AiA is given by AiA⋃_σ∈AAiσ, where A is the set of DM schedulers. [<cit.>] Given a weight w∈^^A and an entrance i, there is a scheduler σ such that 𝔻,σwi = 𝔻wi. Moreover, this σ can be chosen to be DM and Pareto-optimal. We now formulate sound approximations of Pareto curves, which is a foundation of our Pareto caching (and a global stopping criterion in <ref>). [sound approximation <cit.>] Let i be an entrance. An under-approximation L_i of the Pareto curve 𝖯𝖺𝗋𝖾𝗍𝗈_i is a downward closed subset L_i⊆Ai; an over-approximation is a downward closed superset U_i⊇Ai. A pair (L_i,U_i) is called a sound approximation of the Pareto curve 𝖯𝖺𝗋𝖾𝗍𝗈_i. In this paper, we focus on L_i and U_i that are finitely generated, i.e. the convex and downward closures of some finite generators L_i^g, U_i^g⊆^^A, respectively. A sound approximation of an oMDP 𝒜 is a pair (L,U), where L=(L_i)_i∈^A, U=(U_i)_i∈^A, and (L_i, U_i) is a sound approximation for each entrance i. The approximate bottom-up model checking algorithm <cit.>, outlined in <ref>, first computes a sound approximation for each component oMDP A (one can rely on existing algorithms <cit.>), and then combines them along and ⊕. An example shows it is hard to compose local (component-level) error bounds to obtain a global error bound <cit.>. Therefore the algorithm backtracks and refines sound approximations for components when a global error is excessive. §.§ Pareto Caching We go on to formalize our second key idea, Pareto caching, outlined in <ref>. In Def. <ref>, an index A∈D is a nominal component that ignores multiplicities, since we want to reuse results for different occurrences of A. [Pareto cache] Let D be a string diagram of MDPs. A Pareto cache is an indexed family ((L^A, U^A))_A∈D, where (L^A, U^A) is a sound approximation for each nominal component A, defined in Def. <ref>. As announced in <ref>, a Pareto cache —its component (L^A, U^A), to be precise—gets queried on a weight w∈^^A. It is not trivial what to return, however, since the specific weight w may not have been used before to construct . The query is answered in the way depicted in <ref>, finding an extremal point where L^A intersects with a plane with its normal vector w. 
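The definition that follows makes this query precise; as an illustration on made-up cached data, the sketch below answers a fresh weight with a lower bound read off the stored achievable points and an upper bound obtained from the stored bounding half-planes via a small LP (mirroring the LP-based upper-bound queries used in the experiments).

import numpy as np
from scipy.optimize import linprog

# A cached sound approximation for one entrance of a component with two
# exits (made-up numbers): L is generated by achieved points, U is the
# intersection of half-planes  w_j . x <= u_j  (together with 0 <= x <= 1).
L_points = np.array([[0.2, 0.7],    # achieved under sigma_1
                     [0.6, 0.2]])   # achieved under sigma_2
U_halfplanes = [(np.array([0.8, 0.3]), 0.55),   # from an earlier query with weight (0.8, 0.3)
                (np.array([0.2, 0.7]), 0.55)]   # from an earlier query with weight (0.2, 0.7)

def cache_read(w, eta=0.02):
    # Answer a new weight w from the cache: (lower, upper, hit?).
    lower = float(np.max(L_points @ w))         # best stored achievable point
    # upper = max w.x subject to the stored half-planes and 0 <= x <= 1
    A_ub = np.array([v for v, _ in U_halfplanes])
    b_ub = np.array([u for _, u in U_halfplanes])
    res = linprog(-w, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
    upper = float(-res.fun)
    return lower, upper, (upper - lower) <= eta

print(cache_read(np.array([0.75, 0.3])))   # close to a cached weight: a hit
print(cache_read(np.array([0.3, 0.9])))    # farther away: a miss, so run VI again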
[cache read] Assume the above setting, and let i be an entrance of interest. The cache read (L^A_i(w), U^A_i(w))∈^2 on w at i is defined by L^A_i(w)sup_∈ L_iw· and U^A_i(w)sup_∈ U_iw·. Recall from <ref> that we can assume L_i and U_i are finitely generated as convex and downward closures. It follows <cit.> that each supremum above is realized by some generating point, much like in Prop. <ref>, easing computation. We complement <ref> by <ref> that introduces our Pareto caching. Specifically, for the weight h|_^A in question, we first compute the error max_i∈^AU^A_i(h|_^A) - L^A_i(h|_^A) of the Pareto cache = ((L^A, U^A))_A with respect to this weight. The error can be greater than a prescribed bound η—we call this cache miss—in which case we run OVI locally for A (line <ref>). When the error is no greater than η—we call this cache hit—we use the cache read (Def. <ref>), sparing OVI on a component A∈D. In the case of a cache miss, the result (l,σ) of local OVI (line <ref>) is used also to update the Pareto cache (line <ref>); see below. Using a Pareto cache may prevent the execution of local VI on every component, which can be critical for the convergence of <ref>; see Thm. <ref>. A simple solution is to disregard Pareto caches eventually. Updating the cache Pareto caches get incrementally updated using the results for weighted reachabilities with different weights w. We build upon data structures in <cit.>. Notable is the asymmetry between under- and over-approximations (L_i,U_i): we obtain 1) a point in L_i and 2) a plane that bounds U_i. We update the cache after running OVI on a weight w∈^^A, which approximately computes the optimal weighted reachability to exits o∈^A. That is, it returns l,u∈ such that l ≤ sup_σ( w·( RPr^σ(i→ o) )_o∈^A ) ≤ u. Here i is any entrance and RPr^σ(i→ o) is the probability RPr^A,σ,{o}(i) in <ref>. What are the “graphical” roles of l,u in the Pareto curve? The role of u is easier: it follows from <ref> that any achievable reachability vector ( RPr^σ(i→o) )_o resides under the plane {|w·= u}. This plane thus bounds an over-approximation U_i. The use of l takes some computation. By <ref>, the existence of a good scheduler σ is guaranteed; but this alone does not carry any graphical information e.g. in <ref>. We have to go constructive, by extracting a near-optimal DM scheduler σ_0 (we can do so in VI) and using this fixed σ_0 to compute ( RPr^σ_0(i→o) )_o . This way we can plot an achievable point—a corner point in <ref>—in L_i. § GLOBAL STOPPING CRITERIA (GSC) We present the last missing piece, namely global stopping criteria (GSC in short, in line <ref> of <ref>). It has to ensure that the computed underapproximation f is ϵ close to the exact reachability probability. We provide two criteria, called optimistic and bottom-up. Optimistic GSC (Opt-GSC) The challenge in adapting the idea of OVI (see <ref>) to CVI is to define a suitable Bellman operator for CVI. Once we define such a Bellman operator for CVI, we can immediately apply the idea of OVI. For simplicity, we assume that CVI solves exactly in each local component (line <ref> in <ref>) without Pareto caching; this can be done, for example, by policy iteration <cit.>. Then, CVI (without Pareto caching and a global stopping criterion) on D is exactly the same as the (non-compositional) VI on a suitable shortcut MDP <cit.> of D. Intuitively, a shortcut MDP summarizes a Pareto-optimal scheduler by a single action from a local entrance to exit, see Def. <ref>. see <cit.> for the definition. 
Thus, we can regard the standard Bellman operator on the shortcut MDP as the Bellman operator for CVI, and define Opt-GSC as the standard OVI based on this characterisation. CVI with Opt-GSC (and Pareto caching) actually uses local under-approximations (not exact solutions) for obtaining a global under-approximation (line <ref> in <ref> and line <ref> in <ref>), where the desired soundness property still holds. See <ref> for more details. See <cit.> for more details. Bottom-up GSC (BU-GSC) We obtain another global stopping criterion by composing Pareto caches—computed in <ref> for each component A—in the bottom-up manner in <cit.> (outlined in <ref>). Specifically, 1) <ref> produces an over-approximation U^A for the Pareto curve of each component A; 2) we combine (U^A)_A along and ⊕ to derive an over-approximation U of the global Pareto curve; and 3) this U is queried on the weight w in question (i.e. the input of CVI), in order to obtain an over-approximation u of the weighted reachability probabilities. BU-GSC checks if this over-approximation u is close enough to the under-approximation l derived from g in <ref>. Correctness CVI (<ref> with Pareto caching under either GSC) is sound. The proof is in <ref>. The proof is in <cit.>. [ϵ-soundness of CVI] Given a string diagram D, a weight w, and ϵ∈, if CVI terminates, then the output f satisfies f(i)≤DwD≤ f(i)+ϵ, for each i∈D. Our algorithm currently comes with no termination guarantee; this is future work. Termination of VI (with soundness) is a tricky problem: most known termination proofs exploit the uniqueness of a fixed point of the Bellman operator, which must be algorithmically enforced e.g. by eliminating end components <cit.>. In the current compositional setting, end components can arise by composing components, so detecting them is much more challenging. § EMPIRICAL EVALUATION In this section, we compare the scalability of our approaches both among each other and in comparison with some existing baselines. We discuss the setup, give the results, and then give our interpretation of them. Approaches We examine our three main algorithms. Opt-GSC with either exact caching () or Pareto-caching (), and BU-GSC with Pareto-caching (). BU-GSC needs a Pareto cache, so we cannot run BU-GSC with an exact cache. We compare our approaches against two baselines: a monolithic () algorithm building the complete MDP D and the bottom-up () as explained in <cit.>. We use two virtual approaches that use a perfect oracle to select the fastest out of the specified algorithms: baselines is the best-of-the-baselines, while novel is the best of the three new algorithms. All algorithms are built on top of the probabilistic model checker Storm <cit.>, which is primarily used for model building and (O)VI on component MDPs as well as operating on Pareto curves. Setup We run all benchmarks on a single core of an AMD Ryzen TRP 5965WX, with a 900s time-out and a 16GB memory limit. We use all (scalable) benchmark instances from <cit.>. While these benchmarks are synthetic, they reflect typical structures found in network protocols and high-level planning domains. We require an overall precision of 10^-4, we run the Pareto cache with an acceptance precision of 10^-5, and solve the LPs in the upper-bound queries for the Pareto cache with an exact LP solver and a tolerance of 10^-4. The components are reverse topologically ordered, i.e., we always first analyse component MDPs towards the end of a given MDP D. 
To solve the component MDPs inside the VI, we use OVI for the lower bounds and precise policy iteration for the upper bounds. We use algorithms and data structures already present in Storm for maintaining Pareto curves <cit.>, which use exact rational arithmetic for numerical stability. Although our implementation supports exact arithmetic throughout the code, in practice this leads to a significant performance penalty, performing up to 100 times slower. For algorithms not related to maintaining the Pareto cache, we opted for using 64-bit floating point arithmetic, which is standard in probabilistic model checking <cit.>. Using floating point arithmetic can produce unsound results <cit.>; we attempt to prevent unsound results in our benchmark. First, we check with our setup that our results are very close (error <10^-5) to the exact solutions (when they could be computed). Second, we check that all results, obtained with different methods, are close. We evaluate the stopping criteria after ten iterations. These choices can be adapted using our prototypical implementation, we discuss some of these choices at the end of the discussion below. Results We provide pairwise comparisons of the runtimes on all benchmarks using the scatter plots in Fig. <ref>[A point (x,y) means that the approach on the x-axis took x seconds and the tool on the y-axis took y seconds. Different shapes refer to different benchmark sets.]. Notice the log-log scale. For some of the benchmark instances, we provide detailed information in Tables <ref> and <ref>, respectively. In Table <ref>, we give the identifier for the string diagram and the component MDPs, as well as the number of states in D. Then, for each of the five algorithms, we provide the timings in t, for each algorithm maintaining Pareto points, we give the number of Pareto points stored |P|, and for the three novel VI-based algorithms, we give the amount of time spent in an attempt to prove convergence (t_s). In Table <ref>, we focus on our three novel algorithms and the performance of the caches. We again provide identifiers for the models, and then for each algorithm, the total time spent by the algorithm, the time spent on inserting and retrieving items from the cache, as well as the fraction of cache hits H and the number of total queries Q. Thus, the number of cache hits is given by H· Q. The full tables and more figures are given in Appendix <ref>.  <cit.>. Discussion We make some observations. We notice that the CVI algorithms collectively solve more benchmarks within the time out and speed up most benchmarks, see Fig. <ref>(top-l).[We highlight that we use the benchmark suite that accompanied the bottom-up approach.] We refer to benchmark results in Tab. <ref>. Mostly Outperforms , Fig. <ref>(top-c). The monolithic VI as typical in Storm requires a complete model, which can be prohibitively large. However, even for medium-sized models such as Chains100-RmB, the VI can run into time outs due to slow convergence. CVI with the exact cache (and even with no cache) quickly converges – highlighting that the grouping of states helps VI to converge. On the other hand, a model such as Birooms100-RmS highlights that the harder convergence check can yield a significant overhead. Mostly Outperforms , Fig. <ref>(top-r). For many models, the top-down approach as motivated in <ref> indeed ensures that we avoid the undirected exploration of the Pareto curves. 
However, if the VI repeatedly asks for weights that are not relevant for the optimal scheduler, the termination checks fail and this yields a significant overhead. and Both Provide Clear Added Value, Fig. <ref>(bot-l). Both approaches can solve benchmarks within ten seconds that the other approach does not solve within the time-out. Both approaches are able to save significantly upon the number of iterations necessary. suffers from the overhead of the Pareto cache, see below, whereas requires somewhat optimal values in all leaves, regardless of whether these leaves are important for reaching the global target. Therefore, may profit from ideas from asynchronous VI and from adaptive schemes to decide when to run the termination check. Pareto Cache Has a Significant Overhead, Fig. <ref>(bot-c/r) and Tab. <ref>. We observe that the Pareto cache consistently yields an overhead: In particular, often outperforms . The Pareto cache is essential for . The overhead has three different sources. (1) More iterations: illustrates how requires only 14% of the iterations of . Even with a 66% cache hit rate in , this means an overhead in the number of component MDPs analysed. The main reason is that reusing approximation can delay convergence[Towards global convergence, we may eventually deactivate the cache.]. (2) Cache retrieval: To obtain an upper bound, we must optimize over Pareto curves that contain tens of halfspaces, which are numerically not very stable. Therefore, Pareto curves in Storm are represented exactly. The linear program that must be solved is often equally slow[We use the Soplex LP solver <cit.> for exact LP solving, which is significantly faster than using, e.g., Z3. Soplex may return unknown, which we interpret as a cache miss.] as actually solving the LP, especially for small MDPs. (3) Cache insertion: Cache insertion of lower bounds requires model checking Markov chains, as many as there are exits in the open MDPs. These times are pure overhead if this lower bound is never retrieved and can be substantial for large open MDPs. Opportunities for Heuristics and Hyperparameters. We extensively studied variations of the presented algorithms. For example, a much higher tolerance in the Pareto cache can significantly speed up on the cost of not terminating on many benchmark cases and one can investigate a per-query strategy for retrieving and/or inserting cache results. Interpretation of Results. works well on models that fit into memory and exhibit little sharing of open MDPs. works well when the Pareto curves of the open MDPs can be accurately be approximated with few Pareto points, which, in practice, excludes open MDPs with more than 3 exits. CVI without caching and termination criteria resembles a basic kind of topological VI[Topological VI orders strongly connected components, whereas CVI uses the hierarchical structure. This can also lead to advantages.] on the monolithic MDP. CVI can thus improve upon topological VI either via the cache or via the alternative stopping criteria. Based on the experiments, we conjecture that * the cache is efficient when the cost of performing a single reachability query is expensive — such as in the model — while the cache hit rate is high. * the symbiotic termination criterion () works well when some exits are not relevant for the global target, such as the model, in which going backwards is not productive. 
* the compositional OVI stopping criterion (/) works well when the likelihood of reaching all individual open MDPs is high, such as can be seen in the model. § RELATED WORK We group our related work into variations of value iteration, compositional verification of MDPs, and multi-objective verification. Value Iteration Value iteration as standard analysis of MDPs <cit.> is widely studied. In the undiscounted, indefinite horizon case we study, value iteration requires an exponential number of iterations in theory, but in practice converges earlier. This motivates the search for sound termination criteria. Optimistic value iteration <cit.> is now widely adopted as the default approach <cit.>. To accelerate VI, various asynchronous variations have been suggested that prevent operating on the complete state space. In particular topological VI <cit.> and (uni-directional) sequential VI <cit.> aim to exploit an acyclic structure similar to what exists in uni-directional MDPs. Sequentially Composed MDPs The exploitation of a compositional structure in MDPs is widely studied. In particular, the sequential composition in our paper is closely related to hierarchical compositions that capture how tasks are often composed of repetitive subtasks <cit.>. While we study a fully model-based approach, Jothimurugan et al. <cit.> provide a compositional reinforcement learning method whose sub-goals are induced by specifications. Neary et al. <cit.> update the learning goals based on the analysis of the component MDPs, but do not consider the possibility of reaching multiple exits. The widespread option-framework and variations such as abstract VI <cit.>, aggregate policies <cit.> into additional actions to speed up convergence of value iterations and is often applied in model-free approaches. In the context of OVI, we must converge everywhere and the bottom-up stopping criterion is not easily lifted to a model-free setting. Further Related Work As a different type of compositional reasoning, assume-guarantee reasoning <cit.> is a central topic, and a compositional probabilistic framework <cit.> with the parallel composition ∥ is also based on Pareto curves: extending string diagrams of MDPs for the parallel composition ∥ is challenging, but an interesting future work. We mention that there are VIs on Pareto curves solving multi-objective simple stochastic games <cit.>. Due to the multi-objectivity, they maintain a set of points for each state during iterations; CVI solves single-objective oMDPs determined by weights, thus we maintain a single value for each state during iterations. § CONCLUSION This paper investigates the verification of compositional MDPs, with a particular focus on approximating the behavior of the component MDPs via a Pareto cache and sound stopping criteria for value iteration. The empirical evaluation does not only demonstrate the efficacy of the novel algorithms, but also demonstrates the potential for further improvements, using asynchronous value iteration, efficient Pareto caches manipulations, and powerful compositional stopping criteria. splncs04 § BENCHMARK DETAILS §.§ Approaches and Hyperparameters Our implementation has the following hyperparameters: * Global OCVI ϵ, * Local OCVI η, * Cache tolerance τ, * N_, N_, number of CVI steps before performing OVI/Bottom-up termination check, respectively. All benchmark cases are formed by some combination of a string diagram D and leaf models M. 
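As a purely illustrative aid, these hyperparameters can be bundled into a single configuration object as sketched below. The field names are ours; the numeric values reproduce the settings reported in the experimental setup where they are stated (overall precision, cache acceptance precision, LP tolerance, check frequency, time and memory limits) and are placeholders otherwise.

```python
# Hypothetical configuration bundle for the hyperparameters listed above.
from dataclasses import dataclass

@dataclass
class CviConfig:
    global_epsilon: float = 1e-4   # overall precision required of the global result
    local_eta: float = 1e-4        # local OVI precision (placeholder; not fixed in the text)
    cache_tolerance: float = 1e-5  # acceptance precision of the Pareto cache
    n_opt: int = 10                # CVI steps between Opt-GSC checks
    n_bu: int = 10                 # CVI steps between BU-GSC checks
    lp_tolerance: float = 1e-4     # tolerance of the exact LP used for upper-bound queries
    timeout_s: int = 900
    memory_limit_gb: int = 16

config = CviConfig()
```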
§.§ String Diagrams Unidirectional Grid () The unidirectional grid string diagram contains a N × N grid of connected rooms. In the initial room at (1,1), there is an exit to the north and the east, leading to (1,2) and (2,1) respectively. Once an edge of the grid has been reached, i.e., (x, N) or (N, y), only the east and north exit are available, respectively. The goal of the string diagram is to reach the unique exit located in (N,N). Bidirectional Grid () The bidirectional grid is defined in the same way as the unidirectional grid, except that it is also possible to traverse between the rooms in the opposite directions, south and west. Unidirectional Chain with a loop (, ) The unidirectional chain string diagram () can be seen as a 1D version of the unidirectional grid. However, instead of having a unique exit at the end of the chain, in the string diagram, there is another exit that brings us back to the start of the chain, which makes the chain not rightward. §.§ Leaf Models Small Room () There are four different variants of the small room model, indicated by a pair out of the set {𝑆𝑎𝑓𝑒, 𝑈𝑛𝑠𝑎𝑓𝑒}×{𝑊𝑖𝑛𝑑𝑦, 𝐶𝑎𝑙𝑚}. This pair determines the dynamics of the room. The small room model consists of a 7 × 7 grid world with imprecise movement. After each movement action (north, east, south or west), there is some probability that the agent does not end up where it intended to move. This behaviour is more likely if the room is windy instead of calm. Furthermore, there are some holes in the grid, which cannot be exited once entered. The rooms that are unsafe contain more holes. The exits of the room are at the four center positions of each edge of the grid. Big Room () The big room model is defined in the same way as the small room, except that the dimensions of the grid are 101 instead of 7. Dice Game () The dice models contain a small dice game. The game is played in rounds, and the goal of the game is to score as many points as possible. In each round, there is a choice of three biased dice. After each round, the controller picks a die and throws it, and adds the (potentially negative) number on the die to their score. There are 100 rounds, and the score is clamped between 0 and 100. For the four-exit variant of the dice game, obtaining a score between 0 and 24 means that the first exit will be taken. Similarly, a score between 25 and 49 means that the second exit will be taken, and so forth. §.§ Plot Data The plot marks used in the scatter plots of <ref> can be found in <ref>. The raw data used for these plots can be found in <ref>. § OMITTED CONTENTS IN <REF> We review the standard Bellman operator[More precisely, we generalise the Bellman operator marginally to a weighted variant. The standard Bellman operator is recovered by setting the weights of target states to one and of other states to zero.] and the (non-compositional) value iteration on MDPs. [Bellman Operator] Let M be an MDP, T ⊆ S be target states, and w (w_t)_t∈ T be a weight. The Bellman operator MTw^S →^S (wrt. w) is given by MTw(f)(s) w_t if s = t∈ T, max_a∈ A∑_s'∈ S P(s, a, s')· f(s') if s∈ S ∖ T. The set ^S is a complete lattice with the functorial order and MTw is Scott-continuous <cit.>. Thus, the Bellman operator has a least fixed point MTw. The correctness of value iteration is ensured by Props <ref> and <ref>. In each iteration, the current value f S→, which is an under approximation of MTw, is updated by MTw(f). 
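A minimal sketch of this iteration scheme is given below. The explicit dictionary representation of the MDP and all names are our own; note that the difference-based stopping rule used here is exactly the naive criterion whose potential unsoundness motivates OVI.

```python
# Weighted value iteration: start from the least element (all-zero valuation)
# and repeatedly apply the Bellman operator of the definition above.

def bellman(P, targets, weights, f):
    """One application of the weighted Bellman operator.

    P[s][a] is a list of (s', prob) pairs; target states appear as keys of P
    (their outgoing transitions are ignored); weights maps each target t to w_t;
    every non-target state is assumed to have at least one action."""
    g = {}
    for s in P:
        if s in targets:
            g[s] = weights[s]
        else:
            g[s] = max(sum(prob * f[s2] for s2, prob in P[s][a])
                       for a in P[s])
    return g

def value_iteration(P, targets, weights, tol=1e-6):
    f = {s: 0.0 for s in P}                      # least element of the lattice
    while True:
        g = bellman(P, targets, weights, f)
        if max(abs(g[s] - f[s]) for s in P) <= tol:
            return g                             # still an under-approximation of the lfp
        f = g
```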
[<cit.>] For a state s∈ S, the least fixed point MTw(s) coincides with the maximum weighted reachability probabilities M, Tws. [<cit.>] Assume the setting of Def. <ref>. The limit of the ascending ω-chain ((MTw)^n())_n∈ is the least fixed point MTw of MTw, where the order ≼ on ^S is induced by the ordinal total order in , and is the least element in ^S. § OMITTED CONTENTS IN <REF> §.§ Shortcut oMDP for String Diagram of MDPs Given a string diagram 𝔻, and a weight w, we first introduce the shortcut Bellman operator 𝔻w as the standard Bellman operator on the operational semantics (𝔻) of the shortcut string diagram (𝔻), which is a summary of 𝔻 with only remaining open ends and setting actions as Pareto-optimal DM schedulers in each component oMDP A∈𝔻. Recall that the set ^A is the set of open ends in A. [shortcut oMDP <cit.>] Let A (M^A, ^A) be an oMDP. The shortcut oMDP A is an oMDP A (M, ^A) given by M (S, A, P), where S^A∪{⋆}, A A, and P(s, σ, s') A,σss' if s∈^A, σ∈A and s'∈^A, 1-∑_o∈^AA,σso if s∈^A, σ∈A, and s'=⋆, 0 otherwise. The shortcut oMDPs are similar to abstract MDPs (or macro MDPs) <cit.>; intuitively, we use them as summaries of oMDPs A whose probabilistic transitions are the reachability probabilities that are induced by Pareto-optimal DM schedulers on A. [shortcut string diagram] Let 𝔻 be a string diagram. The shortcut string diagram (𝔻) is a string diagram inductively given as follows: * if 𝔻A, then (𝔻) is the shortcut oMDP A, * if 𝔻𝔻_1∗𝔻_2 for ∗∈{, ⊕}, then (𝔻) (𝔻_1)∗(𝔻_2). The shortcut oMDP of the string diagram 𝔻 is the operational semantics (𝔻) of the shortcut string diagram (𝔻). The construction is indeed correct in the following sense: Let 𝔻 be a string diagram, and w be a weight. The equality 𝔻w = (𝔻)w holds. The proof relies on the theoretical development in <cit.>; this is a direct consequence of <cit.>. Then, we define a shortcut Bellman operator 𝔻w for 𝔻 as the standard Bellman operator for (𝔻); we introduced standard Bellman operators in <ref>. [shortcut Bellman operator] Let 𝔻 be a string diagram, and w (w_j)_j∈𝔻 be a weight. The shortcut Bellman operator 𝔻w is the (standard but weighted) Bellman operator (𝔻)𝔻w on (𝔻). Note that the exits in (𝔻) are 𝔻. §.§ Correctness of Opt-GSC We first recall the principle behind OVI (it is also closely related to the Knaster–Tarski theorem). [Park induction principle <cit.>] Let V be a complete lattice, and Φ V→ V be monotone. For any v∈ V, if Φ(v) ≤ v, then v is an upper bound of the least fixed point Φ, that is, Φ≤ v. Given an under-approximation l and an error bound ϵ, OVI and CVI with Opt-GSC construct a candidate u of an over-approximation that satisfies l - u≤ϵ, and check the inequality Φ(u) ≤ u: if yes, they terminate and conclude that l is an under-approximation that satisfies the error bound ϵ. Secondly, we inductively characterize the shortcut Bellman operator 𝔻w. The following lemma captures the base case Aw; this corresponds to a local VI in line <ref> of <ref>. Note that the set ^A (= A⊎A) is the set of states on (A) excluding the sink state ⋆. Let A be an oMDP, and w (w_j)_j∈^A be a weight. For each f∈^^A, it holds that: Aw(f)(s) = Aws if s∈^A, w_s if s ∈^A. By Defs <ref> and <ref>, for any i∈^A, the following equation holds: Aw(f)(i) = max_σ∈A∑_j∈^AA,σij· w_j. The RHS coincides with Awi because of the existence of optimal DM schedulers in A wrt. w by Prop. <ref> and Def. <ref>. For the compositions, we use the following characterisations: this corresponds to the propagation (line <ref> in <ref>). 
Let ∗∈{, ⊕}, 𝔻𝔼∗𝔽 be a string diagram, and w (w_j)_j∈𝔻 be a weight. For each f∈^𝔻⊎𝔻, it holds that: 𝔻w(f)(s) = 𝔼w^𝔼_∗(f^𝔼_∗)(s) if s∈𝔼⊎𝔼, 𝔽w^𝔽_∗(f^𝔽_∗)(s) if s∈𝔽⊎𝔽, where weights w^𝔼_ (w^𝔼_, j)_j∈𝔼 and w^𝔽_ (w^𝔽_, j)_j∈𝔽 are given by w^𝔼_, j w_j if j∈^𝔼, f(i^𝔽) if j= i^𝔼 for i, w^𝔽_, j w_j if j∈^𝔽, f(i^𝔼) if j= i^𝔽 for i, and weights w^𝔼_⊕, w^𝔽_⊕, and values f^𝔼_∗𝔼⊎𝔼→ , f^𝔽_∗𝔽⊎𝔽→ are canonical restrictions. Let ∗ =; the case ∗ = ⊕ is easy. The only subtle thing is whether the equation holds for the states s that have probabilistic transitions to the states j∈^𝔼⊎^𝔽. This is indeed true because the LHS is based on the value f(j), and the RHS correctly includes them in the weights w^𝔼_, j and w^𝔽_, j. Due to the characterisations (Prop. <ref> and Lems. <ref> and <ref>), we can define CVI with Opt-GSC as the standard OVI on the shortcut oMDP (𝔻) of 𝔻 except using under-approximations for local computations (for under-approximations): this corresponds to replace the exact solution with an under-approximation in Lem. <ref>. This is necessarily because of local VIs and Pareto caching. In order to apply the Park induction principle for a candidate of an over-approximation, we locally use exact solutions. [on-demand computation] By Lems. <ref> and <ref>, we can see that we do not have to explicitly construct the shortcut oMDP (𝔻) for running VIs. For each given weight w on component A, we only have to solve the weighted reachability probability problem on A wrt. w for running VIs on the shortcut oMDP. Finally, we prove the ϵ-soundness of CVI with Opt-GSC: [Proof of Thm. <ref> with Opt-GSC] Let 𝔻 be a string diagram. Prop. <ref> and Lems. <ref> and <ref> show that CVI with exact local solutions on 𝔻 can be characterised as the standard VI on the shortcut oMDP (𝔻). Since the shortcut Bellman operator is monotone, the value g in <ref> is always a guaranteed under-approximation even when we replace exact local solutions with under-approximations, which CVI actually does. Since the computations for over-approximations are exact (due to the exact local solutions for over-approximations), we can directly apply the Park induction principle for the shortcut Bellman operator, and we can conclude that l satisfies the desired result when CVI with Opt-GSC terminates. §.§ Correctness of BU-GSC [Proof of Thm. <ref> with BU-GSC] It suffices to show that the obtained U from a Pareto cache is an over-approximation of the global Pareto curve. This is indeed true because of <cit.>.
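To recap the verification step behind Opt-GSC in executable form: given an under-approximation l and a candidate u that is ε-close to it, Park induction reduces the soundness check to the pointwise inequality Φ(u) ≤ u. The sketch below is ours; `bellman_op` stands for any monotone Bellman-style operator, e.g. the shortcut Bellman operator.

```python
# Certificate check behind Opt-GSC (Park induction): if Phi(u) <= u pointwise,
# then u bounds the least fixed point, hence l is eps-close to it.

def ovi_certificate_holds(bellman_op, l, u, eps):
    if any(u[s] - l[s] > eps for s in u):
        return False                        # candidate is not eps-tight
    phi_u = bellman_op(u)
    return all(phi_u[s] <= u[s] for s in u)  # Park induction: Phi(u) <= u  =>  lfp <= u
```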
entry_id: http://arxiv.org/abs/2405.09946v1
published: 2024-05-16 09:51:41
title: On the logical structure of some maximality and well-foundedness principles equivalent to choice principles
authors: Hugo Herbelin
primary_category: cs.LO
categories: cs.LO, math.LO
On the logical structure of some maximality and well-foundedness principles equivalent to choice principles
Yiheng Wang (0000-0002-5975-2333), Yu Peng (0000-0001-6371-9862), Zhe Lin (0009-0000-0575-4942)
May 20, 2024
We study the logical structure of Teichmüller-Tukey lemma, a maximality principle equivalent to the axiom of choice, and show that it corresponds to the generalisation to arbitrary cardinals of update induction, a well-foundedness principle from constructive mathematics classically equivalent to the axiom of dependent choice. From there, we state general forms of maximality and well-foundedness principles equivalent to the axiom of choice, including a variant of Zorn's lemma. A comparison with the general class of choice and bar induction principles given by Brede and the first author is initiated. § INTRODUCTION §.§ Context The axiom of choice is independent of Zermelo-Fraenkel set theory and equivalent to many other formulations <cit.>, the most famous ones being Zorn's lemma, a maximality statement, and Zermelo's theorem, a well-ordering, thus also well-foundedness, theorem, since well-foundedness and well-ordering are logically dual notions. In the family of maximality theorems equivalent to the axiom of choice, one statement happens to be particularly concise and general: Teichmüller-Tukey lemma, which states that every non-empty collection of finite character, that is, characterised only by its finite sets, has a maximal element with respect to inclusion. The axiom of dependent choice is a strict consequence of the axiom of choice. In the context of constructive mathematics, various statements classically but not intuitionistically equivalent to the axiom of dependent choice are considered, such as bar induction, open induction <cit.>, or, more recently, update induction <cit.>, the last two relying on a notion of open predicate over functions of countable support expressing that the predicate depends only on finite approximations of the function. In a first part of the paper, we reason intuitionistically and show that the notion of finite character, when specialised to countable sets, is dual to the notion of open predicate, or, alternatively, that the notion of open predicate, when generalised to arbitrary cardinals, is dual to the notion of finite character. As a consequence, we establish that update induction and the specialisation of Teichmüller-Tukey lemma to countable sets are logically dual statements, or, alternatively, that Teichmüller-Tukey lemma and the generalisation of update induction to arbitrary cardinals are logically dual. In a second part of the paper, we show how Teichmüller-Tukey lemma and Zorn's lemma can be seen as mutual instances the one of the other. Finally, in a third part, we introduce a slight variant of Teichmüller-Tukey lemma referring to functions rather than sets and make some connections with the classification of choice and bar induction principles studied by Brede and the first author in <cit.>. The ideas of Section 2 have been developed during an undergraduate internship of the second author under the supervision of the first author in 2022, leading to the idea in Section 4.1 of introducing by the second author. Section 3 contains extra investigations made in 2023 by the second author. Section 4.2 contains investigations made jointly in 2024 by the authors. 
§.§ The logical system In this section we describe the logical setting and give definitions that are used throughout the article. The results we prove do not depend greatly on its structure as they require only basic constructions, we shall make precise exactly was is necessary and what is left to the preferences of the reader. We work in an intuitionistic higher order arithmetic equipped with inductive types like the type with one element (, 0 :), the type of Boolean values (, 0_, 1_:), the type of natural numbers (), the product type (A × B), or the coproduct type (A+B). In particular, we write B_ for the coproduct of B and of , identifying b : B with (b) : B_ and with (0) where and are the two injections of the coproduct. We write for the type of propositions. For all types A, the type PART: (A) denotes the type A →, we shall sometimes refer to it as “subsets of A”. We also use the type → A_, shortly A_^, to represent the countable subsets of A, implicitly referring to the non- elements of the image of the function[For inhabited A, this is intuitionistically equivalent to considering → A.]. We also require a type for lists: for all types A we denote by A^* the type of lists of terms of type A defined as follows: [center=false] 1ε : A^* [center=false] u : A^* a : A1u@a : A^* We inductively define ⋆ : A^* → A^* → A^*, the concatenation of two lists: [center=false] u : A^*1u ⋆ε := u [center=false] u : A^* v : A^* a : A1u ⋆ (v@a) := (u ⋆ v)@a We denote by [a_1, …, a_n] the list (…(ε @ a_1) @ … ) @ a_n), since ⋆ is associative we drop the parentheses. If n∈ and α:A^, we write α_|n for the recursively defined list [α(0), …, α(n-1)]. We define ∈ : A → A^* → as: a ∈ u := ∃ v,w^A^*, v ⋆ [a] ⋆ w = u. The symbol ∈ will be used as defined above and also as a notation for P(a). To be more precise, for all types A, P : PART: (A) and a : A we will write a ∈ P for P(a) and a ∉ P for P(a). Continuing with the set-like notations, for P, Q : PART: (A) we write P ⊆ Q for ∀a ^A, a ∈ P a ∈ Q. We require extensional equality for predicates: for all P,Q : PART: (A), P = Q P ⊆ Q Q ⊆ P[Extensionality for predicates is assumed for convenience, it is not fundamentally needed]. The symbol ⊆ will also be used for lists: for all u, v : A^*, u ⊆ v := ∀a^A, a ∈ u a ∈ v. Note that equipped with this relation, lists behave more like finite sets than lists. Nevertheless the list structure is not superfluous as will be shown later. As a convention, we let the scope of quantifiers spans until the end of the sentence, so, for instance, ∀ n, P Q reads as ∀ n, (P Q) and similarly for ∃. §.§ Closure operators and partial functions Let us now define some closure operators and relations on subsets and lists: Let A be a type, u: A^*, α : PART: (A), T: PART: (A^*), P : PART: ( PART: (A)) u ⊂α : T : PART: ( PART: (A)) u ⊂α := ∀a^A, a ∈ u → a ∈α T := λα^ PART: (A). ∀u^A^*, u ⊂α→ u ∈ T T : PART: ( PART: (A)) T := λα^ PART: (A). ∃u^A^*, u ⊂α u ∈ T û : PART: (A) P : PART: (A^*) û := λ x^A. x ∈ u P := λ u^A^*. û∈ P The symbol is the translation from “the list world” to “the predicate world”. More precisely, û is the canonical way to see a list as a predicate (u ⊂αû⊆α) and T is an extension of T as a predicate on subsets, α : PART: (A) is in T if and only if it can be arbitrarily approximated by lists of T. Dually, is the translation from predicate to list, taking predicate of finite domain to all lists of elements in the domain. Note that T is downward closed, that is, α⊂β and β∈T implies α∈T. 
Note also that P is a downward closure operator, defining the largest downward closed subset of P. On its side, T builds the downward closure up to permutation and replication of the elements of the lists of T. Also, symmetrical properties applies to exchanging downward with upward and largest subset with smallest superset. Finally, notice that T may be empty, in fact T is inhabited if and only if ε∈ T, and the same for T. *Examples: Consider T : PART: (^*), for simplicity let us use set-like notations when defining T. If T := [1_,0_], [1_], [0_], ϵ then T will contain all subsets of . Now, if T := [1_,0_], [1_], [0_], T will be empty since for all α : PART: (), ϵ⊂α but ϵ∉ T. If T := ϵ, [1_], [1_, 0_] then T will contain only the empty subset and the singleton containing 1_. Now consider T' := ϵ, [1_], [1_, 1_], [0_, 1_], [1_, 0_, 1_, 1_], notice that T = T'. The operation does not care for duplications or permutations. For T := ϵ, [1_],[1_, 0_], T is ϵ, [1_], [1_, 1_], [1_, 1_, 1_], …. Similarly, for T := ϵ, [1_], [0_], [1_, 0_], T is the set of all lists on . The operator has the dual behaviour. Consider T : PART: (^*), T := [1] then, T contains exactly all subsets of containing 1. Similarly if ϵ∈ T, then T contains all subsets of . For T := [1], T will contain every list on that contains at least one 1. We also give similar definitions relatively to countable subsets, abbreviating (A_)^* into A_^*: Let A be a type, u: A_^*, α : A_^ and T: PART: (A_^*) u ⊂_α : T : PART: (A_^) u ⊂_α := ∃n^, u = α_|n T := λα^A_^. ∀u^A^*, u ⊂_α→ u ∈ T T : PART: (A_^) T := λα^A_^. ∃u^A^*, u ⊂_α u ∈ T We conclude this section defining two different notions of partial functions: Let A,B be types, a relational partial function f from A to B is a relational functional relation of PART: (A × B). Formally, a relational partial function from A to B is a term f : PART: (A × B) such that ∀ a^A, ∀ b,b'^B, ((a,b) ∈ f (a,b') ∈ f) b = b'. Its domain is defined by: (f) : PART: (A) (f) := λ a^A. ∃ b^B, (a,b) ∈ f For all a' : A, we denote by (f) ∪ a' the predicate λ a^A. (∃ b^B, (a,b) ∈ f ) a = a'. Let A,B be types, a decidable partial function f from A to B is a total function f : A → B_. Its domain and graph are defined by: (f) : PART: (A) (f) : PART: (A × B) (f) := λ a^A. f(a) ≠ (f) := λ (a,b)^A × B. f(a) = (b) For all a' : A, we denote by (f) ∪ a' the predicate λ a^A. f(a) ≠ a = a'. *Notation: We write f ∈ A B to denote that f is a relational partial function from A to B and f : A → B_ for the type of decidable partial functions from A to B. We will also write Θ f^A B, P for Θ f^ PART: (A × B), (f ∈ A B) P for Θ∈λ, ∀, ∃. The difference between these two definitions is in the decidability of the domain: decidable partial functions have a decidable domain while it's not the case of relational partial functions. The graph operation allows us to recover a relational partial function from a decidable partial function. One needs excluded middle to recover a decidable partial function from a relation partial function, hence decidable partial functions are stronger axiomatically. Notice that we used the same notation in both definitions. Since they both have the same semantic meaning and we will make clear whether we are using relation partial function or decidable partial function, it should not cause any confusion. 
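To make the decidability asymmetry concrete, the two notions of partial function can be mirrored in ordinary programming terms as sketched below. The encoding is ours and is only illustrative: a finite enumeration stands in for the type A, and `Optional[B]` plays the role of B extended with the extra point.

```python
# Illustrative encodings of the two notions of partial function from A to B.
from typing import Callable, Optional, Set, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# A decidable partial function is a total map into Optional[B]:
# f(a) is None exactly outside its domain, so membership in the domain is decidable.
DecidablePartial = Callable[[A], Optional[B]]

def is_functional(rel: Set[Tuple[A, B]]) -> bool:
    """The defining condition of a relational partial function (a functional relation)."""
    return all(b == b2 for (a, b) in rel for (a2, b2) in rel if a == a2)

def graph(f: DecidablePartial, universe) -> Set[Tuple[A, B]]:
    """graph(f): forgetting decidability turns a decidable partial function into a
    relational one; recovering the decidable version in general needs excluded middle.
    The finite `universe` argument is an artefact of this executable setting."""
    return {(a, f(a)) for a in universe if f(a) is not None}
```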
§ AND In this section, we define Teichmüller-Tukey lemma and update induction and emphasise that they are logically dual, up to the difference that the former is relative to predicates over subsets of arbitrary cardinals while update induction is relative to predicates over countable subsets. Underneath, they rely on the dual notions of predicate of finite character and of open predicate. §.§ Predicates of finite character A set is of finite character if all its information is contained in its finite elements. In our setting, a predicate P : PART: ( PART: (A)) is of finite character if all its information is contained in a predicate over lists. There are two canonical ways to express this: Let A be a type and P : PART: ( PART: (A)). We propose two definitions of finite character: P ∈_1 := ∀α^ PART: (A), α∈ P ∀u^A^*, u ⊂α→ u ∈P P ∈_2 := ∃T^ PART: (A^*), T = P Rewriting _1 using the terms just defined: P ∈_1 := P = P _1 and _2 are, in essence, paraphrases of one an other, thus there is no reason not to expect them to be equivalent. First we will need a lemma: Let A be a type and T : PART: (A^*) then T∈_1. Let α : PART: (A). Suppose α∈T, our goal is to show that α∈T. Let u : A^* such that u ⊂α, we will show that u ∈T. By definition u ∈T if and only if û∈T if and only if every sublist of u is in T. Since α can be arbitrarily approximated by terms of T and u ⊂α, so can u. Hence, u ∈T thus, α∈T. Suppose α∈T, then for all u : A^* such that u ⊂α, u ∈T which we can rewrite as û∈T. We easily show that û∈T u ∈ T thus α∈T. We have shown that T = T. This means that without loss of generality, we can require in _2 that the witness T be of the form T' for some T'. This is a way to express that T can be chosen to be minimal. In fact if we are given P and T such as in _2, it may happen that T contains a list u that is not closed under ⊆ (i.e.. v ⊆ u v ∈ T). Such an u will be invisible when looking at T, hence we can consider u as a superfluous term. The operation allows us, without loss of generality, to remove those terms, thus making T minimal. _1 _2 Let A be a type and P : PART: ( PART: (A)). From left to right: suppose P ∈_1. P is a witness of P ∈_2. From right to left: suppose P ∈_2, let T be the witness of P ∈_2. By lemma <ref> T = T and by hypothesis P = T, we can rewrite the first equality as P = P. Since _1 and _2 are equivalent, we will from now on write without the indices. §.§ Open predicates A notion of open predicates over functions of countable domain was defined in Coquand <cit.> and generalised by Berger <cit.>. Using the definitions of Section <ref>, a predicate over α:A^ is open in the sense of Berger if it has the form α∈Tα∈U for some T, U : PART: (A^*). In order to get a closer correspondence with the notion of finite character, we will however stick to Coquand's definition. Additionally, to get a closer correspondence with the case of open predicates used in update induction, we consider open predicates for functions to A_. Let A be a type and P : PART: (A_^). We define: P ∈_ := ∃ T^ PART: (A_^*), T = P The observations made on predicates of finite character apply to countably-open predicates, namely that T = T. Obviously, we can also move from A_^ to PART: (A) and introduce a general notion of open predicates which again, will satisfy T = T: Let A be a type and P : PART: ( PART: (A)). We define: P ∈ := ∃ T^ PART: (A^*), T = P Conversely, we can define a notion of predicate of countably-finite character dual the notion of countably-open predicate: Let A be a type and P : PART: (A_^). 
We define: P ∈_ := ∃ T^ PART: (A_^*), T = P This finally results in the following dualities: §.§ Teichmüller-Tukey lemma and Update induction Before defining Teichmüller-Tukey lemma we need a few definitions: Let A be a type, P : PART: ( PART: (A)) and α, β : PART: (A). We define: β≺α : β≺α := ∃a^A, a ∉αβ = (λ x^A. x ∈α x = a) α∈_≺(P) : α∈_≺(P) := α∈ P ∀β^ PART: (A), β≺αβ∉ P Thus, β≺α stands for β extends α (if β is an update of α) while _≺(P) is the predicate of maximal elements of (P, ≻) (≻ is the reverse of ≺). What we are interested in are predicates of finite character but Theorem <ref> allows us to consider only predicates on lists since there is a correspondence between them. Hence, we will quantify or instantiate schemas on predicate on lists. Let A be a type and T: PART: (A^*), we define the Teichmüller-Tukey lemma, _AT: (∃α^ PART: (A), α∈T) ∃α^ PART: (A), α∈_≺(T) *Notations: denotes the full schema: for all types A and all T : PART: (A^*), ∃α^ PART: (A), α∈T∃α^ PART: (A), α∈_≺(T). _AT denotes the schema specialised in this A and this T. _AT denotes the restriction of the full schema to A and T of a particular shape. For example: _ T is the schema: for all T : PART: (^*), ∃α^ PART: (), α∈T∃α^ PART: (), α∈_≺(T). Moreover, if C_A denotes a particular collection of predicates over lists of A (A is a parameter), then _A C_A denotes the restrictions of the schema to any A type and T : PART: (A^*) that is in C_A. Following an earlier remark, we impose that the finite character predicate we are considering must be inhabited, without this becomes trivially inconsistent. Having defined we now show that we can recover an induction principle by using contraposition and Morgan's rules: Unfolding some definitions, _AT is (∃α^ PART: (A), α∈T) ∃α^ PART: (A), α∈T (∀β^ PART: (A), β≺αβ∉T) Contraposing and pushing some negations: ∀α^ PART: (A), [ (α∈T) ∀β^ PART: (A), β≺αβ∉T ] ∀α^ PART: (A), α∉T We have a sub-formula of the form A B, we have the choice of writing it as A B or B A. The first choice leads to a principle we will call ^co_AT: ∀α^ PART: (A), [ α∈T∃β^ PART: (A), β≺αβ∈T ] ∀α^ PART: (A), α∉T And the second choice leads to an induction principle: ∀α^ PART: (A), [( ∀β^ PART: (A), β≺αβ∉T) α∉T ] ∀α^ PART: (A), α∉T ^co is intuitively an opposite formulation of . The induction principle we obtain seems to express something different. We can push further the negations in order to obtain a positive formulation of it: ∀α^ PART: (A), [( ∀β^ PART: (A), β≺αβ∈T) α∈T ] ∀α^ PART: (A), α∈T And this can be seen as as a generalisation of Berger's update induction <cit.> going from countably-open predicates to arbitrary open predicates. To state update induction, we need to focus on partial functions from to A which we equip with an order: Let A be a type, P : PART: (A_^) and α, β : A_^. 
We define: β≺_N α : β≺_N α := ∃m^, ∃a^A, α(m) = β(m) = a ∀ n^, n ≠ m α(n) = β(n) Like , update induction is originally defined on open predicates but since any open predicate comes from a predicate on lists, we can more primitively state it as follows: Let A be a type and T: PART: (A_^*), we define Update induction, _AT: ∀α^A_^, [( ∀β^A_^, β≺_N αβ∈T) α∈T ] ∀α^A_^, α∈T Contrastingly, we now formally state the dual of that we obtained above: Let A be a type and T: PART: (A^*), we define Generalised update induction, _AT: ∀α^ PART: (A), [( ∀β^ PART: (A), β≺αβ∈T) α∈T ] ∀α^ PART: (A), α∈T Also, we introduce a countable version of , logically dual to : Let A be a type and T: PART: (A_^*), we define the countable Teichmüller-Tukey lemma, _AT: (∃α^A_^, α∈T) ∃α^A_^, α∈_≺_(T) We thus obtain the following table: In particular, since is classically equivalent to the full axiom of choice, is also classically equivalent to the full axiom of choice. § AND ZORN'S LEMMA In this section we analyse precisely the relationships of with Zorn's lemma. We go further than showing their equivalence, we look at which part of (as a schema) is necessary to prove Zorn's lemma and reciprocally. This equivalence result is also a proof that the version of Teichmüller-Tukey lemma we defined captures the full choice. Let A be a type, < a strict order on A, a : A and E, F : PART: (A). Define: E ∈(A) : E ∈(A) := ∀a,b^A, a, b ∈ E (a < b b < a a = b) F ∈(E) : F ∈(E) : F ⊆ E F ∈(A) E ∈(A) : E ∈(A) := (∀F^ PART: (A), F ∈(E) ∃a^A, a ∈ E ∀b^A, b ∈ F b ≤ a) a ∈_<(E) : a ∈_<(E) := a ∈ E ∀b^A, a < b b ∉ E Where ≤ is the reflexive closure of <. is the chain predicate, is the subchain predicate, is the inductive “subset” predicate and _< is simply the maximal element predicate. We choose to express these definitions in terms of predicates over types rather than directly in terms of types, to avoid discussions on proof relevance and stay in a more general setting. If we were proof-irrelevant, instantiating our schemas on predicates over types would be identical to doing it directly on types which would simplify notations and yield the same results. We can now define concisely Zorn's lemma: Let A be a type, < a strict order on A, and E a predicate on A. _A<E is the following statement E ∈∃a^A, a ∈_<(E) The following is an adaptation of a usual set-theoretic proof in our setting. From left to right: fix A a type, < a strict order on A and E : PART: (A) such that E ∈(A). We first show that (E) is of finite character: Let F : PART: (A) such that F ∈(E), we show F ∈(E): let u : A^* such that u ⊂ F, û is thus a chain of E therefore u ∈(E). Let F : PART: (A) such that F ∈(E), by choosing lists of length 2 we can show that F is a subchain of E. Hence (E) ∈. Using _A (E), we get G : PART: (A) such that G ∈((E)). G is a subchain of E, since E is inductive we get g : A such that g ∈ E and ∀aA, a ∈ G a < g. Suppose we have h : A such that g < h and h ∈ E . Let G' := λ a^A. a ∈ G a = h, then we have G' ≺ G, since G ∈((E)), G' ∉(E). On the other side, G' is obviously a chain and G' ⊆ E, therefore G' ∈(E). This is a contradiction, hence g ∈_<(E). From right to left: let T : PART: (A^*). ⊂ is a strict order on PART: ( PART: (A)). Since T is of finite character, a maximal element for ⊂ is also a maximal element for ≻. Hence, what is left to show is that T is inductive and use _ PART: (A) ⊂T to produce a maximal term. Let Q : PART: ( PART: (A)) such that Q ∈(T). Let α := λ a^A. ∃β^ PART: (A), β∈ Q a ∈β. 
By construction, α is an upper bound of Q, let's show that it is indeed in T. Since T is of finite character it suffices to show that for all u : A^*, u ⊂α u ∈ T. Let u : A^* such that u ⊂α. Since u is a finite list, we can enumerate its elements a_0, …, a_n. For all 0 ≤ i ≤ n, let β_i : PART: (A) be such that a_i ∈β_i and β_i ∈ Q. Since Q is chain, there exists 0 ≤ i_0 ≤ n such that for all 0 ≤ i ≤ n, β_i ⊆β_i_0. Thus, u ⊂β_i_0, β_i_0∈T and so u ∈T. Looking more closely at this proof we notice that we have proved a finer result than simply the equivalence. We have shown _A(E)_A<E and _ PART: (A) ⊂T_AT. We can express for a predicate over lists to be of the form (E) in a more syntactic way. Let A be a type and T : PART: (A^*), we say that T is a list of chains, if there exists T' such that: * ϵ∈ T' * u@a ∈ T' and [a] ⋆ v ∈ T' if and only if u ⋆ [a] ⋆ v ∈ T' * u ⋆ [a] ⋆ v ∈ T' implies u ⋆ v ∈ T' * if a ≠ b and u ⋆ [a] ⋆ v ⋆ [b] ⋆ w ∈ T' then for all u', v', w' : A^*, u' ⋆ [b] ⋆ v' ⋆ [a] ⋆ w' ∉ T' and T is the downward closure of T' by ⊆. We denote by _A the collection of lists of chains of A. Let A be a type, < a strict order on A and E : PART: (A), then there exists T ∈_A such that (E) = T. Reciprocally, let A be a type, then for every T ∈_A there exist a strict order < on A and E : PART: (A) such that (E) = T. Proof of the first statement: we inductively define a T' : PART: (A^*). [center=false] 1ε∈ T' [center=false] a ∈ E1[a] ∈ T' [center=false] b ∈ E a < b u@a ∈ T'1u@a@b ∈ T' We easily show that T' satisfies the conditions of the above definition. Let T be the downward closure of T'. Let F ∈(E) and u : A^* such that u ⊂ F. Since F is a chain we can construct a list u' of all elements of u such that u' does not contain twice the same element and is ordered increasingly relative to <. u' is thus in T' hence u is in T. Let F ∈T and a,b : A such that a,b ∈ F. By hypothesis the list [a,b] is in T. There exists u ∈ T' such that [a,b] ⊂ u. Hence a,b ∈u which is a chain. In conclusion F is a subchain of E. Proof of the reciprocal: suppose given a type A with decidable equality and T ∈_A. There exists a T' satisfying the aforementioned conditions. Let E := λ a^A. ∃u^A^*, u ∈ T' a ∈ u. We now must define an ordering on A. Define < a binary relation on A such that a < b := [a,b] ∈ T'. Using last "axiom" of the definition of T' we easily show that it is irreflexive. For transitivity notice that if [a,b],[b,c] ∈ T' then [a,b,c] ∈ T' then [a,c] ∈ T'. Thus, it is a strict ordering on A. Let F ∈(E) and u : A^* such that u ⊂ F. We can assume that u is sorted increasingly relatively to <. Using the same trick used for proving transitivity show that u ∈ T. Let F ∈T and a,b : A such that a,b ∈ F. By hypothesis the list [a,b] is in T therefore, a < b which means that F is indeed a chain. _A_A and _ PART: (A) ⊂T. Hence we deduce the somewhat surprising results _A_A and _ PART: (A) ⊂T. Looking back at the path we took to arrive at this conclusion, the results are quite expected, but looking only at the definition of a list of chains it is quite surprising that restricting this much does not change its power. § In this section we define a choice principle which stands for “Exists a Maximal Partial Choice Function” and a weaker version . It is weaker in the sense that implies but the equivalence is true if we allow excluded middle. We show that is equivalent in its general form to and link to the general class of dependent choice , given by Brede and the first author in <cit.>. 
In particular, and can be seen as refinements of whose strength is more explicitly controlled. Let A,B be types, f,g ∈ A B and P : PART: ( PART: (A × B)), define: g ≺ f : g ≺ f : ∃a^A, a ∉(f) ((g) = (f) ∪ a ) (∀ x^A, x ∈(f) ∃ b^B, (x,b) ∈ f (x,b) ∈ g) f ∈ (P) : f ∈ (P) := f ∈ P ∀g^A B, g ≺ f g ∉ P Let A, B be types and T : PART: ((A × B)^*), _ABT is the statement: (∃α^ PART: ( A × B), α∈T) ∃f^A B, f ∈(T) Let A, B be types, f,g : A → B_ and P : PART: ( PART: (A × B)), define: g ≺ f : g ≺ f : ∃a^A, a ∉(f) ((g) = (f) ∪ a) (∀ x^A, x ∈(f) f(x) = g(x)) f ∈ (P) : f ∈ (P) := (f) ∈ P ∀g^A → B_, g ≺ f (g) ∉ P Since the intuitive meaning is the same we use the symbol ≺ for predicate, for relational partial functions and decidable partial function. Let A, B be types and T : PART: ((A × B)^*), the theorem of existence of a maximal partial choice function _ABT is the statement: (∃α^ PART: ( A × B), α∈T) ∃f^A → B_, f ∈(T) The difference between and lies solely in the "kind" of partial function that is used. Hence, as per the above remark on the differences between relation partial function and decidable partial function, and assuming excluded middle which we denote by . §.§ and Now that we have defined , we show that it is equivalent to hence, and . Let A be a type, T : PART: (A^*) and π^*T the operation that maps T to λ u^(A ×)^*. π(u) ∈ T where π is the canonical projection of (A ×)^* on A^*. Then, _Aπ^*T_AT. Let A, B be types and T : PART: ((A × B)^*) then, _(A × B) T_ABT. _Aπ^*T_AT: let A a type, T : PART: (A^*) and π^*T := λ u^(A ×)^*. π(u) ∈ T. From _Aπ^*T we obtain f ∈ A such that f ∈(π^*T). Define α := (f) and let's show that α∈(T). By construction, α is in T. Suppose β : PART: (A × B) such that β≺α. We can construct a relational partial function g : A such that β = (g). Since g ≺ f, g is not in U hence β is not in T. _(A × B) T_ABT: let A,B types and T : PART: (( A × B)^*). Define Q := λ u^(A × B)^*. (∀a^A, ∀b,b'^B, (a,b) ∈ u (a,b') ∈ u b = b') u ∈ T Notice that Q is not empty, since T is inhabited, ϵ∈ T. From this, we deduce that ϵ∈ Q hence, the empty predicate is in Q. We can now apply _(A× B) Q and get α such that α∈(Q). By construction α is a relational partial function. It follows that it's a maximal relational partial function, thus proving _ABT. can be seen as a projection of . The fact that they are so tightly linked is not surprising as “being a partial function” for a subset of A × B is a property of finite character. §.§ and Introduced in <cit.>, Generalised Dependent Choice (_ABT) is a common generalisation of the axiom of dependent choice and of the Boolean prime ideal theorem. Parameterised by a domain A, a codomain B and a predicate T : PART: ((A× B)^*), it yields dependent choice when A is countable, the Boolean prime ideal theorem when B is two-valued, and the full axiom of choice when T comes as the “alignment” of some relation (see below). To the difference of , asserts the existence of a total choice function, but this to the extra condition of a property of “approximability” of T by arbitrary long finite approximations. To the difference of whose strength is the one of the full axiom of choice, expecting a total choice function makes inconsistent in its full generality. In this section we investigate how restricting to countable A or two-valued B impacts its strength to exactly the same extent as it restricts the strength of . Two such preliminary results are given, but first, let's translate in our setting: Let A,B be types and T : PART: ((A × B)^*). 
For all X : PART: ((A × B)^*) define ϕ(X) := λ u^(A × B)^*. ( u ∈T∀a^A, (∃b^B, (a,b) ∈ u) ∃b^B, u@(a,b) ∈ X ) The A-B-approximation of T denoted T_AB ap is the greatest fixed point of ϕ. We say that T is A-B-approximable if ε∈ T_AB ap. Let A,B be types and T : PART: ((A × B)^*). T has an A-B-choice function if: ∃f^A → B, ∀u^(A × B)^*, u ⊂(f) u ∈ T Let A,B be types and T : PART: ((A × B)^*), _ABT is the statement: if T is A-B-approximable then T has an A-B-choice function. _ B T_ BT Let B be a type and T : PART: ((× B)^*). In order to use , T must be -B-approximable but the T we are given might not be. Thus, we are going to construct T_ : PART: ((× B_)^*) that is -B_-approximable and use to obtain a function that we will prove maximal. For all u : PART: ((A × B_)^*) define u inductively: [center=false] 1ε := ε [center=false] a : A b : B 1u@(a, b) := u@(a, b) [center=false] a : A1u@(a, ) := u By induction define T^n_ : PART: ((× B_)^*): * T^0_ := λ u^(× B_)^*. u = ε * Let T^n+1_ be defined inductively [center=false] u ∈ T_n b : B u@(n+1,b) ∈ T1u@(n+1,b) ∈ T^n+1_ [center=false] u ∈ T_n ∀b^B, u@(n+1,b) ∉ T1u@(n+1, ) ∈ T^n+1_ Now define T_ as the ⊆-downward closure of the union of the T^n_. We must show that T_ is -B_-approximable. By definition T_ = T_. Let n :, v : (× B_)^* such that v ∈ T_ and (∃c^B_, (n,c) ∈ v). By definition, there exists m : and u ∈ T^m_ such that v ⊆ u. If n ≤ m then there exists c : B_ such that (n,c) ∈ u, thus v@(n,c) ⊆ u and v@(n,c) ∈ T_. If n > m then there exists u' ∈ T^n_ such that u ⊆ u'. It is in the proof of this statement that we need excluded middle to show that we always satisfy the hypothesis of one of the induction steps. Hence, v ⊆ u' and we now repeat the same argument. T_ satisfies ϕ and contains ε, thus we can apply _ B_ T_ and get f : → B_ a choice function. What is left to show is that f is a maximal partial function. Let n : such that n ∉(f) and let g : → B_ extending f with (g) = (f) ∪ n. Let us write f_<n for the list [(0, f(0)), …, (n-1, f(n-1))]. f_< n∈ T^n_ and since f_< n+1 is of the form f_< n@(n, ) by case analysis we deduce that ∀b^B, f_< n@(n,b) ∉ T. If (g) ∈T_ then g_< n+1∈ T_ and g_< n+1 = f_< n@(n, g(n)) with g(n) : B. f_< n@(n,g(n)) is thus in T, contradiction. Hence, f is maximal. Let's write 𝐃𝐂 for the axiom of dependent choice. We have: Since _ B T is equivalent to 𝐃𝐂 <cit.> we deduce: 𝐃𝐂_(× B) T For A a type with decidable equality, _A T_ A T Let A be a type and T : PART: ((A ×)^*) A--approximable. Define U := T_A ap, the A--approximable hypothesis guarantees that U is inhabited. Using _A U we get f : A →_ a maximal partial choice function. We show that f must be total, that is that it is impossible that it takes the value . Indeed assume f(a) = for some a : A and consider g : A →_ that extends f with g_0(a) = 0_. We have g ≺ f, thus (g) ∉U. Then, there exists u : (A ×)^* such that u ⊂(g) and u ∉ U. Using the decidability of equality in A, we can find u' such that u = u'@(a,0_) where u' ⊂(f). Symmetrically, by considering the extension g of f obtained by setting g(a) = 1_, there exists v' ⊂(f) such that v'@(a,1_) ∉ U. Since u' ⋆ v' ⊂(f), u' ⋆ v' ∈ U. There must be b : such that (u' ⋆ v')@(a,b) ∈ U. But in both cases (b = 0_ or 1_) there is a sublist (u'@(a,0_) or v'@(a,1_)) that is not in U, contradiction. Hence, f is total. The following definition, taken from <cit.>, describes how to turn a relation on A and B as a predicate over (A ×)^* that filters approximations. Let A and B be types and R a relation on A and B. 
The positive alignment R_⊤ of R is the predicate on (A × B)^* defined by: R_⊤ := λ u.∀ (a,b) ∈ u, R(a,b) Positive alignments can be characterised by the following property. Let A and B be types. We say that T : PART: ((A × B)^*) is downward prime when u ∈ T and v ∈ T implies u ⋆ v ∈ T. We denote by 𝐃_AB the collection of downward prime T : PART: ((A × B)^*). If R is a relation on A and B, its positive alignment is downward prime. Conversely, if T is downward prime, it is the positive alignment of the relation |T| defined by |T|(a,b) := [(a,b)] ∈ T This is because u ⋆ v ∈ R_⊤, that is ∀ (a,b) ∈ u ⋆ v, R(a,b) is equivalent to (∀ (a,b) ∈ u, R(a,b)) (∀ (a,b) ∈ v, R(a,b)), that is to u ∈ R_⊤ v ∈ R_⊤, and, conversely, because u ∈ |T|_⊤, that is ∀ (a,b) ∈ u, [(a,b)] ∈ T, is equivalent, by induction on u, using downward primality at each step, to u ∈ T. Based on the equivalence between _ABR and _ABR_⊤ in <cit.>, we obtain: _ABT for T downward prime characterises the full axiom of choice _ABR, that is ∀ x^A, ∃ y^B, R(a,b) ∃ f^A → B, ∀ x^A, R(a,f(a)). We now show that _ABT is also equivalent to _ABT for T downward prime. For T: PART: ((A ×)^*) downward prime for A with decidable equality, _A B T_A B T. Since T is A-B-approximable, it contains ε, so that T is non-empty. Thus, by _A B T, we get f : A → B_ a maximal partial choice function. We show that f must be total. Indeed, assume a : A such that f(a) =. By A-B-approximability, we can obtain a b such that [(a,b)] ∈T. Let's now consider the function g : A → B_ defined by setting g(a') = b if a = a' and g(a') = f(a') otherwise. We have g ≺ f, thus (g) ∉T. But this contradicts that we can also prove that any u ⊂(g) is in T, that is (g) ∈T. Indeed, by decidability of equality on A, either u has an element of the form (a,b') or not. In the second case, u ⊂(f) and thus u ∈ T. In the first case, u has the form u' ⋆ (a,b') ⋆ u” with u' ∈(f) and u”∈(f), thus u' ∈ T and u”∈ T. Since u ⊂(g), we also have b' = g(a) = b. Then, by downward primality, we get u' ⋆ [(a,b)] ⋆ u”∈ T. For T: PART: ((A × B)^*) downward prime, _A B 𝐃_AB_A B 𝐃_AB. There are two ways to embed a partial function from A to B into a total function: either restrict A to the domain of the function, or extend B into B_, as in Theorem <ref>. We give a proof using the first approach. Let A' be the subset of A such that ∃ b^B, [(a,b)] ∈ T. We show coinductively that if A' is infinite, the restriction of T on A' is A'-B-approximable. First, we do have ε∈ T because T is non empty. Then, assume u ∈ T and a : A' such that (∃b^B, (a,b) ∈ u) (which is possible since A' is supposed infinite). Since a is in A', there is b such that [(a,b)] ∈ T, and by downward primality, u ⋆ (a,b) ∈ T, hence A'-B-approximable by coinduction. Thus, there is a total function f:A' → B such that (f) ∈T, which induces a partial function f' from A B_. It remains to show that f' is maximal. Let a ∉(f), that is such that ∀ b^B, [(a,b)] ∈ T. Then, there is obviously no extension of f' on a that would be in T. It remains to treat the case of A' finite, which can be obtained by (artificially) reasoning on the disjoint sum of A' and , and setting T[(n,p)] := (n=p) on . § CONCLUSION While Brede and the first author <cit.> investigated the general form of a variety of choice and bar induction principles seen as contrapositive principles, this paper initiated the investigation of a general form of maximality and well-foundedness principles equivalent to the axiom of choice. 
One of the surprise was that, up to logical duality, two principles such as Teichmüller-Tukey lemma and Berger's update induction were actually of the very same nature. By seeing all these principles as schemes, we could also investigate how to express Zorn's lemma and Teichmüller-Tukey lemma as mutual instances the one of the other. Finally, by starting investigating how maximality, when applied to functions, relates to totality in the presence of either a countable domain or a finite codomain, we initiated a bridge between maximality and well-foundedness principles and the general family of choice and bar induction principles from <cit.>. The investigation could be continued in at least five directions: * In the articulation between and : assuming an alternative definition of , say ^+, where PART: (A) is represented as a characteristic function from A to , that is, equivalently, as a function from A to _, one would get the following identifications: [ ^+_AT = _Aπ^*T ^+_(A× B)T = _ABT; _AT = ^-_Aπ^*T _(A× B)T = ^-_ABT; ] * In the articulation between a sequential definition of countably-finite character and countably-open predicate, as in ^N_BT and _BT, and a non-sequential definition, as in _ BT and _ BT, similar to the connection between ^𝑝𝑟𝑜𝑑._BT and _ BT in <cit.>. * In the relation between _A BT and _A BT on one side and _ABT on the other side, verifying that the correspondences between _ BT and _ BT, and between _A T and _A T hold, at least classically, in both directions, the same way as they do in the case T downward prime. * In the articulation between and , formulating statements dual to and - and connecting them to the dual of , that is <cit.>, analysing the role of classical reasoning and decidability of the equality on the domain in the correspondences. * In the relation between , , - and other maximality principles than Zorn's lemma, also studying other well-foundedness principles than . In particular, an advantage of and - over is that their more general form is classically equivalent to the axiom of choice while the most general form of is inconsistent.
entry_id: http://arxiv.org/abs/2405.09369v1
published: 2024-05-15 14:20:37
title: Diffusion-based Contrastive Learning for Sequential Recommendation
authors: Ziqiang Cui, Haolun Wu, Bowei He, Ji Cheng, Chen Ma
primary_category: cs.IR
categories: cs.IR
City University of Hong Kong ziqiang.cui@my.cityu.edu.hk McGill University haolun.wu@mail.mcgill.ca City University of Hong Kong boweihe2-c@my.cityu.edu.hk City University of Hong Kong J.Cheng@my.cityu.edu.hk Corresponding author. City University of Hong Kong chenma@cityu.edu.hk Contrastive learning has been effectively applied to alleviate the data sparsity issue and enhance recommendation performance. The majority of existing methods employ random augmentation to generate augmented views of original sequences. The learning objective then aims to minimize the distance between representations of different views for the same user. However, these random augmentation strategies (e.g., mask or substitution) neglect the semantic consistency of different augmented views for the same user, leading to semantically inconsistent sequences with similar representations. Furthermore, most augmentation methods fail to utilize context information, which is critical for understanding sequence semantics. To address these limitations, we introduce a diffusion-based contrastive learning approach for sequential recommendation. Specifically, given a user sequence, we first select some positions and then leverage context information to guide the generation of alternative items via a guided diffusion model. By repeating this approach, we can get semantically consistent augmented views for the same user, which are used to improve the effectiveness of contrastive learning. To maintain cohesion between the representation spaces of both the diffusion model and the recommendation model, we train the entire framework in an end-to-end fashion with shared item embeddings. Extensive experiments on five benchmark datasets demonstrate the superiority of our proposed method. Diffusion-based Contrastive Learning for Sequential Recommendation Chen Ma Received 2023-09-18; accepted 2024-05-07 ================================================================== § INTRODUCTION Sequential recommendation (SR) systems recommend the next item for users based on their historical interactions, which have demonstrated significant value on various online platforms like YouTube[https://www.youtube.com/] and Amazon[https://www.amazon.com/]. One of the major challenges in sequential recommendation is data sparsity <cit.>. The limited and noisy user interaction records impede the model's ability to learn effective user representations, thereby constraining the performance of SR models. Recently, self-supervised contrastive learning has been employed to address this issue, leading to significant advancements <cit.>. Contrastive learning based SR methods leverage the information from different views to enhance the representation learning, thereby improving the performance of the primary task (i.e., predicting the next item). Specifically, existing methods create augmented views for original sequences and aim to maximize the agreement among the differently augmented views of the same sequence. Some methods construct positive contrastive pairs by performing data-level augmentation <cit.>. This involves using random augmentation techniques such as masking, reordering, and cropping <cit.>, or item correlation-based augmentation methods like informative substitution and insertion <cit.> to generate augmented views of the original user behavior sequences. 
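For concreteness, random data-level augmentations of this kind (masking, reordering, cropping) can be sketched as follows; the ratios, the reserved mask id, and the pairing of two independently sampled operators are illustrative choices rather than the exact recipe of any particular cited method.

```python
# Sketch of random data-level sequence augmentations; a sequence is a list of item ids.
import random

MASK_TOKEN = 0  # assumed id reserved for masked positions

def crop(seq, ratio=0.6):
    # keep a random contiguous subsequence covering roughly `ratio` of the items
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    return seq[start:start + n]

def mask(seq, ratio=0.3):
    # replace each item independently with the mask token with probability `ratio`
    return [MASK_TOKEN if random.random() < ratio else item for item in seq]

def reorder(seq, ratio=0.3):
    # shuffle a random contiguous segment covering roughly `ratio` of the items
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    segment = seq[start:start + n]
    random.shuffle(segment)
    return seq[:start] + segment + seq[start + n:]

AUGS = [crop, mask, reorder]

def two_views(seq):
    # two independently augmented views of the same user sequence
    aug1, aug2 = random.choice(AUGS), random.choice(AUGS)
    return aug1(list(seq)), aug2(list(seq))
```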
Furthermore, to reduce the level of disturbance to the original sequence, certain methods <cit.> generate positive augmented views at the model level by performing a forward pass of neural networks on an input sequence twice, each time using a different dropout mask. However, the dropout itself is still a random operation. Recently, certain methods <cit.> have been proposed to integrate both data-level and model-level augmentation, aiming to extract more expressive features or establish distinct contrastive objectives for varying levels of augmentation. Despite the progress made in contrastive learning based sequential recommendation, existing methods overlook two critical factors: * Semantic Consistency. Most existing methods employ random augmentations either at the data level <cit.> or at the model level <cit.> to generate diverse augmented views of the same user sequence. These augmented views are then encouraged to exhibit similarity in the representation space. However, it is crucial to investigate the rationality of the generated positive samples. In fact, random augmentation may compromise semantic consistency and result in inaccurate positive samples. To illustrate, Figure <ref> presents an original user sequence and two augmented views obtained by applying different random mask operations to it. It is evident that these two augmented sequences exhibit significant semantic discrepancies: the first augmented sequence excludes user interaction records related to sports, while the second one overlooks the user's clothing preferences. Maximizing the agreement of representations between two semantically inconsistent sequences can cause the model to disregard significant semantic differences among the sequences, resulting in suboptimal solutions. This issue becomes more pronounced when the user's interaction records are inherently sparse. Therefore, we argue that effective contrastive learning necessitates positive pairs with consistent semantics. * Context Information. Context information plays a crucial role in generating augmented views of user behavior sequences since each interaction demonstrates strong correlations with both preceding and subsequent interactions within the sequence. By harnessing context information, we can capture the sequential dependencies among the user's historical behaviors, thereby offering substantial guidance in identifying suitable replacements for individual items within the original sequence. However, existing methods do not fully leverage the context information during the generation of augmented views. Some researchers <cit.> consider global item correlations for item substitution. However, a correlated item may not necessarily align with the context information and sequential dependency, which are crucial aspects in sequential recommendation. Some methods <cit.> solely focus on the unidirectional sequential dependency. However, in self-supervised learning, we can obtain context information from both preceding and subsequent items to better guide the data augmentation. To address these limitations, a natural approach is to generate augmented sequences that are semantically consistent for each user, leveraging the context information. More specifically, if we can replace specific items at particular positions within a user behavior sequence based on their context information, while maintaining the conditional distribution of those positions, such an augmented sequence represents a desirable “consistent yet distinct” positive contrast view.
However, generating data that maintains consistency with the original context-conditioned distribution is exceedingly challenging due to the highly complex nature of user sequence distributions. Inspired by the remarkable capability of diffusion models <cit.> in learning underlying data distributions, we propose a novel method named Diffusion-based Contrastive Learning for Sequential Recommendation (DiffCLRec). DiffCLRec utilizes the guided diffusion model to generate semantic-consistent augmented samples for better contrastive learning. More specifically, we employ a diffusion model to estimate the conditional probability of items at randomly selected positions, leveraging their context information within the sequence. The preceding and subsequent items at each selected position act as the condition, guiding the diffusion model to generate alternative items that align with the context information. These generated items are then used to replace the original items at the corresponding positions. This approach enables us to obtain semantically consistent augmented sequences with respect to the original sequence, thereby enhancing the effectiveness of contrastive learning. Moreover, to align the embedding space of the diffusion models with that of the SR model, we propose an end-to-end learning framework where the diffusion model and the SR model are jointly trained, sharing the item embeddings. Our main contributions are summarized as follows: * We propose a context-guided diffusion model to generate semantic-consistent augmented sequences with respect to original sequences for contrastive learning. * We propose an end-to-end learning framework to align the representation space of the diffusion model and the SR model, where these two models are jointly trained and share item embeddings. * We conduct extensive experiments on five public benchmark datasets, and the results demonstrate the superiority of our method. § RELATED WORK In this section, we summarize the related works from the following three fields: (1) sequential recommendation, (2) self-supervised contrastive learning, and (3) diffusion models. §.§ Sequential Recommendation Sequential recommendation aims to model a user's preference based on their historical interactions. In the initial phase, researchers treated the evolution of user interests as a Markov process and employed Markov chains to predict the next item for each user <cit.>. With the rapid advancements in deep learning, various techniques such as convolutional neural networks (CNN) and recurrent neural networks (RNN) have been utilized in sequential recommendation <cit.>, leading to remarkable achievements. Subsequently, the introduction of the attention mechanism has significantly enhanced recommendation performance. SASRec <cit.>, for instance, is the pioneering work that employs the self-attention mechanism to model the evolution of user preference. Following that, BERT4Rec <cit.> is proposed to use a bidirectional self-attention encoder to capture context information of the user sequence. Recently, many self-attention-based methods have made improvements to existing approaches, achieving notable progress. §.§ Self-Supervised Contrastive Learning Self-supervised learning is widely used to tackle challenges associated with data sparsity and noise. It improves representation learning by constructing informative supervisory signals from the unlabeled data itself.
Self-supervised learning has been extensively applied in various domains, such as computer vision (CV) <cit.> and natural language processing (NLP) <cit.>. Due to the inherent issues of user behavior sparsity and noisy interaction records in recommendation scenarios, self-supervised contrastive learning has played a crucial role in multiple recommendation tasks <cit.>. When it comes to sequential recommendation, researchers design informative contrastive learning objectives for learning better user representations from historical interactions. S^3-Rec <cit.> introduces a method that incorporates auxiliary self-supervised objectives to learn the correlations among items, attributes, and segments. CL4SRec <cit.> designs three data-level augmentation operators, namely crop, mask, and reorder, which are employed to generate positive pairs and promote the invariance of their representations. However, introducing random perturbations to the already sparse interaction records of a user can alter her original preference, and maximizing the agreements among semantically inconsistent sequences can lead the model to obtain suboptimal solutions. To solve this issue, CoSeRec <cit.> suggests substituting a specific item in the sequence with a similar item. However, the item similarity is measured by simple co-occurrence counts or item embedding distance, neglecting the context information of user behaviors. Later, DuoRec <cit.> proposes a model-level augmentation strategy, which generates positive augmented pairs by forward-passing an input sequence twice with different Dropout masks. However, this approach is also a kind of random augmentation at the model level, lacking the ability to maintain semantic consistency. In addition, ICLRec <cit.> attempts to extract user intent from sequential information and subsequently performs contrastive learning between user representations and intent representations. ECL-SR <cit.> designs different contrastive learning objectives for augmented views at different levels. MCLRec <cit.> further combines data-level and model-level augmentation strategies, which applies random data augmentation proposed by CL4SRec to the input sequence and then feeds the augmented data into MLP layers for model-level augmentation. However, the design intentions of these methods do not reflect the constraints on semantic consistency in the augmented views, which can potentially lead to the generation of incorrect positive samples. In addition, they do not take into account context information, which is important for preserving semantic consistency. §.§ Diffusion Models Diffusion Models have gained significant prominence as a dominant approach in diverse generative tasks, such as image synthesis <cit.> and text generation <cit.>. They demonstrate superior generative capabilities compared to alternative models such as GANs <cit.> and VAEs <cit.>, which can be attributed to their precise approximation of the underlying data generation distribution and provision of enhanced training stability. Recently, diffusion models have been employed in the field of sequential recommendation. Some methods <cit.> directly utilize diffusion models as the fundamental architecture for sequential recommendation. Specifically, these methods employ a left-to-right unidirectional Transformer to extract guidance signals for the generation of the next item. In contrast, other approaches <cit.> adopt a two-stage paradigm for data augmentation.
Initially, they train a diffusion model to generate pseudo user interactions aimed at expanding the original user sequences. These augmented datasets are then used to train downstream recommendation models. It should be noted that they solely rely on the unidirectional information of user behavior sequences as the diffusion guidance. Different from these existing methods, our approach leverages the diffusion model for contrastive learning. Specifically, we employ the diffusion model to generate semantic-consistent augmented views of the original sequences and maximize the agreement among different views from the same user. To the best of our knowledge, this is the first instance of employing diffusion models for contrastive learning in the field of sequential recommendation. § PRELIMINARY In this section, we first define our problem statement, followed by introducing basic knowledge of diffusion models. §.§ Problem Statement The primary objective of sequential recommendation is to provide personalized recommendations for the next item to users, leveraging their historical interactions. We denote the user and item sets as 𝒰 and 𝒱, respectively. Each user u∈𝒰 has a chronological sequence of interacted items 𝐬^u = [v^u_1, v^u_2, ..., v^u_|𝐬^u|], where v^u_t indicates the item that u interacted with at step t, and |𝐬^u| is the number of interacted items of user u. The goal is to predict the next item at time step |𝐬^u|+1 according to 𝐬^u, which can be formulated as: max_v_i ∈𝒱P(v_|𝐬^u|+1=v_i|𝐬^u), where the probability P represents the likelihood of item v_i being the next item, conditioned on 𝐬^u. §.§ Diffusion Models We provide an introduction to the fundamental principles of diffusion models based on DDPM <cit.>. Typically, a diffusion model consists of forward and reverse processes. Given a data point sampled from a real-world data distribution 𝐱_0 ∼ q(𝐱_0), the forward process gradually corrupts 𝐱_0 into a standard Gaussian noise 𝐱_T ∼𝒩(0, 𝐈), which is formulated as: q(𝐱_1:T|𝐱_0) = ∏_t=1^T q(𝐱_t|𝐱_t-1), q(𝐱_t|𝐱_t-1) = 𝒩(𝐱_t;√(1-β_t)𝐱_t-1,β_t 𝐈), where β_t ∈ (0,1) is the variance scale at time step t. After the completion of the forward process, the reverse denoising process aims to gradually reconstruct the original data 𝐱_0. This is achieved by sampling from 𝐱_T using a learned diffusion model, which can be formulated as: p_θ(𝐱_0:T) =p(𝐱_T) ∏_t=1^T p_θ(𝐱_t-1|𝐱_t), p_θ(𝐱_t-1|𝐱_t) = 𝒩(𝐱_t-1; μ_θ (𝐱_t, t),Σ_θ(𝐱_t,t) ). Training can be performed by optimizing the variational lower bound of log p_θ(𝐱_0): ℒ_vlb(𝐱_0) = 𝔼_q(𝐱_1:T|𝐱_0) [ logq(𝐱_T|𝐱_0)/p_θ (𝐱_T) + ∑_t=2^T logq(𝐱_t-1|𝐱_0,𝐱_t)/p_θ (𝐱_t-1|𝐱_t)-log p_θ(𝐱_0|𝐱_1) ]. <cit.> further propose to utilize the KL divergence for more efficient training, which directly compares p_θ(𝐱_t-1|𝐱_t) against forward process posteriors, resulting in a mean-squared error loss: ℒ_simple(𝐱_0) = ∑_t=1^T 𝔼_q(𝐱_t | 𝐱_0) ||μ_θ(𝐱_t, t) - μ̂(𝐱_t,𝐱_0) || ^2 , where μ_θ(𝐱_t, t) is the predicted mean of p_θ(𝐱_t-1 | 𝐱_t) computed by a neural network, and μ̂(𝐱_t,𝐱_0) is the mean of the posterior q(𝐱_t-1 | 𝐱_t,𝐱_0), which is tractable when conditioned on 𝐱_0. § METHOD This section begins with an introduction to our base model for sequential recommendation. Next, we present our core method DiffCLRec, which incorporates the semantic-consistent contrastive learning framework and the context-guided diffusion model, as shown in Figure <ref>. Finally, we conclude this section by presenting an end-to-end training objective.
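To make the preliminaries concrete, the following minimal NumPy sketch (our own illustration, not the authors' implementation) shows the closed-form forward noising q(𝐱_t|𝐱_0) and a Monte-Carlo estimate of the simplified training loss; the linear variance schedule, the number of steps, and the placeholder denoiser are all assumptions, and a real model would predict 𝐱_0 (or the posterior mean) with a neural network.

import numpy as np

rng = np.random.default_rng(0)

T = 1000                                    # number of diffusion steps (assumption)
betas = np.linspace(1e-4, 0.02, T)          # linear variance schedule (a common choice)
alpha_bar = np.cumprod(1.0 - betas)         # \bar{alpha}_t = prod_s (1 - beta_s)

def q_sample(x0, t):
    """Closed-form forward noising q(x_t | x_0)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

def f_theta(x_t, t):
    """Placeholder denoiser predicting x_0; a real model would be a neural network."""
    return x_t / np.sqrt(alpha_bar[t])       # crude inversion, for illustration only

def simple_loss(x0):
    """Monte-Carlo estimate of the simplified (mean-squared error) training loss."""
    t = int(rng.integers(1, T))
    x_t = q_sample(x0, t)
    return np.mean((f_theta(x_t, t) - x0) ** 2)

x0 = rng.standard_normal((8, 64))            # a batch of 8 hidden vectors of size 64
print("L_simple estimate:", simple_loss(x0))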
§.§ Sequential Recommendation Model Similar to previous studies, our proposed contrastive learning method serves as an auxiliary task for improving the performance of the SR model. We adopt SASRec <cit.> as our base SR model, which comprises the embedding layer, the transformer layer, and the prediction layer. Note that while our proposed method is implemented based on SASRec, it can be applied to any embedding-based SR models. §.§.§ Embedding Layer We create an item embedding matrix 𝐌∈ℝ^|𝒱| × d for the item set, where d represents the latent dimensionality. Given a user sequence 𝐬=[v_1, v_2,...,v_n] where n is the max sequence length, we can obtain the input embedding vectors 𝐞=[𝐞_1,𝐞_2,...,𝐞_n] ∈ℝ^n× d with respect to 𝐬. In addition, we construct a position embedding matrix 𝐏∈ℝ^n × d. For time step t of the sequence, we add the item embedding 𝐞_t and the corresponding position embedding 𝐩_t, resulting in the final input vector at step t, 𝐡^0_t = 𝐞_t + 𝐩_t, and 𝐡^0=[𝐡^0_1, 𝐡^0_2,...,𝐡^0_n] denotes the representation of input sequence 𝐬. §.§.§ Transformer Layer Following the embedding layer, the input vector 𝐡^0 is passed through an L-layer Transformer to update the representation of each item in the sequence. Each Transformer (Trm) block consists of a self-attention layer and a feed-forward network layer, which can be formulated as: 𝐡^L = Trm(𝐡^0 ) , where 𝐡^L ∈ℝ^n × d denotes the hidden states of the last layer, and the vector of the last position 𝐡^L_n ∈ℝ^d is used to represent the whole user sequence. §.§.§ Prediction Layer For next item prediction, we first calculate the similarities between the user sequence representation vector 𝐡^L_n and item embedding vectors through an inner-product as: 𝐫 = 𝐡^L_n 𝐌^T, where 𝐫∈ℝ^|𝒱|, and r_i is the likelihood of v_i being the next item. The items are then ranked based on 𝐫 to generate the top-k recommendation list. Following SASRec <cit.>, we adopt the Binary Cross-Entropy (BCE) loss with negative sampling to train our model. ℒ_rec = - ∑_u∈𝒰∑_t=1^n log(σ(𝐡^L_t·𝐞_v_t+1)) + log(1- σ(𝐡^L_t·𝐞_v_j^-)), where we pair each ground-truth item v_t+1 with one negative item v_j^- that is randomly sampled from the item set. §.§ Semantic-Consistent Contrastive Learning In this section, we introduce the proposed semantic-consistent contrastive learning framework. As mentioned above, existing methods neglect the context information of the user behavior sequence, thereby potentially compromising the semantic consistency. In contrast to previous methods, we propose to utilize context information as a guidance for generating semantically consistent augmented views, which are used for contrastive learning. Our core idea comprises randomly selecting items at specific positions within a sequence and replacing them with context-aligned and semantic-consistent items. Specifically, given a user sequence 𝐬^u, we randomly select a subset of items with a pre-defined ratio ρ, and the position indices of the selected items within the sequence are recorded as 𝐚_1. Next, we employ the context-guided diffusion model to generate items, which are used to replace the 𝐚_1 positions of the original sequence 𝐬^u, resulting in the augmented sequence 𝐬^u_1. That is, the sole distinction between the original sequence 𝐬^u and the augmented sequence 𝐬^u_1 is the replacement of selected items from 𝐬^u with context-aligned items generated by the diffusion model. The details of the proposed diffusion model will be introduced in Sec. <ref>.
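As a sketch of the prediction layer and the BCE objective above, the snippet below scores items by an inner product with the last hidden state and pairs each ground-truth item with one randomly sampled negative. It is illustrative NumPy code under assumed shapes (random hidden states and embeddings), not the released implementation.

import numpy as np

rng = np.random.default_rng(0)
num_items, d, n = 1000, 64, 50                    # |V|, embedding size, max sequence length (assumptions)

M = rng.standard_normal((num_items, d)) * 0.02    # item embedding matrix M
h = rng.standard_normal((n, d))                   # hidden states h^L_1 .. h^L_n from the Transformer
targets = rng.integers(0, num_items, size=n)      # ground-truth next items v_{t+1} at each step

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Top-k recommendation: rank all items against the last hidden state.
scores = h[-1] @ M.T
top_k = np.argsort(-scores)[:10]

# BCE loss with one randomly sampled negative per positive (cf. the L_rec equation).
negatives = rng.integers(0, num_items, size=n)
pos_logit = np.sum(h * M[targets], axis=1)
neg_logit = np.sum(h * M[negatives], axis=1)
loss_rec = -np.mean(np.log(sigmoid(pos_logit) + 1e-12)
                    + np.log(1.0 - sigmoid(neg_logit) + 1e-12))
print(top_k, loss_rec)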
By repeating a similar operation, we can obtain another augmented sequence 𝐬^u_2 corresponding to another set of randomly selected indices 𝐚_2. Then, we adopt the contrastive loss to maximize the agreement between two different augmented views of the same user historical sequence and minimize the difference between the augmented sequences derived from different users. For 𝐬^u_1 and 𝐬^u_2 of each user u, we first get their embeddings and then input them to the transformer layer to generate their representations 𝐡̃^u_1 and 𝐡̃^u_2 according to Eq. (<ref>). We consider 𝐡̃^u_1 and 𝐡̃^u_2 as the positive pair, while the remaining 2(N-1) augmented representations within the same batch are treated as negative samples 𝐇^-, where N is the batch size. We employ the inner product to assess the representation similarity. Finally, we define the loss function ℒ_cl in a similar manner to the widely used cross-entropy loss as follows: ℒ_cl^u = -logexp(sim(𝐡̃^u_1, 𝐡̃^u_2))/exp(sim(𝐡̃^u_1, 𝐡̃^u_2))+ ∑_𝐡̃^- ∈𝐇^-exp(sim(𝐡̃^u_1, 𝐡̃^-)) , where sim(·) denotes the inner product of vectors. §.§ Context-Guided Diffusion Model This section presents the architecture of our proposed diffusion model (as shown in Figure <ref> (right)), and how to use it for generating semantic-consistent augmented sequences. A direct use of the diffusion model for item generation cannot guarantee semantic consistency, as the generation does not take into account the context information. To solve this issue, we propose a context-guided diffusion model. Specifically, we utilize both preceding and succeeding items to capture context information, which acts as an important guidance for item generation. §.§.§ Noising at Partial Positions At the start of the forward process, we incorporate a Markov transition from discrete input items to a continuous space using the embedding map, following Diffusion-LM <cit.>. This transition is parametrized by q_ϕ(𝐳_0|𝐬) = 𝒩(𝐞, β_0 𝐈), where 𝐞 represents the embedding vectors corresponding to the sequence 𝐬 as defined in Section <ref>. This transformation allows us to integrate the discrete sequence into the standard forward process. At each forward step q(𝐳_t|𝐳_t-1), we incrementally add Gaussian noise to the hidden states of the previous time step 𝐳_t-1 to obtain 𝐳_t. Unlike other diffusion models, we selectively apply noise to items at randomly chosen positions with a certain ratio ρ instead of the entire sequence, while retaining the items at the remaining positions. This approach allows the hidden vectors at the remaining positions to act as the conditional guidance during the reverse phase, enabling our model to utilize context information for controlling item generation. §.§.§ Denoising with Context Condition The objective of the denoising process is to gradually remove noise starting from 𝐳_T and ultimately recover the original 𝐳_0, which is formulated as: p_θ(𝐳_0:T) =p(𝐳_T) ∏_t=1^T p_θ(𝐳_t-1|𝐳_t). We use a learnable model f_θ(𝐳_t, t) to model the reverse process at each step: p_θ(𝐳_t-1|𝐳_t) = 𝒩(𝐳_t-1; μ_θ (𝐳_t, t),Σ_θ(𝐳_t,t)). Following Diffusion-LM <cit.>, we incorporate a trainable rounding step p_θ(𝐬|𝐳_0) = ∏_i=1^n p_θ (v_i|z_i) in the reverse process to map the hidden states back to the embedding space, where p_θ (v_i|z_i) is a softmax distribution. Following previous methods <cit.>, we set Σ_θ(𝐳_t,t) to untrained time-dependent constants. Note that only the hidden vectors corresponding to items belonging to the selected positions are subjected to the addition of noise during the forward process.
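The contrastive objective ℒ_cl above can be computed directly from in-batch representations; the sketch below (illustrative code assuming already-computed sequence representations with made-up values) treats the two augmented views of the same user as the positive pair and the other 2(N-1) in-batch views as negatives, with an inner-product similarity.

import numpy as np

rng = np.random.default_rng(0)
N, d = 4, 64                                   # batch size and hidden size (assumptions)

h1 = rng.standard_normal((N, d)) * 0.1         # representations of the first augmented views
h2 = rng.standard_normal((N, d)) * 0.1         # representations of the second augmented views
H = np.concatenate([h1, h2], axis=0)           # all 2N augmented views in the batch

def cl_loss(u):
    """Contrastive loss for user u following the L_cl equation: inner-product similarity,
    positive pair (h1_u, h2_u), the other 2(N-1) in-batch views as negatives."""
    pos = np.exp(h1[u] @ h2[u])
    mask = np.ones(2 * N, dtype=bool)
    mask[[u, N + u]] = False                   # exclude the two views of user u itself
    neg = np.exp(H[mask] @ h1[u]).sum()
    return -np.log(pos / (pos + neg))

loss_cl = np.mean([cl_loss(u) for u in range(N)])
print(loss_cl)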
Therefore, during the reverse process, the hidden vectors of items at the remaining positions (i.e., context information) can serve as the condition. Here, we require a model that can effectively leverage the context information to learn the conditional probability distribution for selected positions, thereby guiding the item generation. The bidirectional Transformer <cit.> offers an exciting alternative for achieving this goal. Equipped with the bidirectional self-attention mechanism and position encoding, the bidirectional Transformer can capture a more comprehensive understanding of the context from both left and right items. Therefore, we employ a bidirectional Transformer to model f_θ. The architecture of our proposed encoder model is shown in Fig. <ref> (c), which is constructed by stacking L^' blocks together. Each block consists of a multi-head self-attention layer and a position-wise feed-forward network. To train our model, we compute the variational lower bound following previous methods <cit.>. As we have incorporated the embedding step and rounding step, the variational lower bound loss ℒ_vlb introduced in Eq. (<ref>) now becomes as follows: ℒ_vlb^' = 𝔼_q_ϕ(𝐳_0|𝐬)[ ℒ_vlb(𝐳_0) + log q_ϕ(𝐳_0|𝐬) - log p_θ(𝐬|𝐳_0) ]. Following previous methods <cit.>, this training objective can be further simplified as: ℒ_vlb^s = ∑_t=2^T||𝐳_0 - f_θ(𝐳_t,t) ||^2 + || 𝐞 - f_θ(𝐳_1,1)||^2 - log p_θ (𝐬|𝐳_0) →∑_t=2^T||𝐳̃_0 - f̃_θ(𝐳_t,t) ||^2 + || 𝐞̃ - f̃_θ(𝐳_1,1)||^2 - log p_θ (𝐬|𝐳_0) , where 𝐳̃_0, f̃_θ, and 𝐞̃ denote the part of 𝐳_0, f_θ, and 𝐞 corresponding to selected position indices, respectively. Note that while we only calculate the loss with respect to the selected positions in the first term, the reconstruction of the selected items 𝐳̃_0 also takes into account the remaining items (i.e., both left and right items) of the sequence due to the bidirectional encoder. §.§.§ Generating Augmented Views In the generation phase, we aim to generate context-aligned items for arbitrary position indices τ, given the user sequence 𝐬. Specifically, we randomly sample 𝐳̃_T ∼𝒩(0, 𝐈) to replace the portions of the sequence embeddings 𝐞 corresponding to the selected position indices to obtain 𝐳_T. Then, we can iterate the reverse procedure until we reach the initial state 𝐳_0. Following DiffuSeq <cit.>, for each step, we adopt the following operations: 1) performing rounding on 𝐳_t to map it back to item embedding space; 2) replacing the part of recovered 𝐳_t-1 that does not belong to selected positions τ with the original item embeddings. Note that due to the different initial random noise, the generated items with the same context information will exhibit a certain level of diversity. Finally, through the substitution of generated items into the corresponding positions of the original sequence, an augmented sequence is obtained, ensuring semantic consistency. §.§ End-to-End Training As both the diffusion model and the recommendation model rely on item embeddings, employing separate sets of item embeddings would result in a misalignment between the representation spaces of the diffusion and recommendation models. As a result, the recommendation model cannot directly leverage the items generated through the diffusion model. To overcome this challenge, we propose to share item embeddings between the diffusion model and the SR model, and train the full framework in an end-to-end manner.
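A minimal sketch of the generation procedure described under "Generating Augmented Views": noise is sampled only for the selected positions, and after every reverse step the non-selected positions are re-anchored to the original item embeddings so that the context keeps guiding the denoiser. The denoiser, the step rule, and the embedding table below are placeholders (in the paper the denoiser is the bidirectional Transformer f_θ), so this is an illustration of the control flow rather than the authors' code.

import numpy as np

rng = np.random.default_rng(0)
n, d, T = 50, 64, 1000                          # sequence length, hidden size, diffusion steps (assumptions)

e = rng.standard_normal((n, d)) * 0.1           # original item embeddings of the user sequence
tau = rng.choice(n, size=int(0.2 * n), replace=False)   # selected positions (ratio rho = 0.2)
keep = np.ones(n, dtype=bool)
keep[tau] = False                               # positions that keep their original embeddings

def f_theta(z_t, t):
    """Placeholder for the bidirectional-Transformer denoiser predicting z_0."""
    return 0.5 * z_t                            # illustration only

def round_to_items(z0, item_emb):
    """Rounding step: map each recovered vector to its most similar item."""
    return np.argmax(z0 @ item_emb.T, axis=1)

z = e.copy()
z[tau] = rng.standard_normal((len(tau), d))     # z_T: Gaussian noise at the selected positions
for t in range(T, 0, -1):
    z0_hat = f_theta(z, t)                      # predict z_0 from the current state
    noise = rng.standard_normal(z.shape) if t > 1 else 0.0
    z = z0_hat + 0.01 * noise                   # crude reverse step, for illustration only
    z[keep] = e[keep]                           # re-anchor the context positions at every step

item_emb = rng.standard_normal((1000, d)) * 0.1           # shared item embedding table (assumption)
augmented_items = round_to_items(z[tau], item_emb)        # generated items for the selected positions
print(augmented_items)

Because the item embedding table is shared with the recommendation model, the views generated this way live in the same representation space as the SR model, which is what allows the joint objective formulated next.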
The overall objective function is therefore formulated as: ℒ = ℒ_rec + αℒ_cl + βℒ_vlb^s, where α and β are hyperparameters that determine the weightings. § EXPERIMENTS §.§ Experimental Settings §.§.§ Datasets We conduct experiments on five real-world public datasets, including MovieLens, Beauty, Sports, Toys, and Yelp. The statistics of these datasets are shown in Table <ref>. These datasets encompass a wide range of application scenarios. The MovieLens[https://grouplens.org/datasets/movielens/] dataset is a stable benchmark dataset which collects movie ratings provided by users. The Beauty, Sports, and Toys datasets are obtained from Amazon[http://jmcauley.ucsd.edu/data/amazon/], one of the largest e-commerce platforms globally. Yelp is a renowned dataset primarily used for business recommendation. We adopt the same preprocessing method as employed in numerous previous studies <cit.>, filtering out items and users with fewer than five interaction records. §.§.§ Evaluation Metrics To evaluate the performance of our model and baseline models, we adopt the widely used evaluation metrics Hit Rate (HR) and Normalized Discounted Cumulative Gain (NDCG). We adopt the common leave-one-out strategy that employs the last and second-to-last interactions for testing and validation, respectively. The remaining interactions are used for training. To evaluate performance, we rank all items in the item set and calculate the metrics over the full ranking. §.§.§ Baseline Methods To verify the effectiveness of our method, we compare it with the following baseline methods: * BPR-MF <cit.>. It employs matrix factorization to model users and items. It uses the pairwise Bayesian Personalized Ranking (BPR) loss to optimize the model. * SASRec <cit.>. It is the first work to utilize the self-attention mechanism for sequential recommendation. * Caser <cit.>. It utilizes a CNN-based approach to model high-order relationships in the context of sequential recommendation. * BERT4Rec <cit.>. It employs the BERT <cit.> framework to capture the context information of user behaviors. * S^3-Rec <cit.>. It leverages self-supervised learning to uncover the inherent correlations within the data. However, its primary emphasis lies in integrating the user behavior sequence and corresponding attribute information. * CL4SRec <cit.>. It proposes three random augmentation operators to generate positive samples for contrastive learning. * DuoRec <cit.>. It combines a Dropout-based model-level augmentation and a novel sampling strategy for choosing hard positive samples. * MCLRec <cit.>. It integrates data-level and model-level augmentation strategies, utilizing CL4SRec's random data augmentation for the input sequence and employing MLP layers for model-level augmentation. * DiffuASR <cit.>. It leverages the diffusion model to generate pseudo items and concatenates them at the beginning of raw sequences. Then, the extended sequences are fed into a downstream recommendation model for next item prediction. * DreamRec <cit.>. It directly utilizes the diffusion model to generate the next item with the guidance of historical interactions. §.§.§ Implementation Details We implement all baseline methods according to their released code. The embedding size for all methods is set to 64. Our method utilizes a Transformer architecture for the SR model, comprising 2 layers and 2 attention heads. Meanwhile, the diffusion model employs a bidirectional Transformer with 1 layer and 2 attention heads.
The total number of diffusion steps is set to a fixed value of 1000. We explore the substitution ratio ρ within the set {0.1, 0.2, 0.4, 0.6}. The Dropout rate is chosen from the set {0.1, 0.2, 0.3, 0.4, 0.5} for both the embedding layer and the hidden layers. Additionally, we set the training batch size to 256 and employ the Adam optimizer with a learning rate of 0.001. Following SASRec <cit.>, we set the max sequence length to 50 for the Beauty and Sports datasets, and to 200 for the MovieLens dataset. For sequences with fewer interactions than the maximum sequence length, we pad them with a padding token to match the maximum sequence length. It is noteworthy that for the recommendation task, the majority of baseline models employ the negative sampling strategy during the training process to avoid issues with efficiency and memory overflow due to excessively large item sets. Specifically, one negative sample is randomly selected for each positive sample, and optimization is performed using the BCE loss. However, certain baseline methods (DuoRec and MCLRec) do not perform negative sampling. Instead, they compute the probability of each item across the whole item set using the softmax function. This approach is typically impractical when the item set is considerably large. In our initial experiments, we observed that these two training strategies can lead to significant differences in the recommendation task, thereby impeding our ability to accurately assess the effectiveness of the contrastive learning task. Therefore, we need to standardize the training strategy for the recommendation task across all methods to ensure a fair comparison that is exclusively focused on evaluating the impact of contrastive learning. Specifically, we utilize the BCE loss with the negative sampling strategy (defined in Eq. (<ref>)) for all methods. §.§ Experimental Results We compare our model with 10 baseline models, which can be divided into three categories: contrastive learning based methods (S^3-Rec, CL4SRec, DuoRec, MCLRec), diffusion based methods (DiffuASR, DreamRec) and others (BPR-MF, Caser, SASRec, BERT4Rec). We run each experiment five times and report the average. The results across all datasets are presented in Table <ref>. Based on the results, we can make the following observations: * Our method consistently outperforms all the baseline models on all five datasets. Additionally, we conduct a paired t-test, which reveals that our method achieves significantly better performance than the strongest baseline model, with a significance level of 0.01. * Classical methods (BPR-MF, Caser, SASRec, BERT4Rec) that do not employ self-supervised learning tend to yield poorer performance than the methods that integrate data augmentation and self-supervised contrastive learning. This suggests that contrastive learning, serving as an auxiliary task, facilitates more comprehensive learning of user behavior sequence representations in the presence of limited data. * Our method consistently outperforms other contrastive learning based methods across all metrics on all datasets. CL4SRec introduces three random data augmentation operations for contrastive learning based on SASRec, achieving better performance than SASRec. DuoRec and MCLRec enhance the contrastive learning further by incorporating model-level learnable augmentation, resulting in certain improvements compared to CL4SRec. However, all these methods use random augmentation operations which introduce uncertainty and neglect semantic consistency.
Instead, our model leverages context information to preserve semantic consistency, leading to superior performance. * Our model performs significantly better than existing diffusion model based SR methods. DiffuASR utilizes the diffusion model for data augmentation, which generates pseudo items from right to left, aiming to extend original sequences. This augmentation strategy resembles the reverse multi-step sequential recommendation task, which is extremely challenging and prone to introducing noisy data. Furthermore, they feed the extended sequences to the recommendation model, leading to error accumulation. In contrast, DreamRec directly utilizes the diffusion model to generate the next item based on the historical items, making it a relatively strong baseline. Different from these diffusion based methods, our method utilizes the guided diffusion model to generate semantically consistent augmented views for better contrastive learning. With the context guidance, our model can generate alternative items that adhere to the same conditional distribution as the original items. The results show that DiffCLRec consistently outperforms both diffusion model based baselines on all datasets. §.§ Ablation Study In this section, we demonstrate the effectiveness of our model by comparing its performance with three different versions (depicted in Figure <ref>). Our model is built upon the SASRec backbone. The term “SASCL” in the diagram represents the incorporation of contrastive learning on top of SASRec. It's worth noting that SASCL only introduces contrastive learning based on random substitution. In this approach, selected positions are replaced with random items to generate augmented sequences for contrastive learning. This is equivalent to removing the core diffusion module from our model. We conduct experiments on three datasets (Beauty, Sports, ML-1m). The experimental results reveal that SASCL exhibits improvements compared to the base model on the relatively sparse datasets (Beauty and Sports), suggesting that the contrastive learning paradigm based on random augmentation partially mitigates the data sparsity issue. However, it does not achieve the same level of performance as the base model on the ML-1m dataset. Building upon these findings, we introduce DiffCLRec. In this approach, the augmentation for contrastive learning is no longer random but is generated using our proposed diffusion model. The results demonstrate that the introduction of the diffusion model for generating augmented data leads to the best performance across all three datasets in terms of all metrics. This suggests that the diffusion module effectively utilizes context information to generate higher-quality augmented samples for contrastive learning, thereby enhancing the overall performance. §.§ Hyperparameter Study In this section, we investigate the impacts of two important hyperparameters (ρ and β) on the HR and NDCG metrics on the Beauty and ML-1m datasets. The results are shown in Figure <ref>. As the substitution ratio ρ gradually increases from 0 to 0.6, the model's performance initially improves and then declines. This phenomenon can be attributed to the negative effect of excessive ρ on the information within the original sequence, resulting in significant disparities among the augmented views. Note that when ρ is set to 0, no replacements are made, indicating the absence of contrastive learning, which results in the poorest performance.
Therefore, to enhance the effectiveness of contrastive learning, it is advisable to select an appropriate ρ for data augmentation, approximately 0.2 for Beauty and approximately 0.1 for MovieLens. β controls the weight of the diffusion model loss in the total loss. As β varies, the values of HR and NDCG show minimal changes. The overall trend is still an initial increase followed by a slight decline. The model attains optimal performance on Beauty with a β value of 0.2 and on MovieLens with a β value of 0.3. §.§ Robustness Analysis In order to further examine the robustness of our method in the presence of sparse data, such as limited historical behaviors, we categorize the user behavior sequences into three groups based on their length and analyze the evaluation results for each group. Figure <ref> presents the comparison results for the Beauty dataset. Through a comparison between our model and the representative baseline models including the strongest baseline MCLRec, the following observations can be made: 1) The performance of all these models deteriorates as the interaction frequency decreases, indicating the influence of data sparsity on model performance. 2) Our model consistently outperforms the baseline models in each user group. Even for the group with the most limited data (sequence length is 5), our model still maintains a significant lead, showcasing the positive influence of our context-aware diffusion-based contrastive learning approach in addressing data sparsity. This finding also underscores the robustness of our model across various degrees of data sparsity in user sequences. § CONCLUSION In this paper, we propose a diffusion-based contrastive learning method for sequential recommendation. To address the semantic inconsistency issue, we employ a context-guided diffusion model to generate semantic-consistent augmented sequences with respect to original sequences for contrastive learning. Furthermore, to align the representation space of the diffusion model and the SR model, we propose an end-to-end learning framework to train our model. Extensive experiments on five benchmark datasets are conducted. The results demonstrate the advantages of our proposed method over existing baselines.
http://arxiv.org/abs/2405.10170v1
20240516150756
A Mess of Memory System Benchmarking, Simulation and Application Profiling
[ "Pouya Esmaili-Dokht", "Francesco Sgherzi", "Valeria Soldera Girelli", "Isaac Boixaderas", "Mariana Carmin", "Alireza Momeni", "Adria Armejach", "Estanislao Mercadal", "German Llort", "Petar Radojkovic", "Miquel Moreto", "Judit Gimenez", "Xavier Martorell", "Eduard Ayguade", "Jesus Labarta", "Emanuele Confalonieri", "Rishabh Dubey", "Jason Adlard" ]
cs.AR
[ "cs.AR" ]
The Memory stress (Mess) framework provides a unified view of memory system benchmarking, simulation and application profiling. The Mess benchmark provides a holistic and detailed memory system characterization. It is based on hundreds of measurements that are represented as a family of bandwidth–latency curves. The benchmark increases the coverage of all the previous tools and leads to new findings in the behavior of the actual and simulated memory systems. We deploy the Mess benchmark to characterize Intel, AMD, IBM, Fujitsu, Amazon and NVIDIA servers with DDR4, DDR5, HBM2 and HBM2E memory. The Mess memory simulator uses the bandwidth–latency concept for memory performance simulation. We integrate Mess with widely-used CPU simulators enabling modeling of all high-end memory technologies. The Mess simulator is fast, easy to integrate and it closely matches the actual system performance. By design, it enables a quick adoption of new memory technologies in hardware simulators. Finally, the Mess application profiling positions the application in the bandwidth–latency space of the target memory system. This information can be correlated with other application runtime activities and the source code, leading to a better overall understanding of the application's behavior. The current Mess benchmark release covers all major CPU and GPU ISAs, x86, ARM, Power, RISC-V, and NVIDIA's PTX. We also release as open source the ZSim, gem5 and OpenPiton Metro-MPI integrated with the Mess memory simulator for DDR4, DDR5, Optane, HBM2, HBM2E and CXL memory expanders. The Mess application profiling is already integrated into a suite of production HPC performance analysis tools. § INTRODUCTION The importance of the main memory in the overall system's design <cit.> drives significant effort for memory system benchmarking, simulation, and memory-related application profiling. Although these three memory performance aspects are inherently interrelated, they are analyzed with distinct and decoupled tools. Memory benchmarks typically report the maximum sustainable memory bandwidth <cit.> or performance of the bandwidth-limited application kernels <cit.>. This is sometimes complemented with latency measurements in unloaded memory systems <cit.> or for a small number of memory-usage scenarios <cit.>. Memory simulators determine the memory system response time for a given traffic. Simple simulators model memory with a fixed latency, or calculate its service time based on queueing theory or simplified DDR protocols <cit.>. Dedicated cycle-accurate memory simulators consider detailed memory device sequences and timings <cit.>. Application profiling tools determine whether applications are memory bound based on the memory access latency <cit.>, the position in the Roofline model <cit.> or the memory-related portion of the overall CPI stack <cit.>. Our study argues that the memory system benchmarking, simulation and application profiling can and should be based on a unified view of memory system performance.
We provide this view with the Memory stress (Mess) framework comprising the Mess benchmark, memory simulator and application profiling tool (Figure <ref>). The Mess benchmark (Section <ref>) describes the memory system performance with a family of bandwidth–latency curves. The benchmark covers the full range of the memory traffic intensity, from the unloaded to fully-saturated memory system. It also considers numerous compositions of read and write operations, plotted with different shades of blue in Figure <ref> (middle). The Mess benchmark is designed for holistic and detailed memory system characterization that is easily adaptable to different target platforms. The current benchmark release covers all major CPU and GPU ISAs, x86, ARM, Power, RISC-V, and NVIDIA's Parallel Thread Execution (PTX), and it is applied to a number of actual hardware platforms and simulators. We deploy the Mess benchmark to characterize Intel, AMD, IBM, Fujitsu and Amazon servers as well as NVIDIA GPUs with DDR4, DDR5, HBM2 and HBM2E memory (Section <ref>). We report and discuss a wide range of memory system behaviors, even for hardware platforms based on the same standard. These differences are especially pronounced in the high-bandwidth areas which have the greatest impact on memory-intensive applications. To the best of our knowledge, we are the first ones to detect and analyze the effect of memory system over-saturation, a scenario in which further increase of the memory request rate causes a drop in system performance. We also use the Mess benchmark to evaluate memory models of the widely-used hardware simulators: event-based ZSim <cit.>, cycle-accurate gem5 <cit.> and RTL simulator OpenPiton Metro-MPI <cit.> (Section <ref>). Unfortunately, all tested memory simulators, including well-established and trusted gem5 DDR models and cycle-accurate memory simulators <cit.>, poorly resemble the actual system performance. The simulators show an unrealistically low load-to-use latency (starting at 4 ns), high memory bandwidth (exceeding 1.8× the maximum theoretical one), and a simulation error of tens of percent for memory-intensive benchmarks, STREAM <cit.>, LMbench <cit.> and Google multichase. Although it was not the initial design target, we also detected a case in which holistic and detailed Mess benchmarking diagnosed a CPU design error. In particular, Mess benchmarking of the OpenPiton measured an unexpected memory traffic, which led us to a bug in the coherency protocol generated by the OpenPiton framework. Apart from the memory system characterization, the Mess bandwidth–latency curves can be also used for the memory performance simulation (Section <ref>). We develop and integrate the Mess memory simulator in the ZSim, gem5, and OpenPiton Metro-MPI simulators, enabling simulation of high-end memory systems based on DDR4, DDR5, Optane, HBM2, and HBM2E technologies, and Compute Express Link (CXL) <cit.>. The Mess integration is easy, based on the standard interfaces in the CPU simulators that enable external memory simulation <cit.>. The Mess simulator closely matches the actual memory system performance. The simulation error for memory-intensive STREAM, LMbench and Google multichase is between 0.4% and 6%, which is significantly better than all other memory models we tested. Mess is also fast. For example, Mess integrated with ZSim introduces only a 26% simulation time increment over the fixed-latency memory model, while the speed-up over Ramulator and DRAMsim3 ranges between 13× and 15×.
Another advantage of the Mess simulator is that it removes the current lag between the emergence of memory technologies and the development of reliable simulation models. For example, Mess is the first memory simulator that models CXL memory expanders, enabling further research on these novel memory devices. The simulation is based on the bandwidth–latency curves obtained from Micron Technology's SystemC hardware model (Section <ref>). Finally, the Mess memory bandwidth–latency curves enhance application profiling and performance analysis (Section <ref>). The Mess application profiling determines the positions of application execution-time segments on the corresponding memory bandwidth–latency curves. The application memory stress can be combined with the overall application timeline analysis and can be linked to the source code. The Mess application profiling is already integrated into a suite of production HPC performance analysis tools <cit.>. The inherent dependency between the used memory bandwidth and the access latency is by no means a new phenomenon, and the community has known about it at least for a couple of decades <cit.>. The Mess framework extends the previous work in three important aspects. First, previous studies typically use a single bandwidth–latency memory curve to illustrate a general memory system behavior <cit.>. The Mess benchmark is designed for holistic, precise and close-to-the-hardware memory system performance characterization. To reach this objective, the Mess benchmark is developed directly in assembly, and the experiments are tailored to minimize and mitigate the impact of the system software. The Mess benchmark performs a detailed characterization of the particular system under study. It detects and quantifies some aspects of the memory system behavior not discussed in previous studies, such as the impact of the read and write memory traffic on performance, or discrepancies between different memory systems, both in actual platforms and hardware simulators. Second, the Mess framework tightly integrates the memory performance characterization into the memory simulation. The Mess simulator avoids complex memory system simulation and simply adjusts the rate of memory instructions (provided by the CPU simulator) to the actual memory performance. The accuracy of this simulation approach relies on the input memory performance characterization. This characterization therefore has to be holistic, detailed and specific to the system under study, closely matching the Mess benchmark design. Third, the framework closely couples the memory-related profiling of hardware platforms and applications. Similar to the Mess simulator, the Mess application profiling itself is uncomplicated, and its real value comes from the application analysis in the context of the memory system characteristics. The quality of the memory characterization therefore directly impacts the quality of the overall analysis. For this reason, the analysis cannot be performed with generic bandwidth–latency approximations, but requires a detailed, Mess-like memory performance description. The Mess benchmark is released as open source and it is ready to be used in x86, Power, ARM, and RISC-V CPUs and NVIDIA GPUs. The release also contains all bandwidth–latency measurements shown in the paper, including the CXL expander curves provided by Micron Technology.
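To make the curve-based simulation idea concrete, the following sketch (our illustration with made-up measurement points, not the released integration code) interpolates the memory access latency from a family of bandwidth–latency curves, given the bandwidth demand and read ratio reported by a CPU simulator.

import numpy as np

# Made-up Mess-style measurements: for each read ratio, (bandwidth GB/s, latency ns) points.
curves = {
    1.0: ([0, 40, 80, 110, 125], [90, 95, 110, 180, 400]),   # 100%-read traffic
    0.5: ([0, 30, 60,  80,  90], [95, 105, 140, 260, 600]),  # 50%-read / 50%-write traffic
}

def memory_latency(bandwidth_gbs, read_ratio):
    """Interpolate latency along each curve, then between the two nearest read-ratio curves."""
    ratios = sorted(curves)
    lats = [np.interp(bandwidth_gbs, *curves[r]) for r in ratios]
    return float(np.interp(read_ratio, ratios, lats))

# Example: the CPU simulator reports 70 GB/s of demand with 80% reads.
print(memory_latency(70.0, 0.8), "ns")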
We also release as open source the ZSim, gem5 and OpenPiton Metro-MPI integrated with the Mess memory simulator supporting DDR4, DDR5, Optane, HBM2, HBM2E and CXL expanders. Also, the public releases of the production HPC performance analysis tools already include the Mess application profiling extension. The released tools are ready to be used by the community for a better understanding of current memory systems and the exploration of future ones. § MESS BENCHMARK This section describes the use of the Mess benchmark and analyzes its characterization of actual platforms and hardware simulators. §.§ Mess benchmark: Memory bandwidth–latency curves The Mess memory characterization comprises tens of bandwidth–latency curves, each corresponding to a specific ratio between the read and write memory traffic. The Mess benchmark kernels cover the whole range of memory operations, from 100%-load to 100%-store. The 100%-load kernel generates 100%-read memory traffic, while the 100%-store kernel creates 50%-read/50%-write traffic. This is because contemporary CPUs deploy a write-allocate cache policy <cit.>. With this policy, each store instruction first reads data from the main memory to the cache, then modifies it, and finally writes it to the main memory once the cache line is evicted. Each store instruction, therefore, does not correspond to a single memory write, but to one read and one write.[Memory traffic with more than 50% of writes can be generated with non-temporal (streaming) stores that directly store data in the main memory. These instructions, however, are not supported in all HPC platforms. Even when supported, they are seldom used. In SPEC CPU2006 <cit.>, SPEC CPU2017 <cit.>, and SPLASH-2 <cit.> benchmarks running on the Intel Skylake server with Intel Pin tool <cit.>, the non-temporal stores correspond to <1% of the overall memory instructions.] The Mess bandwidth–latency curves are illustrated in Figure <ref> (middle). The curves with different compositions of read and write traffic are plotted with different shades of blue. Each curve is constructed based on tens of measurement points that cover the whole range of memory-traffic intensity. Figure <ref> (top) illustrates the construction process for one of the curves. The memory access latency is measured with a pointer-chase benchmark <cit.> executed on one CPU core or one GPU Stream Multiprocessor (SM). This determines the y-axis position of the measurement. The x-axis corresponds to memory bandwidth, monitored with hardware counters <cit.>. Running the pointer-chase alone measures the unloaded memory access latency. To measure the latency of the loaded memory system, concurrently with the pointer-chase, on the remaining CPU cores or GPU SMs, we run a memory traffic generator. The traffic generator is designed to create a wide range of memory traffic with configurable memory bandwidth utilization and read/write ratio. Both the pointer-chase and the traffic generator are implemented in assembly to minimize any compiler intervention. The detailed implementations of the pointer-chase and the traffic generator are presented in Appendix <ref>. To minimize the latency penalties introduced by the TLB misses and the page walk, the Mess data structures are allocated in huge memory pages. Additionally, at runtime, these overheads are monitored with hardware counters and subtracted from the memory latency measurements. The Mess benchmark release includes the source code and its detailed description, as well as the instructions for the benchmark compilation and execution.
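For illustration only, the dependent-load access pattern behind the pointer-chase can be sketched as follows; the actual benchmark is written in assembly and allocated in huge pages, so this Python version only shows the random-cyclic-permutation idea (which defeats hardware prefetching) and is far too slow to measure real memory latencies.

import numpy as np
import time

rng = np.random.default_rng(0)
n = 1 << 20                                   # number of pointer-sized elements (assumption)

# Build a random cyclic permutation so every load depends on the previous one
# and the next address cannot be predicted by a prefetcher.
perm = rng.permutation(n)
chain = np.empty(n, dtype=np.int64)
chain[perm] = np.roll(perm, -1)

idx, steps = 0, 100_000
start = time.perf_counter()
for _ in range(steps):                        # dependent chain of loads
    idx = chain[idx]
end = time.perf_counter()
print((end - start) / steps * 1e9, "ns per access (interpreter-dominated, illustrative only)")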
§.§ Validation The Mess benchmark provides a holistic and detailed memory characterization exceeding all existing benchmarks and tools. Still, these tools can validate some of the Mess measurements. The unloaded memory system latency can be measured with LMbench and Google multichase in CPU platforms, and P-chase <cit.> in GPUs. We used these benchmarks to validate the Mess unloaded latency measurements in all hardware platforms under study (see Table <ref>). In all experiments, Mess closely matches the LMbench, Google multichase and P-chase results. In Intel systems, the maximum sustained memory bandwidth can be measured with the Intel Advisor <cit.>. In the Skylake, Cascade Lake and Sapphire Rapids servers under study, the Mess benchmark matches the Advisor measurements, with a difference below 1%. The Intel Memory Latency Checker (MLC) <cit.> can measure the memory latency for a selected memory traffic intensity, i.e. memory bandwidth. The memory bandwidth can be fine-tuned, but the tool provides a sparse analysis of different traffic compositions, i.e. ratios of read and write memory operations. We compare the Intel MLC results and the corresponding subset of the Mess measurements for all Intel platforms under study. The MLC and Mess results show the same trend, with slightly lower (<5%) latencies reported by the Mess benchmark. This is because Mess is designed for close-to-the-hardware memory characterization, so it excludes the latency penalties introduced by the OS overheads, the TLB misses and the page walk. §.§ Performance analysis Figure <ref> illustrates the Mess benchmark performance analysis with an example of the 24-core Intel Skylake server with six DDR4-2666 memory channels <cit.>. The figure confirms a general trend of the memory access latency which is initially roughly constant and then increases with higher memory pressure (i.e. bandwidth) due to resource contention among parallel accesses <cit.>. Detailed and close-to-the-hardware Mess characterization reveals some memory system aspects not discussed by previous studies. The most important one is the impact of the read and write memory traffic. The best performance, that is, the lowest latency and the highest achieved bandwidth, is obtained for 100%-read traffic. Memory writes reduce the memory performance and reach the saturation point sooner. This is due to the extra timing constraints such as t_WR and t_WTR, which come with memory write operations <cit.>. We detect this behavior for all the Intel, AMD, IBM, Fujitsu, and Amazon servers as well as NVIDIA GPUs used in the study with DDR4, DDR5, HBM2 and HBM2E memory (Section <ref>). However, we see a very different write traffic impact on CXL memory expanders <cit.>. We analyze this behavior of CXL memory expanders in Section <ref>. Apart from the memory bandwidth–latency curves, we use the Mess benchmark to derive memory system performance metrics, also depicted in Figure <ref>, for quantitative comparison of different memory systems. In addition to the commonly-used unloaded memory latency, the detailed Mess characterization quantifies the maximum latency range of memory access latencies across all read-to-write ratios, and the saturated bandwidth range. The memory system is saturated when any further increase in the memory system pressure, i.e. bandwidth, leads to a high increase in the memory access latency. We consider that the saturated bandwidth area starts at the point in which the memory access latency doubles the unloaded latency.
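The metrics above can be derived mechanically from any measured curve. The sketch below (illustrative code with synthetic numbers, not part of the Mess release) reports the unloaded latency, the maximum achieved bandwidth, and the bandwidth at which the saturated area starts, i.e. where the latency first doubles the unloaded latency.

import numpy as np

# Synthetic measurement points of one bandwidth (GB/s) - latency (ns) curve.
bandwidth = np.array([1, 20, 40, 60, 80, 100, 110, 115])
latency   = np.array([90, 92, 96, 105, 130, 180, 260, 420])

unloaded_latency = latency[0]                        # latency of the unloaded memory system
max_bandwidth = bandwidth.max()                      # maximum achieved bandwidth
saturated = bandwidth[latency >= 2 * unloaded_latency]
saturation_start = saturated.min() if saturated.size else None

print(f"unloaded latency: {unloaded_latency} ns")
print(f"max bandwidth: {max_bandwidth} GB/s")
print(f"saturated area starts at: {saturation_start} GB/s")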
In some of the platforms under study, we also detect a memory system over-saturation. Any increase in the memory system pressure after the over-saturation point causes a decline in the used memory bandwidth, while the access latency continues to increase. This behavior is observed in the bandwidth–latency curves as a “wave form” seen in Figure <ref>. The causes for this memory behavior are analyzed in Section <ref>. To the best of our knowledge, our study is the first one to detect and analyze this suboptimal memory system behavior. Although STREAM is the de facto standard for measuring sustained memory bandwidth <cit.>, Figure <ref> shows a significant gap between the STREAM bandwidth and the maximum measurements of the Mess benchmark. The sources of this gap are analyzed in the next section. §.§ Performance characterization: Actual systems We use the Mess measurements to compare the memory system performance of Intel, AMD, IBM, Fujitsu and Amazon servers as well as NVIDIA GPUs with DDR4, DDR5, HBM2 and HBM2E. The platform and memory system characteristics are listed in Table <ref> while Figure <ref> shows their bandwidth–latency curves. The unloaded memory latency varies significantly between different platforms. It ranges from 85 ns in the Cascade Lake server with DDR4, to 122 ns in the A64FX servers with HBM2 and 129 ns in the Graviton 3 with DDR5 main memory. This difference should not be directly associated with the memory technology and standard. For example, the AMD Zen2 comprises DDR4 technology with practically the same command latencies (in nanoseconds) as the Intel Cascade Lake server, and still it shows almost 30 ns higher unloaded memory latency. This is because the load-to-use latency considers the time memory requests spend within the CPU chip, including the cache hierarchy and network on chip. These timings can differ significantly between different CPU architectures independently of the target memory system. Actually, we detect the highest unloaded memory latencies in the chips with the largest number of cores (Zen2, A64FX, Graviton 3), indicating that at least a portion of this latency is likely to be attributed to the network on chip which is larger and more complex in these architectures. The NVIDIA H100 GPU shows a higher unloaded latency due to its massive number of arithmetic processing units, lower on-chip frequency, and complex memory hierarchy <cit.>. We also detect a wide maximum latency range between different platforms, and even within a single platform for different read and write memory traffic. Maximum memory latency is a primary concern in real-time systems, and it is less critical in high-performance computing (HPC). Still, it has to be bounded to guarantee the quality of service of some HPC applications. We believe that our study may open a discussion about the sources of different maximum latencies in different systems, e.g. due to the different lengths of the memory queues, and about their desirable limits. All platforms except the AMD Zen2 CPU and NVIDIA H100 show a similar saturated bandwidth range, between approximately 70% and 90% of the maximum theoretical bandwidth. The maximum achieved bandwidth cannot reach the theoretical one because part of it is “lost” due to factors such as DRAM refresh cycles blocking the entire chip, page misses causing precharge and activate cycles, and timing restrictions at the bank, rank, and channel levels <cit.>. The efficiency of the memory bandwidth utilization also depends on the memory controller design.
As expected (see Section <ref>), the best utilization is achieved for 100%-read memory traffic, and it reduces with the increment of the memory writes. AMD Zen2 is an exception in two ways. First, its saturated bandwidth range is significantly lower, 57–71% of the maximum theoretical one. Second, it does not follow the expected impact of the write traffic on the bandwidth utilization. Instead, the traffic with the maximum rate of memory writes shows a very good performance, very close to the 100%-read traffic, while the main drop is detected for a mixed, e.g. 60%-read/40%-write, traffic. -1 We detect the memory system over-saturation, in some AMD Zen 2, Intel Skylake and Intel Cascade Lake bandwidth–latency curves. In Amazon Graviton 3, Intel Sapphire Rapids and NVIDIA H100 the over-saturation is frequent for the memory traffic with high percent of writes.[These findings are consistent with two recent studies which report that running high-bandwidth benchmarks on all CPU cores may lead to lower memory bandwidths w.r.t. the experiment in which some cores are not used <cit.>.] We explored this behavior in the Cascade Lake servers in which we had access to additional memory system counters, e.g. monitoring the row-buffer statistics. In all the memory over-saturation experiments we detect a significant increase in row-buffer misses. In case of a row-buffer miss, the current content of the row is stored in the memory array and the correct row is loaded into the row-buffer. These additional operations increase the memory access time and reduce the effective device bandwidth, matching the over-saturated memory behavior. To confirm these findings, we increased the memory system pressure by removing four out of six DIMMs in each socket. In these experiments, we detected a large over-saturation areas in all bandwidth–latency curves. And indeed, the measurement confirmed that the over-saturation is highly correlated with the row-buffer miss rate. -1 Finally, we detect that in most of the platforms the bandwidths reported by the STREAM kernels are significantly below the Mess measurements.[The STREAM benchmark measures the performance of four kernels: Copy, Scale, Add and Triad. All the results presented in this study follow the Stream guidelines: they are obtained with the version of the benchmark running with 64-bit data types with no significantly modified code or assembly language <cit.>.]^,[In Graviton 3 and NVIDIA H100, the results reported by STREAM are very close to the maximum Mess measurements for the corresponding read/write ratio. This would correspond to a architecture with a write-through cache policy. We did not find any public document that details this aspect of the Graviton 3 and NVIDIA H100 cache organization.] In the Intel and A64FX platforms, the STREAM reports around half of the maximum theoretical bandwidth. In IBM Power 9, the STREAM reports only one third of it. There are two main sources of this gap. The first one is that the STREAM does not measure, but estimates the memory bandwidth. This estimate is based on the application execution time, size of the data structures used in the computation, and number of load and store instruction in each STREAM kernel. In particular, STREAM calculation assumes one memory read for each load instruction and one memory write for each store. This is not the case in most of the state-of-the-art HPC servers because they deploy a write-allocate cache policy. 
Each store instruction, therefore, does not correspond to a single memory write, as assumed in the STREAM bandwidth calculation, but to one read and one write (see Section <ref>). The second cause of the gap between the Mess and STREAM bandwidth is the composition of the memory traffic. The Mess benchmark achieves the maximum bandwidth for a 100%-read traffic. The write memory traffic, present in all STREAM kernels, adds timing constrains and reaches sooner the bandwidth saturation point. §.§ Performance characterization: Memory simulators Mess benchmark can also be used to characterize memory simulators and compare them with the actual systems they intent to model. We illustrate this Mess capability with the gem5, ZSim, and OpenPiton Metro-MPI simulators with different internal memory models and widely-used external memory simulators, DRAMsim3 and Ramulator. §.§.§ gem5 The gem5 <cit.> is a cycle-accurate full-system simulator. In our experiments, the simulator is configured to model the Graviton 3 server with 64 Neoverse N1 cores <cit.>. The cache hierarchy includes 64 KB of 4-way L1 instruction and data cache, 1 MB of 8-way private L2 cache and 64 MB of 16-way shared L3. The main memory system has eight eight DDR5-4800 memory channels. Figure <ref> compares Mess bandwidth–latency curves of the actual server with gem5 internal memory models: simple memory model and internal DDR model. To maintain a reasonable simulation time, we model each system with a family of six curves, from 50% to 100% read memory traffic with a 10% step. -1 Practically in the whole bandwidth range, the gem5 simple memory model delivers a fixed latency of 4–49 ns. The latency increases only when the bandwidth asymptotically approaches its theoretical maximum. Contrary to the Graviton 3 server, the highest latencies are measured for a 100%-read traffic, and the latency drops with the percent of memory writes. Also, unlike in the actual platform, for some memory traffic increasing the bandwidth reduces the memory access latency. For example, the 50%-read/50%-write traffic reaches the lowest simulated latency of only 4 ns at the 200 GB/s bandwidth. The same traffic in the actual system has the memory access latency of 261 ns. -1 The more detailed internal DDR model shows small improvements over the simple memory model, but still poorly resembles the actual system behavior. The simulated latencies are unrealistically low, most of them in the range of 14–100 ns. Similarly to the gem5 simple memory model, the latencies drops with the percent of memory writes. For all the curves exept 100%-read, the saturated bandwidth is significantly lower from the one measured on the actual system. Again, the error increases with the percent of memory writes. §.§.§ ZSim -1 We select ZSim <cit.> as a representative of event-based hardware simulators. We use publicly-available ZSim modeling 24-core Intel Skylake processor connected to six DDR4-2666 channels <cit.>. The cache hierarchy of the modeled CPU includes 64 KB of 8-way L1 instruction and data cache, 1 MB of 16-way private L2 cache and 33 MB of 11-way shared L3. The simulator is extensively evaluated against the actual hardware platform <cit.>. The ZSim comprises three internal memory models: fixed-latency, M/D/1 queue model and the internal DDR model. Also it is already connected to Ramulator <cit.> and DRAMsim3 <cit.>. 
To avoid any error due to the integration of an event-based CPU simulator with cycle-accurate memory models <cit.>, we use the ZSim interfaces included in the Ramulator and DRAMsim3 releases. We compare the Mess bandwidth–latency curves of the actual server with all five ZSim approaches for the main memory simulation (see Figure <ref>). We configure the fixed-latency model with the unloaded memory latency measured in the actual system. The M/D/1 queue model also requires user to specify the maximum theoretical bandwidth, which is 128 GB/s in the platform under study. The internal DDR model, Ramulator and DRAMsim3 are based on the detailed simulation of the DDR4-2666. As expected, the fixed-latency memory model provides a constant latency in the whole bandwidth domain. Given that this latency is configured by a user, it can be set to match the unloaded memory latency in the actual system. On the down side, the memory bandwidth provided by this model is unrealistic: the maximum simulated bandwidth is 342 GB/s, which exceeds the maximum theoretical one by 2.7×. The M/D/1 queues correctly model the memory system behavior in the linear part of the curves. The modeling of the system saturation is less accurate. The queue model does show some difference between read and write memory traffic, but the reported performance does not correspond to the actual system trend in which increasing the write traffic lowers the performance. The internal DDR model shows the closest performance match to the actual system — it models the linear and saturated segments of the curves, and the impact of the memory writes. Still, there are three main difference w.r.t. to the actual system. First, the simulator underestimates the saturated bandwidth area to 69–93 GB/s, significantly below the 92–116 GB/s measured in the actual system. Second, the simulator excessively penalizes the memory writes which is seen as a wider spread of the curves with a higher write memory traffic. Third, we detect some unrealistic memory-latency peaks in the low-bandwidth 1–4 GB/s curve segments. The DRAMsim3 shows a similar trend as the M/D/1 queue model in the linear segments of the memory curves, with some latency error, 52–63 ns in the DRAMsim3 versus 89–109 ns in the actual system. The simulator does not model the saturated bandwidth area. Finally, the Ramulator provides a fixed 25 ns latency in the whole bandwidth area and for all memory traffic configurations. Also, similar to the fixed-latency model, the simulated bandwidth is unrealistic, exceeding by 1.8× the maximum theoretical one. Our evaluation of the memory models and detailed hardware simulators detected major differences w.r.t. the actual memory systems performance. DRAMsim3 and Ramulator, which rely on detailed memory models, are considered de facto standard for the memory system simulation. Also, both of them are evaluated against the manufacturer's Verilog implementations and they show no violation of the JEDEC timings <cit.>. However, as our results demonstrate, this does not guarantee that the simulators properly model the full system performance. §.§.§ RTL simulators: OpenPiton and Metro-MPI OpenPiton framework <cit.> provides an open-source RTL implementation of a tiled architecture based on Ariane RISC-V cores <cit.>. Developed in the Verilog RTL, the OpenPiton simulation is slow, especially for large number of cores. We use the OpenPiton simulation accelerated by Metro-MPI <cit.>. 
This approach uses Verilator <cit.> to convert the RTL code of each tile into a cycle-accurate simulation model. Then, all the tiles are simulated in parallel and their interconnect communication is done with the MPI programming interface. In our experiments, the OpenPiton framework is configured to generate 64-core Ariane architecture which includes 16 KB of 4-way L1 instruction and data cache, and 4 MB of 4-way shared L2 cache. The main memory is originally modeled with a single-cycle latency, and it is recently extended with a fixed-latency model <cit.>. We set this latency to 170 ns. Our Mess measurements, see Figure <ref>, confirm that both models deliver the expected load-to-use latency. Also, as expected, we see no difference between read and write memory traffic, leading to a perfect overlap of the curves. The only difference is in the maximum observed memory bandwidth. For a single-cycle memory latency, 100%-read memory traffic achieves 32 GB/s, limited by the memory concurrency of the 64 in-order Ariane cores. Memory writes do not stall the cores, so the achieved memory bandwidth increases with the write memory traffic ratio. Still, a small 2-entry miss status holding registers (MSHR) limits the memory bandwidth to 47 GB/s for 50%-read/50%-write traffic. We detect the same trend for the fixed memory model. -1 The Mess evaluation of the OpenPiton Metro-MPI resulted in an unexpected discovery: in some experiments we detected significantly higher memory write traffic than anticipated. By analysis of the system behavior for various Mess configurations, we connected the extra memory traffic to the unnecessary eviction of the data from the last-level cache. Instead of evicting only the dirty cache lines, the system was evicting all of them. The source of the error is the coherency protocol generated by the OpenPiton framework. The error is reported to the OpenPiton developers and they confirmed its existence. Although not part of the original plan, our exploration discovered that holistic and detailed Mess performance characterization can be also used to uncover CPU design errors. § MESS SIMULATOR In this section we will present the Mess memory simulator and show how it significantly improves the memory simulation accuracy. §.§ Design -1 The CPU and memory simulators are typically connected in the following way: the CPU simulator issues the memory operations and the memory simulator determines their latencies. The Mess simulator does this based on the application's position in the memory bandwidth–latency curves. This process is complex due to the inherent dependency between the memory system latency, timings of the memory operations and all dependent instructions, and the generated memory bandwidth. We simplify the problem by designing the Mess simulator not to compute the exact memory latency for a given memory traffic, but to detect and correct discrepancies between the memory access latency and the simulated bandwidth. This approach, together with the fundamental principle of application's position in the memory bandwidth–latency curves, enables the Mess simulator to surpass the accuracy of all other memory simulators, while remaining simple and fast. The Mess simulator acts as a feedback controller <cit.> from the classical control theory <cit.>, illustrated in Figure <ref>. The CPU simulation can start from any memory access latency, e.g. the unloaded one. 
The feedback control loop monitors the simulated bandwidth and controls whether it correspond to the memory latency used in the CPU simulation. If this is not the case, the memory latency is being adjusted with an iterative process we describe later. The control process is performed at the end of each simulation window, which, in our experiments, comprises 1000 memory operations. This is much smaller than the length of the application phases <cit.>, so the error due to the transition between different application phases has a negligible impact on the simulator's accuracy. Figure <ref> describes one iteration of the Mess simulator control loop. We start with the Mess estimate of the application's bandwidth–latency position in the i^th simulation window, (messBW_i, Latency_i) 1. From that point on, all the issued memory requests are simulated with Latency_i. At the end of the simulation window, based on the outcomes of the CPU simulation, the Mess simulator monitors the simulated memory bandwidth, cpuBW_i 2, and compares it with the messBW_i estimated at the beginning of the window 3. If the simulation is in a steady-state and the application did not change its behavior, there will be no major difference between cpuBW_i and messBW_i. This confirms the consistency in the simulated memory access latency, the CPU timings and the achieved memory bandwidth. Therefore, the CPU simulation in the next window will continue with the same memory latency: Latency_i+1 = Latency_i. Otherwise, a difference between cpuBW_i and messBW_i suggests inconsistent simulated memory latency and bandwidth. This can happen, for example, if the application changes its behavior. Figure <ref> illustrates the case in which the application increases the frequency of memory request leading to the higher bandwidth: cpuBW_i > messBW_i 3. In this case, the simulated memory bandwidth cpuBW_i does not correspond to the memory Latency_i used in the CPU simulation. To address this inconsistency, the Mess simulator adjusts the predicted application position in the bandwidth–latency curves. The objective of this adjustment is not to reach the correct (BW, Latency) position in a single iteration. The next Mess estimate, (messBW_i+1, Latency_i+1), will be positioned in-between messBW_i and cpuBW_i 4. The exact position is determined based on the user-defined convergence factor: messBW_i+1 = messBW_i + convFactor × (cpuBW_i - messBW_i). The approach is based on the proportional–integral controller mechanism from the control theory <cit.>. Finally, the Mess uses messBW_i+1 to read the Latency_i+1 from the corresponding bandwidth–latency memory curve. The next simulation window starts with the Mess simulator providing memory access Latency_i+1 to the CPU simulator. §.§ Evaluation We evaluate the Mess simulator integrated with ZSim and gem5 against the actual hardware. We compare the simulated and actual bandwidth–latency curves as well as the performance of memory-bound benchmarks: STREAM <cit.>, LMbench <cit.>, and Google multichase <cit.>. §.§.§ ZSim Figure <ref> shows the DDR4, DDR5 and HBM2 bandwidth–latency curves measured with the ZSim connected to the Mess simulator.[Mess simulator also supports the Intel Optane technology. Optane's bandwidth–latency curves are measured on a 16-core Cascade Lake server with 6×DDR4-2666 16 GB and 2×Intel Optane 128 GB memory in App Direct mode. Since Intel Optane technology is discontinued since 2023, we do not analyze its performance characteristics and simulation.] 
The configurations of the simulators match the actual Intel Skylake with 24-core and six DDR4-2666 memory channels. The simulated Mess curves, depicted in Figure <ref>, closely resemble the actual memory systems behavior (Figure <ref>). The simulation error of the unloaded memory latency is below 1%, and it is around 3% for the maximum latencies. The difference between the simulated and the actual saturated bandwidth range is only 2%. Figures <ref> and <ref> show the ZSim+Mess simulation results for the high-end DDR5 and HBM memories. To saturate the 8-channel DDR5-4800 and 32-channel HBM2, we increase the number of simulated cores to 58 and 192, respectively. Again, the simulated bandwidth–latency curves closely resemble the performance of the corresponding actual memory systems (Figures <ref> and <ref>). -1 Figure <ref> shows the evaluation results, w.r.t. to the actual Intel Skylake server, of all six ZSim memory models when running memory intensive STREAM <cit.>, LMbench <cit.> and Google multichase <cit.>. The simulation errors are closely correlated with the similarity between the simulated and actual bandwidth–latency curves. The Mess shows the best accuracy with only 1.3% average error, followed by the M/D/1 and internal DDR model. The fixed-latency simulation and Ramulator show the highest errors of more than 80%. The Mess simulator is also fast. It increases the simulation time by only 26% higher w.r.t. a simple fixed-latency memory, and it is 2% and 15% faster than the M/D/1 and internal DDR model. The ZSim+Mess simulation speed-up over the ZSim+Ramulator and ZSim+DRAMsim3 is 13× and 15×. §.§.§ gem5 -1 Figure <ref> shows the DDR5 and HBM2 bandwidth–latency curves simulated with the gem5 connected to the Mess memory simulator. In all experiments, the gem5 is configured to model Graviton 3 cores <cit.> described in Section <ref>. To reduce simulation time, we simulate 16 CPU cores connected to a single memory channel.[Simulation of a whole server comprising 64 Graviton 3 cores and 8×DDR5-4800 requires more than five hours to obtain a single bandwidth–latency datapoint. The full simulation of the bandwidth–latency curves would require more than a year.] The simulated bandwidth–latency curves, when scaled to eight DDR5 channels or 32 HBM2 channels, closely resemble actual system behavior (Figures <ref> and <ref>). -1 We also evaluate the Mess memory simulation against the gem5's built-in simple memory model and internal DDR5 model when running STREAM, LMbench, and Google Multichase benchmarks. In these experiments we simulate a whole server comprising 64 Graviton 3 cores and 8×DDR5-4800, and compare the results against the benchmark executions on the actual server. The results are presented in Figure <ref>. The Mess simulator decreases the average error from 30% (gem5 simple memory model) and 15% (internal DDR5 model) to only 3%. Such a low error is unprecedented in any prior validation attempts <cit.>. Moreover, the gem5 with Mess simulator reduces the simulation time by 1.6% w.r.t. to much simpler built-in memory models. §.§ Simulation of novel memory systems: CXL memory expanders -1 The memory system complexity and the scarcity of publicly-available information often result in a considerable gap between a technology release and the support for its detailed hardware simulation. For example, public memory simulators started to support the DDR5 in 2023 <cit.>, three years after production servers with DDR5 DIMMs hit the market. 
-1 The Mess simulator provides a fundamental solution for this gap because it can simulate emerging memory systems as soon as their bandwidth–latency curves are available. For memory technologies available on the market the curves can be measured on a real platform. For emerging memory devices that are not yet available in off-the-shelf servers, the bandwidth–latency curve can be measured on a developer board with a prototype of the new device, or alternatively it can be provided by the manufacturers, e.g. based on their detailed proprietary RTL models. We will demonstrate the Mess simulation of novel memory systems with an example of the Compute Express Link (CXL) memory expanders. CXL is an emerging interconnect standard for processors, accelerators and memory devices. The CXL memory expanders enable a straightforward enlargement of the memory system capacity and bandwidth, as well as the exploration of unconventional disaggregated memory systems <cit.>. One of the main limitations for an academic research in this field, however, is the lack of reliable performance models for these devices. The Mess simulation is performed with the CXL memory expanders bandwidth–latency curves provided by Micron Technology based on their detailed hardware model. In particular, we model a CXL memory expander connected to the host via the CXL 2.0 PCIe 5.0 interface with 1×8 Lanes. The device comprises one memory controller connected to a DDR5-5600 DIMM with two ranks. All the CXL modules, Front end, Central controller and Memory controller, are implemented in SystemC. The modules communicate by using the manufacturer's proprietary Transaction Level Modeling (TLM) framework, which is based on SystemC TLM <cit.>. -1 The obtained bandwidth–latency curves are shown in Figure <ref>. The figure plots the round-trip latency from the CXL host input pins. To consider a full load-to-use latency, a user should add the round-trip time between the CPU core and the CXL host. We measure this latency component with the Intel MLC <cit.>. The CXL memory expanders show a similar performance trends as the DDRx or HBM memory systems: latency that increases with the system load, significant non-linear increase after a saturation point, and the impact of the traffic read/write ratio. One major difference is that the CXL interface provides the best performance for a balanced reads and writes traffic, while its performance drops significantly for the 100%-read or 100%-write traffic. This is because, unlike the DDRx interfaces, CXL is a full-duplex interconnect with independent read and write links. Therefore, the CXL can transmit simultaneously in both directions, but in the case of the unbalanced traffic one CXL transmission direction could be saturated while other direction is negligibly used. We use the obtained CXL memory bandwidth–latency curves in the Mess simulator integrated with ZSim, gem5 and OpenPiton Metro-MPI simulators (see Figure <ref>). To reduce long OpenPiton Metro-MPI simulation time, we model only 25 curves with a small number of experimental points in each curve. For this reason some segments of the curves are discrete. Nevertheless, the OpenPiton Metro-MPI simulations match the general trend and the saturated bandwidth range of the manufacturer's curves. The maximum latency range is below the manufacturers CXL curves because the simulated small in-order Ariane cores with only 2-entry MSHRs cannot saturate the target memory system. This behavior is already detected and discussed in Section <ref>. 
ZSim and gem5 results practically match the manufacturer's CXL curves. In Appendix <ref>, we compare our CXL simulation platform against prior approach to emulate memory-over-CXL. § MESS APPLICATION PROFILING The Mess framework also enhances the memory-related application profiling. We demonstrate this functionality with the Mess extension of Extrae and Paraver, production HPC performance tools for detailed application tracing and analysis <cit.>. The Mess application profiling adds a new layer of information related to the application's memory performance metrics. This information can be correlated with other application runtime activities and the source code, leading to a better overall understanding of the application's characteristics and behavior. §.§ Background: Extrae and Paraver -1 Paraver is a flexible data browser for application performance analysis <cit.>. It can display and analyze application MPI calls, duration of the computing phases, values of the hardware counters, etc. Paraver can also summarized application behavior in histograms and link it with the corresponding source code. The input data format for Paraver is a timestamped trace of events, states and communications <cit.>. For parallel applications, the traces are usually generated with the Extrae tool <cit.>. Extrae automatically collects entry and exit call points to the programming model runtime, source code references, hardware counters metrics, dynamic memory allocation, I/O system calls, and user functions. It is is compatible with programs written in C, Fortran, Java, Python, and combinations of different languages, and it supports a wide range of parallel programming models. Extrae is available for most UNIX-based operating systems and it is deployed in all relevant HPC architectures, including CPU-based systems and accelerators. §.§ Use cases We illustrate the capabilities of the Mess application profiling with an example of the memory-intensive HPCG benchmark <cit.> running in a dual-socket Cascade Lake server (Table <ref>). We fully utilize the one CPU socket by executing 16 benchmark copies, one on each core. Extrae monitors the application memory behavior with a dedicated profiling process which traces the memory bandwidth counters. The sampling frequency is configurable, and it is 10 ms by default. Even with this fine-grain profiling, the introduced overhead is negligible, below 1%. The extended Paraver tool correlates the application memory bandwidth measurements with Mess memory curves. The application measurements are plotted on the curves as a set of points, each of them corresponding to 10 ms of the application runtime. The application memory use can be also incorporated into the Paraver trace file, so a user can analyze its evolution over time, and correlate it with other application's behavior and the source code. §.§.§ Bandwidth–latency curves Figure <ref> depicts the Mess memory-related profile of the HPCG benchmark. Most of the HPCG execution is located in the saturated bandwidth area, above 75 GB/s. Sporadically, the benchmark even reaches the maximum sustained bandwidth with peak memory latencies in the range of 260–290 ns. Also, each HPCG point on the curves is associated with a memory stress score. The score value ranges from 0, for the unloaded memory system, to 1, corresponding to the right-most area of the bandwidth–latency curves. Memory stress score in a given point is calculated as a weighted sum of two parameters, the memory latency and the curve inclination. 
The latency itself is a good proxy of the system stress, while the inclination shows the memory system sensitivity to a bandwidth change. Gentle inclination indicates that a memory bandwidth change would have a minor impact on the memory access latency and the overall performance. In the steep curve segments, e.g. 95–100 GB/s area in Figure <ref>, small bandwidth changes can rapidly saturate the memory system leading to a major latency increase. The Mess extension of Paraver already includes the stress score visualization with a green–yellow–red gradient that can be easily interpreted by application developers. §.§.§ Timeline analysis Once the memory stress score is incorporated into the application's Paraver trace, it can be combined with other aspects of the application analysis, as illustrated in Figure <ref>. The figure analyzes around two seconds of the HPCG runtime, from 241,748,818 μs to 243,728,242 μs (x-axis). Guided by the sequence of MPI calls illustrated in Figure <ref> (top), we identify the application's main iterative loop and, using MPI_Allreduce (pink) as delimiter, we select two iterations to focus the analysis on the compute phases (Figure <ref>, middle). The color gradient corresponds to the length of the compute phase: green to blue gradient for short to long phases. Figure <ref> (bottom) shows the memory stress score for this region. The longest compute phases (blue) exhibits two distinct memory pressure behaviors: at the start of the phase, the memory stress score rises to 0.71, and then halfway through the phase it decreases to 0.64. The fine-grain application profiling can detect different memory stress score values even within a single compute phase. §.§.§ Links to the source code Extrae also collects callstack information of the MPI calls, referred to as the MPI call-points,[A callpoint refers to the file and line number where the program initiates an MPI call at a given level of the callstack, typically the last one. This point serves as a boundary for a region that begins with the current MPI call and extends until the next one. In the tables, we indicate the starting callpoint.] which are used to link the application runtime behavior with the source code. With the Mess application-profiling extension of Paraver, the application source code can be linked to its memory-related behavior. This is fundamental for making data placement decisions in heterogeneous memory systems, e.g. comprising DDRx DIMMs and HBM devices <cit.>. § RELATED WORK Mess framework provides a unified view of the memory system performance that covers the memory benchmarking, simulation and application profiling. Although these three memory performance aspects are inherently interrelated, they are currently analyzed with distinct and decoupled tools. §.§ Memory system benchmarks Memory access latency and utilized bandwidth are commonly treated as independent concepts measured by separate memory benchmarks. LMbench <cit.> and Google Multichase <cit.> measure the load-to-use latency in an unloaded memory system, while STREAM <cit.>, STREAM2 <cit.>, Hopscotch <cit.>, CAMP <cit.>, and HPCG <cit.> measure the maximum obtainable memory bandwidth or performance of the application kernels that are proportional to it. Only recently the community started to make the first steps in measuring latencies in loaded memory systems. 
The Intel Memory Latency Checker (MLC) tool <cit.> is used to show how memory access latency increases for higher used memory bandwidths <cit.> and to compare systems based on fundamentally different memory technologies, such as the DRAM and Optane <cit.>. X-Mem benchmark <cit.> reports loaded access latencies for cache subsystem, main memory, and NUMA memory nodes. The impact of the read and write traffic to the memory system performance is not measured nor analyzed. Overall, current memory systems benchmarks provide a small number of data points in a very large and complex memory-performance space. The Mess benchmark is designed for holistic close-to-the-hardware memory system performance characterization that is easily adaptive to different target platforms. It significantly increases the coverage and the level of detail of the previous tools, leading to new findings in memory behavior of the hardware platforms and simulators under study. §.§ Memory system simulation The Mess framework tightly integrates the memory performance characterization into the memory simulation. We compare the Mess simulator with the state-of-the-art memory models included in the CPU simulators <cit.> as well as the external cycle-accurate memory simulators <cit.>. The Mess memory simulator is fast, accurate and easily-integrated with the CPU simulators. It can support novel memory systems as soon as their detailed bandwidth–latency curves are measured on actual production systems or prototypes, or provided by the manufacturers. Therefore, Mess can simulate high-end and future memory systems much sooner than the standard memory simulators which consider detailed memory device sequences and timings <cit.>. Mess is the first memory simulator to support CXL memory expanders. Apart from the hardware simulation, system performance can be analyzed with analytical models. The PROFET model <cit.> and Interval model <cit.> predict memory system impact on the overall application performance. The predictions are based on the application runtime profiling combined with an analytical performance model. The main objective is to provide an alternative to complex and slow CPU simulations. This is orthogonal and complementary to the Mess simulator that targets the main memory simulation. §.§ Application profiling ProfileMe <cit.> and PerfMemPlus <cit.> determine whether the application is memory bound based on its memory access latency, measured with the Intel's Event Based Sampling (PEBS) <cit.>. The Roofline model <cit.> analyzes compute performance and memory bandwidth. The application is classified as compute or memory bound based on the comparison with the performance roofs of the target hardware platform. The Top-down model <cit.> analyzes the application CPI stack. The application is categorized as memory bound if its significant CPI component is caused by the main memory accesses. The model also distinguishes between the memory latency and bandwidth stalls depending on the occupancy of the memory controller queues. Below 70% occupancy, the stalls are considered latency-bound; above this threshold they are bandwidth-bound. As a part of future work, we will synthesize the Mess application profiling with the Roofline and Top-down models. The Mess and Roofline integration will connect the application memory bandwidth–latency analysis with its compute performance. The Top-down model will provide the CPI stack. 
Also, the Top-down distinction between the memory latency and bandwidth stalls will be enhanced with the Mess detailed bandwidth–latency analysis. § CONCLUSIONS The Memory stress (Mess) framework offers a comprehensive and unified approach to memory system benchmarking, simulation, and application profiling. It is publicly-released and ready to be used by the community for better understanding of the current and exploration of future memory systems. The Mess benchmark provides a detailed and holistic and memory systems performance view represented by a family of bandwidth–latency curves. The current Mess benchmark release covers x86, ARM, Power, RISC-V, and NVIDIA's PTX ISAs, and we used it to characterize servers from Intel, AMD, IBM, Fujitsu, Amazon, and NVIDIA with DDR4, DDR5, HBM2, and HBM2E main memory. The Mess characterization significantly expands coverage of the existing tools, uncovering new insights into the memory systems performance and behavior. We also used Mess to benchmark established publicly-available memory models and simulators. Our detailed evaluation shows that the existing memory models poorly resemble the actual system performance, leading to high error rates when simulating memory intensive workloads. The Mess memory simulator couples memory simulation to memory performance benchmarking and predicts the memory access latencies based on the application position on the bandwidth–latency curves. A memory system can be, therefore, simulated as soon as its bandwidth–latency curves are available, which removes the current lag between the emergence of memory technologies and development of the reliable simulation models. The Mess simulator closely matches the actual memory systems performance and it shows an unprecedented low error of 0.4–6% when simulating widely-used memory benchmarks. It is also fast and reduces the simulation time of cycle-accurate memory simulators by 13–15×. The Mess simulator is easily integrated with ZSim, gem5, and OpenPiton Metro-MPI CPU simulators and it supports a wide range of memory systems based on DDR4, DDR5, Optane, HBM2, HBM2E and CXL memory expanders. -1 The Mess application profiling determines application position on the corresponding memory bandwidth–latency curves. This information, correlated with other runtime activities and source code, enhances overall understanding of application behavior. The integration of Mess application profiling into a suite of production HPC performance analysis tools further enhances its utility and accessibility. The benchmark increases the coverage and the level of detail of all previous tools and provides a holistic view of the memory system performance. We perform the Mess characterization of Intel, AMD, IBM, Fujitsu and Amazon servers with DDR4, DDR5 and HBM2. We report and discuss a wide range of memory system behavior even for the hardware platforms based on the same standard. The Mess benchmark is released as open source and it is ready to be used in x86, Power, ARM and RISC-V architectures. The release package also contains all bandwidth–latency measurements shown in the paper including the CXL expander curves provided by the memory manufacturer. We also use the Mess benchmark to evaluate memory models of ZSim, gem5 and OpenPiton Metro-MPI hardware simulators versus the memory system performance of the actual hardware platforms. We detect serious errors and limitations even in the widely-used detailed memory models that are considered de facto standard for the memory system simulation. 
We also detected cases in which holistic and detailed Mess performance evaluation can diagnose CPU design errors. In particular, we detected a bug in the coherency protocol generated by the OpenPiton framework. The error is reported and acknowledged by the OpenPiton developers. Apart from the memory system characterization, the the Mess bandwidth–latency curves can be used for the memory performance simulation. We integrate and release as open source the Mess memory model in the ZSim, gem5 and OpenPiton Metro-MPI simulators, enabling simulation of memory systems based on DDR4, DDR5, Optane and HBM2 technologies, and the CXL interconnect. The Mess memory model is easy to integrate, it is fast and it closely resembles the actual memory system performance. The Mess model can support novel memory systems as soon as their detailed bandwidth–latency curves are measured on actual production systems or prototypes, or provided by the manufacturers. Therefore, Mess can model high-end and future memory systems much sooner than the standard memory simulators which estimate performance based on detailed analysis of memory protocols and timings. To the best of our knowledge, Mess is the first memory model that enables a detailed simulation of CXL memory expanders. Finally, the Mess memory bandwidth–latency curves can be used to enhance application profiling. We demonstrate this functionality with the Mess integration with Extrae and Paraver, production HPC performance tools for detailed application tracing and performance analysis. The public releases of these tools already include the Mess extension. The Mess extension adds a new layer of information related to the application's memory performance metrics. This information can be correlated with other application runtime activities and the source code, leading to a better overall understanding of the application's behavior. unsrt §.§ Mess benchmark implementation In this appendix, we discuss Mess benchmark implementation. It includes implementation of pointer-chase and traffic generator. The scripts and manuals for the experimental setup, execution and measurements is included in the public Mess repository. The measurements post-processing removes the outliers, mitigates the noise and plots the results. §.§.§ Pointer-chase The pointer-chase contains a sequence of dependent back-to-back load instructions that access the main memory. Since the instructions are dependent, their execution is serialized, so the average memory access latency is calculated as the ratio between the total execution time and the number of instructions executed. This is guaranteed by the pointer-chase design. In the benchmark initialization, we allocate a contiguous section of memory and initialize it in such a way that a given array element contains the address of the next array element (memory location) that we want to access. This ensures the dependency between the consecutive load instructions. To force accesses to the main memory (and not on-chip caches), the pointer-chase traverses the whole array whose size exceeds the last-level cache. To avoid any cache-line spatial locality, each array element occupies the whole cache line (64 Byte). Finally, to diminish the impact of data prefetching and temporal locality during traversing, the pointer-chase traverses the array in a random pattern. The pointer-chase source code is detailed in Listing <ref>: * Lines 0001–0004 initialize the benchmark: set the number of load instructions, initialize the registers, etc. 
* Lines 0006–1007 are the core of the benchmark. They are a series of back-to-back load instructions for traversing the array. Each load instruction has data dependency on the previous one, so their execution is serialized. * Line 1008 decrements the loop counter. * Line 1009 checks the loop exit condition. §.§.§ Traffic generator We write the core of the memory traffic generator directly in assembly. This enables the development of more precise code (changes at the level of the assembly instruction) and prevents any system software optimizations (e.g. compiler optimizations). The benchmark generates different read and write memory traffic by executing a different mix of load and store instructions in the source code, each of them in a separate assembly file. In the current implementation, the benchmark can execute from 100% load to 100% store operations, with a step of 2%.[The ratio of loads and stores executed by the benchmark does not lead to the same ratio of the reads and writes in the memory traffic. This is because most of the state-of-the-art HPC servers deploy a write-allocate cache policy. With this policy each store operation targeting data in main memory is first read into the cache (memory read), then modified, and finally written to the main memory once the cache line is evicted. Since each store operation causes one memory read and one memory write, e.g. 100% store kernel causes 50%-read 50%-write memory traffic.] The generated memory bandwidth is adjusted the issue rate of the memory operations. This is done by interleaving the load/store operation sequence with a configurable dummy loop of operations. To reduce the overhead of the loop management, loop counter decrement and jump operations, we employ a long loop of around 100 load/store operations. Listings <ref>, and <ref> show the memory traffic generator implementation in the x86 ISA. The benchmark is also available for ARM, IBM Power, and RISC-V ISAs. * Lines 001–008 initialize the benchmark: * Lines 001–002 loads the addresses of the arrays and into the registers. * Lines 003–004 initialize a register as the loop counter. * Lines 005–006 set the size of the array and . * Lines 007–008 set the iteration of the dummy function. This controls the rate of the memory operations, i.e. used memory bandwidth. * The main loop of the kernel starts at line 009. The loop comprises a series of load and store instructions (Lines 010–015) followed by the call to the dummy function (Lines 016–017). * The function is shown in Listing <ref>. The number of operations is configurable: the larger the number of operations in each function call, the lower the rate of memory instruction, and therefore the lower used memory bandwidth. This configuration is managed by the value of parameter. When parameter is set to , in Listing <ref>, the function is finalized without entering the loop. This causes only a negligible halt in the load and store sequence, leading to the maximum pressure to the memory system. As we increase the value of the parameter, we increase the number of the iterations in the dummy loop, and therefore reduce the rate of the memory operations issued by the memory traffic generator. * The sequence of interleaved load and store instructions, and calls to the function is repeated until the line 143. This is the core of the benchmark. In the current implementation, each iteration of the main loop comprises 100 load and store instructions. 
* Lines 144–146 finalize the loop iteration: increment the loop counter, check the exit condition, and conditionally jump to the beginning of the loop. §.§ Case study: Remote-socket memory emulation of CXL memory expanders Recent industrial studies emulate CXL memory expansion in cloud and datacenter servers with conventional dual-socket systems: one socket is used as a host CPU and the other socket as a CPU-less memory expander <cit.>. To evaluate this approach, we perform a detailed simulation of both systems. The CPU host is simulated with the ZSim configured to model the Intel 24-core Skylake processor (Section <ref>). We use Micron Technology's bandwidth–latency curves to model the CXL memory expander, and measurements from the actual dual-socket server to model the remote-socket memory. In each system configuration, we simulate 500 billion instructions of the multiprogramed SPEC CPU2006 workloads <cit.>. Figure <ref> plots the CXL (green) and remote-socket (blue) memory curves, and the benchmark behavior. Each 20.000 simulated instructions we monitor the benchmark read and write memory bandwidth, and then plot each observation as a point on the corresponding bandwidth–latency curve. The figure shows two characteristic use-cases. The perlbench benchmark (Figure <ref>) has a low bandwidth utilization. In this area remote-socket memory curves shows approximately 28 ns higher latency, which leads to somewhat lower performance w.r.t. target CXL system. We detect the opposite behavior for high-bandwidth benchmarks. The remote-socket curves have a higher bandwidth saturation area. This leads to higher achieved bandwidths and the overall performance of the bandwidth-intensive benchmarks, such as the lbm illustrated in Figure <ref>. The overall performance trend is illustrated in Figure <ref>. The figure plots the results for all SPEC CPU2006 benchmarks, sorted from the lowest to the highest bandwidth utilization. For low-bandwidth benchmarks, remote-socket emulation provides up to 12% lower performance w.r.t. to the target CXL system. Both system provide a similar performance for the benchmarks with medium bandwidth utilization between 30% and 50% of CXL max theoretical bandwidth. For the high-bandwidth benchmarks, the remote-socket memory provides 11%–22% higher performance. To the best of our knowledge, this is the first study that enables a detailed simulation of CXL memory expanders based on the manufacturer's SystemC model. The model is already integrated and publicly released with ZSim, gem5 and OpenPiton Metro-MPI simulators. We also provide the performance analysis and correction factors that can be useful for future studies who want to keep useing remote-socket memory emulation, e.g. due to its much faster execution time.
http://arxiv.org/abs/2405.09866v1
20240516074315
Rethinking Multi-User Semantic Communications with Deep Generative Models
[ "Eleonora Grassucci", "Jinho Choi", "Jihong Park", "Riccardo F. Gramaccioni", "Giordano Cicchetti", "Danilo Comminiello" ]
eess.SP
[ "eess.SP", "cs.LG" ]
gobble ŁØ _ıȷłø _ +# E M H K X Y G Q F H T d ℤ ℙ ℝ ℕ ℂ 𝔼 𝕎 definitionDefinition mypropertyProperty mytheoremTheorem mylemmaLemma myconjectureConjecture corollaryCorollary myproblemQuestion myobservationObservation myremarkRemark myalgorithmAlgorithm myassumptionAssumption myexampleExample def = = ·≈ ·= t x y c d b q g i s a r u m n hφ p z o e f v w A B C D E F G H I J K L M N S V P Q R W U X Y Z 1ξ0̱ 0ΦΨΓΣΔΩΛÂ A C L D B P Q Z R I N S T U W R D EB H I SINR MSE SNR =2500 Rethinking Multi-User Semantic Communications with Deep Generative Models Eleonora Grassucci, Jinho Choi, Fellow, IEEE, Jihong Park, Senior, IEEE, Riccardo F. Gramaccioni, Student, IEEE, Giordano Cicchetti, Student, IEEE, Danilo Comminiello, Senior, IEEEE. Grassucci, R. F. Gramaccioni, G. Cicchetti, and D. Comminiello are with the Dept. of Information Engineering, Electronics, and Telecommunications of Sapienza University of Rome, Italy. Emails: {eleonora.grassucci, riccardofosco.gramaccioni, giordano.cicchetti, danilo.comminiello}@uniroma1.it. J. Choi and J. Park are with are with the School of Information Technology, Deakin University, Australia. Emails: {jinho.choi, jihong.park}@deakin.edu.au. This work was partially supported by the European Union under the Italian National Recovery and Resilience Plan (PNRR) of NextGenerationEU, partnership on “Telecommunications of the Future” (PE00000001 - program RESTART). Received – April 2024 / Accepted — ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== In recent years, novel communication strategies have emerged to face the challenges that the increased number of connected devices and the higher quality of transmitted information are posing. Among them, semantic communication obtained promising results especially when combined with state-of-the-art deep generative models, such as large language or diffusion models, able to regenerate content from extremely compressed semantic information. However, most of these approaches focus on single-user scenarios processing the received content at the receiver on top of conventional communication systems. In this paper, we propose to go beyond these methods by developing a novel generative semantic communication framework tailored for multi-user scenarios. This system assigns the channel to users knowing that the lost information can be filled in with a diffusion model at the receivers. Under this innovative perspective, OFDMA systems should not aim to transmit the largest part of information, but solely the bits necessary to the generative model to semantically regenerate the missing ones. 
The thorough experimental evaluation shows the capabilities of the novel diffusion model and the effectiveness of the proposed framework, leading towards a GenAI-based next generation of communications. Multi-User Communication, Diffusion Models, Generative Semantic Communication 28pt § INTRODUCTION In the last few years, the number of connected devices has incredibly grown worldwide, posing new challenges for communication systems. As an example, according to EUROSTAT [<https://ec.europa.eu/eurostat/web/products-eurostat-news>] , the amount of connected smart TVs in Europe has increased from 43% in 2020 to 52% in 2022, and the number of smart wearables from 17% to 26% in the same range of time. Moreover, the average percentage of employed people using digital devices for the entirety or most of their working time is approximately 30% in Europe, with some peaks higher than 40%. In addition, most of these devices are connected at the same time, introducing novel difficulties for managing multi-user communications. Meanwhile, a novel paradigm has emerged laying its foundations in the three Weaver levels of communication. Here, the first level is the technical level, whose aim is to manage the technical aspects of the transmission. The semantic level is placed right after the first one and focuses on understanding what to transmit rather than how to do it. Finally, the last one regards the effectiveness of the communication. Therefore, the so-called semantic communication was born from the middle level and aims at transmitting and reconstructing the meaning (i.e., the semantics) of the message without necessarily recovering the original bitstream <cit.>. Therefore, semantic communication frameworks can reduce bandwidth requirements and latency by transmitting only the key, semantic, information of the content. Concurrently, semantic communication has been discovered to be a fertile layout for involving generative models in communication frameworks <cit.>. Such models are among the most impressive and promising branches of artificial intelligence, possessing the ability to generate almost any kind of multimedia content ranging from text (large language models) to audio and video (diffusion models). Their key feature for semantic communication is the ability to generate content including images <cit.> and audio <cit.> or speech <cit.> from extremely compressed information such as text <cit.> or lower-dimensional and quantized data representations <cit.>, further reducing the amount of information to transmit in a communication system and expanding the range of possible applications <cit.>. The combination of the revolutionary ability of generative models with the semantic communication paradigm gave birth to the generative semantic communication research field <cit.>, which is showing promising and revolutionary results. However, none of these methods consider the multi-user scenario yet, in which conventional deep learning models have instead shown interesting performance <cit.>. Most of the generative semantic communication approaches focus on inserting generative models in the semantic level of conventional communication frameworks, leveraging their capabilities to reduce bandwidth requirements and latency while improving perceptual results at the receiver. 
In this paper, instead, we propose to go further with such methods and assign resources and communication channels to multiple users given the knowledge that the system can exploit generative models ability to regenerate missing portions of the message at each receiver. Therefore, under the proposed new paradigm, in the case of multi-user congestion, the channel-user assignment should not aim at receiving the largest part of information implying multiple transmissions over time. Rather, it should be limited to the transmission of the sufficient information required to the generative model for regenerating the missing content. More technically, we propose to express the problem of multi-user semantic communication as an inverse problem and leverage the capabilities of diffusion models to solve it <cit.>. An effective way of solving inverse problems such as inpainting and denoising is the null-space decomposition <cit.> that has demonstrated its effectiveness in deep neural networks <cit.> and in diffusion models for both images <cit.> and audio semantic communication <cit.>. Therefore, to solve the inverse problem, we formulate the diffusion model sampling algorithm according to the null-space decomposition theorem to directly and formally match the generative algorithm with the scenario of multi-user semantic communication. The proposed sampling strategy can be adopted with any pretrained diffusion model, making it extremely flexible to be involved in any pre-existing generative semantic communication framework based on diffusion models. Following the proposed method, generative models can play a key role in reshaping semantic communication systems in multi-user scenarios towards a GenAI-based next generation of communications. More concisely, our contributions are four-fold: * We propose a novel method to rethink multi-user semantic communications leveraging the potential of state-of-the-art diffusion models. * We provide a formulation that matches the muli-user semantic communication problem and the proposed diffusion model generative algorithm. * We solve the multi-user Orthogonal Frequency Division Multiple Access (OFDMA) problem by designing an effective diffusion model equipped with a novel null-space decomposition sampling method. * We prove the effectiveness of the proposed approach in multiple scenarios, across different number of subcarriers and channel noises. The rest of the paper is organized as follows. Section <ref> reports the works related to semantic communication and diffusion models, while Section <ref> sets the problem and explains the theory behind diffusion models. Then, the proposed method is introduced in Section <ref> and validated in Section <ref>, while conclusions are drawn in Section <ref>. § RELATED WORKS In this section, we introduce the related works to semantic communication and diffusion models including the recent generative semantic communication frameworks that are based on diffusion models. §.§ Semantic Communication Semantic communication is expected to play a key role in future communication systems beyond 5G and 6G <cit.>. The idea behind this new paradigm is to focus on transmitting the semantics of message or data, which is expected to contain the meaning and key features of the original content, rather than the entire original content. Such semantic information is often very small in size and invariant to perturbation as compared with its original data. 
Then, the receiver can leverage the semantic information to restore or regenerate the message, or directly involve the semantics to accomplish specific goals or perform certain tasks <cit.>. Under this presupposition, semantic communication frameworks are expected to reduce the bandwidth required for the transmission and improve robustness in poor channels. As a result, semantic communication systems could be applied to diverse applications, ranging from image transmission <cit.>, to speech <cit.>, video compression and transmission <cit.>, and it is expected to expand to many more fields of application <cit.>. §.§ Diffusion Models Denoising diffusion probabilistic models <cit.> (diffusion models, in short) have recently become the state of the art for generating multimedia content ranging from images <cit.> and audio <cit.> to video <cit.>, usually conditioned on some user-friendly representation such as textual description <cit.> or sketches <cit.>. We can identify three key elements responsible for diffusion models' success and widespread adoption. First, the generation ability of diffusion models crucially outperforms the capabilities of other models, such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs) <cit.>. Second, the sampling process of diffusion models is far more stable than the generation process of GANs <cit.>, making it a more reliable method compared to them. Lastly, in symbiosis with the development of novel theoretical diffusion models, many free-to-public demos have been released, equipped with models pretrained on extra-large-scale data that have a great ability to generalize to different test data. Open source availability crucially boosts the adoption and usability of diffusion models, allowing them to be incorporated into other novel frameworks and a plethora of applications. Together, these three aspects also favor the adoption of diffusion models in semantic communication scenarios <cit.> or multi-user ones <cit.>, where model reliability is crucial and where their ability to regenerate content from extremely compressed information perfectly fits the semantic communication scenario. Indeed, generative semantic communication frameworks based on diffusion models have clearly outperformed <cit.> existing methods based on GANs <cit.> and VAEs <cit.>. § PROBLEM FORMULATION In this section, we present the problem formulation for applying semantic communication to multi-user systems. Consider a system comprising a base station and K users for downlink transmission. Let 𝐱_k ∈ℂ^M denote the signal (represented as a vector of length M) to be received by user k. The transmitted signal by the base station through the downlink channel is given by 𝐲 = ∑_k=1^K 𝐅_k 𝐱_k ∈ℂ^L, where 𝐅_k ∈ℂ^L× M is the channel assignment matrix of user k and L ≥ M. Here, a radio resource block of length L is used to transmit K users' signals of length M. Throughout the paper, we assume OFDMA <cit.>. Thus, each element of 𝐲 is transmitted through a subcarrier. Note that it is necessary to have 𝐅_k^H 𝐅_l = 0̱, k ≠ l, to ensure orthogonal channel allocations. Let 𝐇_k be the L × L channel matrix of user k, which is diagonal in OFDMA. That is, the lth diagonal element of 𝐇_k is the (frequency-domain) channel coefficient of subcarrier l from the base station to user k. It is also assumed that Time Division Duplex (TDD) is used so that the channel matrices are known at the transmitter (i.e., the base station), thanks to the channel reciprocity.
Then, the received signal at user k over channel 𝐇_k is given by 𝐫_k = 𝐇_k 𝐲 + 𝐧_k = 𝐇_k 𝐅_k 𝐱_k + 𝐇_k ∑_l ≠ k𝐅_l 𝐱_l + 𝐧_k, where 𝐧_k is the background noise at user k, which, for simplicity, we take to be additive white Gaussian noise. If the base station wishes to send all the signals with orthogonal channel allocations, as shown in eq. (<ref>), it can be shown that L = K M. That is, in OFDMA, in order to support K users with signal vectors to be transmitted over M subcarriers, there should be a total of L = K M subcarriers. To potentially reduce the number of necessary subcarriers, two key factors should be considered: F1) Due to varying multipath fading among users, the channel matrices differ, and certain subcarriers of specific users may experience deep fading. Therefore, as assumed previously, since the base station possesses knowledge of the channel matrices, it can allocate channels to circumvent deep fading. F2) In addition to channel-adaptive allocations, it is possible to reduce the dimension of the signal using deep generative models. Taking into account the aforementioned factors, we assume that each user is allocated N subcarriers, where N < M. Let 𝐁_k ∈ℂ^L × M represent the reduced-dimension channel allocation matrix of user k. This matrix can be considered a submatrix of 𝐅_k. Moreover, it is crucial to note that these channel matrices are determined based on the channel conditions of each user, aiming to avoid deep fading. In essence, each user can only utilize subcarriers with sufficiently high channel gains. Furthermore, to ensure orthogonal allocations, the following condition must hold: 𝐁_k^H 𝐁_l = 0̱, k ≠ l. Therefore, a total of L = KN (not KM) subcarriers are required for downlink transmissions. Since N < M, it is clear that some parts of the signal, 𝐱_k, cannot be transmitted. These parts can be generated using deep generative models. To this end, let 𝐀_k = 𝐇_k 𝐁_k. Then, we have 𝐱_k = 𝐀_k^†𝐀_k 𝐱_k + (𝐈 - 𝐀_k^†𝐀_k) 𝐱_k . At user k, 𝐀_k is known (as the reduced-dimension channel allocation matrix, 𝐁_k, and the channel matrix, 𝐇_k, are known), and from eq. (<ref>), 𝐀_k 𝐱_k (with the background noise) can be obtained. Due to the orthogonal channel allocations in eq. (<ref>), the signals to the other users can be removed by applying 𝐀_k^† to 𝐫_k. In other words, 𝐀_k^†𝐫_k = 𝐀_k^†𝐀_k 𝐱_k + 𝐀_k^†𝐧_k. If a generative model at user k is able to generate the missing part, i.e., (𝐈 - 𝐀_k^†𝐀_k) 𝐱_k, then from eq. (<ref>), an estimate of 𝐱_k can be given by 𝐱̂_k = 𝐀_k^†𝐫_k + (𝐈 - 𝐀_k^†𝐀_k) 𝐱̃_k = 𝐀_k^†𝐀_k 𝐱_k + (𝐈 - 𝐀_k^†𝐀_k) 𝐱̃_k + 𝐀_k^†𝐧_k = 𝐱_k + (𝐈 - 𝐀_k^†𝐀_k) (𝐱̃_k - 𝐱_k) + 𝐀_k^†𝐧_k , where 𝐱̃_k is a generated signal at the receiver, i.e., user k, which is responsible for providing the missing part of the signal, (𝐈 - 𝐀_k^†𝐀_k) 𝐱_k. Of course, as the ratio N/M decreases, it is expected that the estimation error may increase, despite the potential for achieving higher spectral efficiency. Yet, it may still be possible to recover the signal reasonably well in terms of semantic-related metrics, even with a significant loss. § SOLVING MULTI-USER SEMANTIC COMMUNICATION WITH DIFFUSION MODELS In this section, we propose the generative semantic communication system to support multiple users, building upon the problem formulation introduced in Section <ref>. A simplified example of the system is also shown in Fig. <ref>. We apply a diffusion model to recover missing portions of the signal.
Specifically, we utilize an OFDMA system for downlink transmissions, where each user is allocated a smaller number of subcarriers (N) to transmit a signal of length M, where M > N. Consequently, certain parts of each user's signal are missing, which the proposed null-space diffusion sampling at the receivers aims to fill in. In Sec. <ref>, we first describe how the K-user communication problem is recast as a single-user null-space diffusion sampling process. This is viable as interference from other users can be mitigated, as demonstrated in eq. (<ref>), thanks to orthogonal channel allocations. Then, from Sec. <ref> onward, we omit the user index k. §.§ Denoising Diffusion Probabilistic Model The core structure of diffusion models <cit.> is a Markov chain that goes from time 0 to time T in the forward process, denoted by q, and from the time step T to 0 in the reverse process, denoted by p. On one hand, the forward direction starts from an image and progressively adds white Gaussian noise to destroy all the information in the image at time T. On the other hand, the reverse process slowly builds the desired data 𝐱_0 from the original noise sample 𝐱_T. The transition probability in the forward process is normally-distributed as q(𝐱_t | 𝐱_t-1) = 𝒩(𝐱_t; √(1-β_t)𝐱_t-1, β_t 𝐈), so that 𝐱_t = √(1-β_t)𝐱_t-1 + √(β_t)ϵ, where ϵ∼𝒩(0, 𝐈), 𝐱_t is the intermediate noisy image at time t, and β_t is the pre-defined variance schedule. By reparameterizing with α_t = 1-β_t and α̅_t = ∏_i=0^t α_i, the forward process formulation becomes: q(𝐱_t | 𝐱_0) = 𝒩(𝐱_t; √(α̅_t)𝐱_0, (1-α̅_t) 𝐈). Instead, the posterior distribution of the transition probabilities of the reverse process is derived from the forward process equations by applying the Bayes theorem as: p(𝐱_t-1|𝐱_t, 𝐱_0) = q(𝐱_t|𝐱_t-1) q(𝐱_t-1|𝐱_0)/q(𝐱_t|𝐱_0) = 𝒩(𝐱_t-1; μ_t(𝐱_t, 𝐱_0), σ^2_t 𝐈), where the mean and the variance have the following forms: μ_t(𝐱_t, 𝐱_0) = 1/√(α_t)(𝐱_t - (1-α_t)/√(1-α̅_t)ϵ), σ^2_t = (1-α̅_t-1)/(1-α̅_t)β_t. Under this construction, diffusion models involve a denoising neural network 𝒵_θ in the forward process that is trained to predict the noise at the given time step. More precisely, the noise ϵ∼𝒩(0, 𝐈) is applied at a randomly-sampled time step t to the original image 𝐱_0, which will be the input to the network 𝒵_θ to update its parameters θ. Once the training is finished, the denoising network can be exploited to progressively denoise the noise sample 𝐱_T until generating the new image 𝐱_0. Therefore, the whole model is trained to let the network match the noise ϵ by minimizing the following loss function: ℒ = ‖ϵ - 𝒵_θ(√(α̅_t)𝐱_0 + √(1- α̅_t)ϵ, t)‖_2^2. §.§ Multi-User Communication via Inverse Generation In multi-user OFDMA communication with K users and a total of L=KN subcarriers, suppose that user k downloads a length M signal 𝐱_k using only N<M subcarriers. Here, we consider that N subcarriers are selected based on the N highest gains in 𝐇_k while ensuring the orthogonal allocation constraint in eq. (<ref>). In this case, the received signal 𝐫_k=𝐇_𝐤𝐁_k 𝐱_k+ 𝐧_k at user k results from the original signal 𝐱_k being perturbed by linear scaling 𝐇_k and masking 𝐁_k (i.e., 𝐀_k), as well as by additive noise 𝐧_k. Therefore, reconstructing 𝐱_k from 𝐫_k boils down to the problems of linear inverse scaling, inpainting, and denoising, respectively. These three problems can be combined as an inverse generation problem using a diffusion model.
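Before detailing the sampler, a minimal NumPy sketch of this per-user signal model and of the range/null-space split it induces may help; the toy sizes, the random channel draw, and the variable names are illustrative assumptions rather than the simulation code used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 5              # signal length and allocated subcarriers per user (toy sizes)
L = 2 * N                # two users -> L = K*N subcarriers in total

# Illustrative allocation: this user gets the first N subcarriers (a submatrix of F_k).
B = np.zeros((L, M)); B[:N, :N] = np.eye(N)                 # reduced-dimension allocation B_k
H = np.diag(rng.normal(size=L) + 1j * rng.normal(size=L))   # diagonal OFDMA channel H_k
A = H @ B                                                   # A_k = H_k B_k

x = rng.normal(size=M) + 1j * rng.normal(size=M)            # user signal x_k
n = 0.01 * (rng.normal(size=L) + 1j * rng.normal(size=L))   # background noise n_k
r = A @ x + n                                               # received signal (other users already removed by orthogonality)

A_pinv = np.linalg.pinv(A)
range_part = A_pinv @ r                                     # A^† r = A^† A x + A^† n
null_proj = np.eye(M) - A_pinv @ A                          # I - A^† A
x_tilde = np.zeros(M, dtype=complex)                        # placeholder for the content the generative model must supply
x_hat = range_part + null_proj @ x_tilde                    # estimate of x_k as in the equation above

# The last M-N entries of x never reach the receiver through A and must be generated.
print(np.round(np.abs(x_hat - x), 3))
```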
The standard diffusion models are compatible with real values, whereas 𝐱_k∈ℂ^M and its subsequent parameters are complex-valued, as defined in Sec. <ref>. To address this mismatch, we hereafter consider that 𝐱_k = [𝐱_k,R^T, 𝐱_k,I^T]^T∈ℝ^M where 𝐱_k,R∈ℝ^M/2 and 𝐱_k,I∈ℝ^M/2 are the real and imaginary parts of a complex signal in ℂ^M/2. User k, equipped with a diffusion model, aims to reconstruct 𝐱̂_k ≈𝐱_k from 𝐫_k. To this end, the diffusion model at user k generates 𝐱̂_k through T diffusion sampling steps. At each t-th sampling step, the denoising parameters are optimized for given 𝐀_k and 𝐧_k, and the generated sample is combined with 𝐫_k while satisfying its desired range and null space characteristics. This process can be implemented by modifying the null-space diffusion sampling technique <cit.>. The aforementioned solution can be applied to any user k under OFDMA owing to orthogonal channel allocations. Therefore, we henceforth focus only on a single user's null-space diffusion sampling process while omitting the user index k, as we shall elaborate in Sec. <ref>. §.§ Null-Space Diffusion Sampling for Inverse Generation From the null-space decomposition <cit.>, we know that, for a matrix 𝐀∈ℝ^n × n: Definition 1: The range space of 𝐀, R(𝐀), is the subspace spanned by the columns of 𝐀. The null space of 𝐀, N(𝐀), is the solution space of the linear system 𝐀𝐱=0. From the Dimension Theorem, we also know that dim R(𝐀) + dim N(𝐀) = n, and that if 𝐀 is non-singular, then N(𝐀) = 0 and R(𝐀) ∩ N(𝐀) = 0. If 𝐀 is singular, then N(𝐀) ≠ 0 and it may be possible that R(𝐀) ∩ N(𝐀) ≠ 0, but it can be proven that the null-space decomposition holds the same. The operator 𝐀^†𝐀 projects 𝐱 in the range space of 𝐀 because 𝐀𝐀^†𝐀𝐱 = 𝐀𝐱 = 𝐫. Instead, the operator (𝐈-𝐀^†𝐀) projects 𝐱 in the null space of 𝐀 because 𝐀(𝐈 - 𝐀^†𝐀)𝐱 = 0. Following this formulation, any sample 𝐱 can be decomposed in the range space and the null space as: 𝐱 = 𝐀^†𝐀𝐱 + (𝐈-𝐀^†𝐀) 𝐱 Considering a generic inverse problem 𝐫 = 𝐀𝐱, the solution to this problem 𝐱̂ must satisfy two constraints: Consistency: 𝐀𝐱̂=𝐫 Realness: 𝐱̂∼ q(𝐱). The solution 𝐱̂ that satisfies the consistency constraint is: 𝐱̂ = 𝐀^†𝐫 + (𝐈- 𝐀^†𝐀) 𝐱̃, which resembles the formulation of the multi-user semantic communication problem in eq. (<ref>), except for the noisy element that we will introduce later in this Section. However, while the formulation in eq. (<ref>) satisfies the consistency constraint, the element that instead controls the realness constraint is 𝐱̃. Therefore, the objective of the diffusion model training is generating the proper null space (𝐈 - 𝐀^†𝐀 )𝐱̃ for the range space 𝐀^†𝐫 such that the realness constraint is satisfied, that is 𝐱̂∼ q(𝐱). Although the above formulation seems straightforward, additional considerations should be evaluated when designing null-space diffusion models <cit.>. Indeed, as stated in Section <ref>, when sampling 𝐱_t-1 from p(𝐱_t-1|𝐱_t, 𝐱_0) in the reverse process, where 𝐱_0 denotes the original image, the intermediate state 𝐱_t is perturbed by Gaussian noise, making the null-space decomposition unfeasible. Therefore, we have to introduce a reparameterization to obtain noise-free intermediate states and apply the null-space decomposition. Precisely, we reparameterize the mean μ_t(𝐱_t, 𝐱_0) and the variance σ_t^2 of the reverse process transition distribution p(𝐱_t-1|𝐱_t, 𝐱_0) as: μ_t(𝐱_t, 𝐱_0) = (√(α̅_t-1)β_t)/(1-α̅_t)𝐱_0 + (√(α_t)(1-α̅_t-1))/(1-α̅_t)𝐱_t, σ^2_t = (1-α̅_t-1)/(1-α̅_t)β_t.
Although the noise-free image 𝐱_0 is still unknown in intermediate states, we can obtain an estimate 𝐱_0|t of it from 𝐱_t by reversing eq. (<ref>) and predicting the noise with the denoising model ϵ_t = 𝒵_θ(𝐱_t, t), i.e.: 𝐱_0|t = 1/√(α̅_t)(𝐱_t - √(1-α̅_t)𝒵_θ(𝐱_t, t)). To obtain the final estimate of the noise-free intermediate information, we can fix the range space, yielding: 𝐱̂_0|t = 𝐀^†𝐫 + (𝐈 - 𝐀^†𝐀) 𝐱_0|t = 𝐱_0|t - 𝐀^† (𝐀𝐱_0|t- 𝐀𝐱). We employ the fresh estimation of eq. (<ref>) in the reparameterization of the mean and of the variance in eq. (<ref>), thereby yielding the intermediate sample 𝐱_t-1 as: 𝐱_t-1 = (√(α̅_t-1)β_t)/(1-α̅_t)𝐱̂_0|t + (√(α_t)(1-α̅_t-1))/(1-α̅_t)𝐱_t + σ_t ϵ, with ϵ∼𝒩(0, 𝐈). Thus far, we have formulated the solution considering the problem without any corruption from noise that can instead occur in the transmission over the channel. Now, we extend the approach to more generic noisy problems of the form 𝐫=𝐀𝐱+𝐧 with 𝐧∼𝒩(0, σ_𝐫^2𝐈). When applying the operator 𝐀^† to these problems, a further noisy term 𝐀^†𝐧 is introduced in eq. (<ref>) as: 𝐱̂_0|t = 𝐀^†𝐫 + (𝐈 - 𝐀^†𝐀) 𝐱_0|t = 𝐱_0|t - 𝐀^† (𝐀𝐱_0|t- 𝐀𝐱) + 𝐀^†𝐧, leading to distorted or noisy final samples. To make the process aware of the additional noisy term and model the distortion brought from it, we can introduce the parameters Σ_t and Φ_t as: 𝐱̂_0|t = 𝐱_0|t - Σ_t 𝐀^† (𝐀𝐱_0|t- 𝐫), p̂(𝐱_t-1|𝐱_t, 𝐱̂_0|t) = 𝒩(μ_t(𝐱_t, 𝐱̂_0|t), Φ_t𝐈). The two parameters Σ_t and Φ_t play a crucial role in scaling the range space correction and the noise σ_tϵ in p̂(𝐱_t-1|𝐱_t, 𝐱̂_0|t), respectively. However, to ensure the correction properly works, such parameters have to satisfy two constraints: * To maximize the consistency constraint via the range space correction, Σ_t should tend to the identity matrix; * To guarantee that the pretrained model 𝒵_θ is able to remove the noise in 𝐱_t-1, Φ_t should ensure that the noise variance in 𝐱_t-1 is equal to the scheduled one σ_t^2. By approximating the additive noise in eq. (<ref>) with 𝐀^†𝐧∼𝒩(0, σ_𝐫^2 𝐈), it is possible to simplify the two parameters to be Σ_t = λ_t𝐈 and Φ_t = γ_t𝐈 <cit.>. Consequently, we can update the intermediate state 𝐱_t-1 in eq. (<ref>) with the constraint satisfaction by setting: λ_t = 1 if σ_t ≥ (√(α̅_t-1)β_t/(1-α̅_t))σ_𝐫, and λ_t = σ_t/σ_𝐫 if σ_t < (√(α̅_t-1)β_t/(1-α̅_t))σ_𝐫, for the first constraint, and γ_t = σ_t^2 - ((√(α̅_t-1)β_t/(1-α̅_t))λ_t σ_𝐫)^2, i.e., ((√(α̅_t-1)β_t/(1-α̅_t))λ_t σ_𝐫)^2 + γ_t = σ_t^2, for the second constraint. The optimal value of the denoising parameter can be directly estimated from the received signal as <cit.>: σ_𝐫^* = (max(𝐫) - min(𝐫)) ·σ_𝐫. § EXPERIMENTAL EVALUATION To thoroughly assess the applicability and the performance of the proposed framework, we conduct several experiments with different scenarios, datasets, and evaluations. §.§ Scenario We consider an OFDMA multi-user scenario for experimental evaluation. As mentioned earlier, N subcarriers can be allocated to each user's downlink transmission among a total of L = K N subcarriers. As a result, it is possible to choose N subcarriers with sufficiently high channel gains for each user, as each user has different channel conditions from the others. In other words, a multi-user diversity gain <cit.> can be exploited to assign a subset of subcarriers to each user. Furthermore, as the base station knows the channel coefficients, it is possible to perform power control to equalize the effective channel gains. Consequently, it is sufficient to consider the Additive White Gaussian Noise (AWGN) channel model for each user.
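As a toy illustration of this allocation step, the snippet below sketches one possible greedy assignment of N high-gain subcarriers per user before power control; the greedy rule and the variable names are our own simplifying assumptions, not the exact procedure used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 2, 5                      # users and subcarriers per user
L = K * N                        # total subcarriers
gains = np.abs(rng.normal(size=(K, L)) + 1j * rng.normal(size=(K, L)))  # |H_k| per subcarrier

assigned = [[] for _ in range(K)]
free = set(range(L))
# Greedy multi-user diversity: repeatedly give each user its best still-free subcarrier,
# which keeps the allocations orthogonal across users by construction.
for _ in range(N):
    for k in range(K):
        best = max(free, key=lambda l: gains[k, l])
        assigned[k].append(best)
        free.remove(best)

# Power control then equalizes the effective gains, so each user sees an AWGN-like channel.
for k in range(K):
    power_scaling = 1.0 / gains[k, assigned[k]]
    print(f"user {k}: subcarriers {sorted(assigned[k])}, scaling {np.round(power_scaling, 2)}")
```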
In addition, for signal modulation per subcarrier, 16 Quadrature Amplitude Modulation (QAM) is employed with SNRs in the range of {-10, -5, 0, 5, 10} dB. The signal is then demodulated at the receivers with 16QAM as well. §.§ Comparison methods We compare the proposed method with three well-known approaches. The first approach is based on the Low-Density Parity-Check (LDPC) code of the IEEE 802.11 WiFi standard as a channel code. Here, we do not consider source compression; therefore, the LDPC code is directly applied to the modulated source image. Going further, since our method is learning-based, we compare it with Deep learning-based Joint Source-Channel Coding (DeepJSCC) in two different versions. This model comprises a neural encoder at the sender side and a neural decoder at the receiver side. Usually, these two models are trained in an end-to-end fashion. Within the family of DeepJSCC models, we select a promising method that extends these frameworks to an OFDM-based system, which exploits single-tap frequency domain equalization to mitigate the multipath fading channel <cit.>. Two variants of the DeepJSCC-OFDM method have been proposed. The implicit one (DeepJSCC-OFDM-I) concatenates the frequency domain pilots and the data symbols together and directly feeds them into the decoder model. Instead, in the explicit approach (DeepJSCC-OFDM-E), the authors introduce two residual light-weight neural networks that they call subnets to learn residual errors in the channel estimation and equalization. §.§ Datasets We conduct the evaluation on two different datasets. First, we consider the CelebA-HQ dataset, which comprises 30k high-quality images of human faces at resolution 256 × 256 and is widely adopted to test generative models and communication systems. We use this dataset as the benchmark to conduct comparisons with our method and the baselines since some of them were originally trained on the same dataset. However, we aim at scaling up the possibilities of the proposed approach, and we test the performance of our method on large-scale datasets and with dataset-free images. Indeed, we also consider a diffusion model pretrained on ImageNet, which comprises 1 million labeled images collected from Flickr, depicting 1000 object categories. Among these samples, a large set of the classes in this dataset are animals, plants, or other nature-related objects. Moreover, many images contain humans. Therefore, it is quite challenging to reconstruct or generate other kinds of objects, such as urban scenes or buildings. Following <cit.>, the images are reshaped to 128 × 128 for training, while at the testing stage, we reshape the images to 256 × 256. While DeepJSCC approaches fail to train on this dataset due to data complexity, the diffusion model successfully completes the training, allowing us to apply the proposed null-space sampling algorithm on it and to evaluate the performance on the ImageNet test set. It is worth noting that our sampling scheme can be applied to any pretrained diffusion model. In addition, with the model trained on ImageNet, we perform zero-shot image inpainting on dataset-free images, which are images collected directly from our camera and that are not contained in any public datasets. To test the generalization ability of the proposed model, we select two extremely challenging images, full of details and containing under-represented classes, such as urban scenes and buildings. The two selected images are reported in the first column of Fig. <ref>.
§.§ Metrics We evaluate the performance of the proposed method under four distinct metrics. As a first evaluation, we involve the common Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR), for both of which higher values are better. However, these metrics assess the performance pixel-by-pixel or bit-per-bit without effectively evaluating whether the semantics of the content have been preserved at the receiver. Therefore, while both SSIM and PSNR are suitable for evaluating conventional communication systems, they are being progressively discarded in the evaluation of semantic communication systems <cit.>. A more appropriate metric to evaluate the semantic transmission of images is the Learned Perceptual Image Patch Similarity (LPIPS) <cit.>, which has been demonstrated to be much more aligned with human perception than SSIM and PSNR <cit.>. The LPIPS metric comprises complex and fine-tuned steps with the aim of evaluating the distance of features in different neural networks. At a low level, let us consider a simple convolutional layer and two generic images in ℝ^C× H × W, with C being the channel dimension, and H and W the height and width. The LPIPS is computed by evaluating the cosine distance between the two images (in the channel dimension, e.g., the RGB channels for colored images) and averaging across the spatial dimensions H and W, as well as over the layers of the network. The lower the LPIPS value, the more perceptually similar the two samples are. In our work, as the baseline network to extract features, we consider the conventional VGG model. Finally, we consider a further metric that is currently the most widely adopted for evaluating generative models' performance in the image domain. The Fréchet Inception Distance (FID) relies on the Fréchet distance between the features extracted from the InceptionV3 network of the original and of the regenerated images. While SSIM measures image degradation through structural information, FID estimates how far the distributions of original and regenerated images are from each other. Therefore, lower FID values correspond to more plausibly regenerated samples. Formally, considering the two sets of features coming from the real (r) and the generated (g) sets normally distributed, the FID assumes the following form: FID = ‖μ_r - μ_g‖^2 + Tr (Σ_r + Σ_g - 2 (Σ_r Σ_g )^1/2), where μ_r, μ_g represent the means and Σ_r, Σ_g the covariance matrices of the two Gaussian distributions. §.§ Neural Model We consider two backbone models for the proposed null-space diffusion model framework. Importantly, the proposed null-space sampling can be adopted with any pretrained diffusion model, as it is training-independent. To this end, we do not retrain any model and we select two pretrained models that can fit our scenario. For the CelebA-HQ dataset, we employ the pretrained model of <cit.>, while for ImageNet we involve the pretrained model from <cit.>. For both architectures, the backbone model is a residual U-Net, with global attention layers at three different resolutions, that are 32 × 32, 16 × 16, and 8 × 8, with 64 channels per each of the 4 heads. The two residual blocks for upsampling and downsampling the activations are inspired by <cit.>, as suggested in <cit.>. Similarly, the residual connections are scaled by 1 / √(2). Moreover, adaptive group normalization is used for injecting forward and reverse process timesteps into residual blocks.
For training, the number of diffusion steps T is set equal to 1000, with a linear noise schedule. §.§ Results To provide an exhaustive evaluation, we report both objective and subjective evaluations of the experiments. We conduct experiments to validate the ability of the proposed models to regenerate the data lost during transmission due to N < M subcarriers, then the capabilities of restoring samples from channel noise, and finally we consider both the scenario together. Fill the gaps. Firstly, we evaluate the performance of the proposed method in regenerating the missing portions of data lost due to the constraint N < M in the number of subcarriers as stated in Sec. <ref>. Figure <ref> displays the value of SSIM, PSNR, FID, and LPIPS under the different values of N, say 256, 205, 180, and 150, corresponding to a fraction N/M equal to {1, 0.8, 0.7, 0.6} on the CelebA-HQ dataset. The LDPC method, in the blue line, is unable to face the missing data case, and its performance drastically sinks as the N/M ratio decreases. On the contrary, the proposed method, in the red line, preserves high performance across each situation, clearly outperforming all other comparisons. Indeed, our method barely shows any loss in LPIPS and FID scores from 0.2 loss to 0.6 subcarriers lost, due to its potential to inpaint missing portions of images. This result proves the capability of our method as a crucial component of future multi-user OFDMA systems, in which the amount of information to transmit may be strictly related to the channel state and to the ability of the null-space diffusion model to regenerate the missing portions of data. Moving to a subjective evaluation, Figure <ref> shows random samples from the ImageNet test set with received and infilled images for a scenario with two users and N/M in {0.8, 0.7, 0.6}, corresponding again to N={ 205, 180, 150}. Despite several chunks of lost information, the proposed method excellently fills the gaps to be semantically consistent with the whole picture and with the neighbors of the gaps. Even when the gaps almost completely cover some objects in the picture (i.e., the lighthouse in the last picture, or the arch decoration in the middle one), the proposed method inpaints the missing information in order to be extremely consistent with the neighbor content. Moving forward, additional results that shows the ability of our method are shown in Fig. <ref>. Here, it is notable that the tested images are dataset-free and are not contained in any public online dataset, as they are pictures directly taken by the authors of this work. However, even in this very challenging scenario, the proposed method is able to restore the image and fill the gaps with semantic-consistent content. Denoise. Together with the ability to inpaint image gaps, we also test the robustness of the proposed method under different noise channel SNRs. We consider extremely strong conditions and evaluate the performance with SNR values within {-10, -5, 0, 5, 10}. As Figure <ref> highlights, the proposed method far exceeds any other comparison whose performance drastically drops the lower the SNR becomes. For each metric we consider, our model preserves the performance by incomparably denoising and regenerating corrupted samples at the receiver. Indeed, thanks to the iterative denoising of the sampling procedure as explained in Subsection <ref>, the null-space diffusion model can effectively denoise the received images even in the case of extremely low SNR values. 
This result is also strengthened by the visual comparison we report in Fig. <ref>, in which a random test sample from the CelebA-HQ dataset is transmitted with the challenging SNR equal to 0 and -5. The received image is highly corrupted by the AWGN channel noise, and most of the existing approaches fail to recover the ground truth (original) transmitted sample. On the contrary, our method excellently restores the corrupted sample producing a visually-pleasant regenerated image. Altogether: denoise & fill the gaps. Lastly, we test the capabilities of the proposed method to both denoise and fill the missing data portions altogether. We select two N/M ratios, which are 0.7 and 0.6, and for each one with simulated experiments with extremely low SNR values, equally spaced from -10 to 10. Figure <ref> shows the results for each scenario and the three comparison methods against our approach. From the curves in Fig. <ref>, it is once more clear that the proposed method is robust against both aggressive channel noise corruptions and low numbers of subcarriers with N strictly lower than M. Indeed, according to the four objective metrics (SSIM, PSNR, FID, and LPIPS), our method keeps high performance across the different scenarios, while other methods progressively fail. In conclusion, the experimental evaluation undoubtedly proves the efficacy of the proposed null-space diffusion model as a novel generative-based framework for multi-user semantic communication. Thanks to the null-space sampling procedure, the method can effectively solve communication inverse problems and fill the gaps due to the multi-user scenario in received information, as well as denoise the content from extremely aggressive channel noise. §.§ How low can the N/M ratio be for our model? We test the proposed method in extreme conditions, including all the possible N/M ratios up to 0.1 to understand the furthest capabilities of our approach. Figure <ref> shows the SSIM curve and the respective regenerated images for N/M ratios equally spaced from 0.7 to 0.1. From the curve and the visual analysis, it is clear how our method still regenerates plausible samples up to N/M = 0.4, which means only 102 subcarriers over 256, with an SSIM equal to 0.8. Moreover, the performance is acceptable also up to an N/M ratio of 0.2, in which the system is using only 51 subcarriers among the total 256, obtaining an SSIM of 0.66 and recognizable regenerated image. The performance consistently drops only at the ultimate ratio of 0.1 (26 subcarriers). In conclusion, this experiment shows how the proposed method is exceptionally robust to extremely low number of employed subcarriers, proving once more the capability of our model to be involved in multi-user scenarios. § CONCLUSIONS AND FUTURE WORKS In this paper, we introduced and formulated a novel generative model-based semantic communication framework for multi-user scenarios. The proposed null-space sampling technique regenerates each user's missing signals, which have been deliberately not sent to save radio resources for other users. It was also shown that the proposed technique can concurrently correct noisy channel perturbations. Across a wide range of SNRs, including extremely low SNR regimes, our proposed diffusion model-based framework has consistently achieved higher perceptual similarities than classical LDPC and autoencoder-based semantic communication frameworks such as DeepJSCC. 
Extending this preliminary study, where full channel knowledge at the transmitter is assumed, to incorporate the impact of channel estimation could be an interesting topic for future research. Another interesting direction could be applying diffusion sampling acceleration techniques to reduce computing latency and complexity at the receiver.
http://arxiv.org/abs/2405.09882v1
20240516080536
DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection
[ "Yuhao Sun", "Lingyun Yu", "Hongtao Xie", "Jiaming Li", "Yongdong Zhang" ]
cs.CV
[ "cs.CV", "cs.AI" ]
With the rapid development of face recognition (FR) systems, the privacy of face images on social media is facing severe challenges due to the abuse of unauthorized FR systems. Some studies utilize adversarial attack techniques to defend against malicious FR systems by generating adversarial examples. However, the generated adversarial examples, i.e., the protected face images, tend to suffer from subpar visual quality and low transferability. In this paper, we propose a novel face protection approach, dubbed DiffAM, which leverages the powerful generative ability of diffusion models to generate high-quality protected face images with adversarial makeup transferred from reference images. To be specific, we first introduce a makeup removal module to generate non-makeup images utilizing a fine-tuned diffusion model with guidance of textual prompts in CLIP space. As the inverse process of makeup transfer, makeup removal can make it easier to establish the deterministic relationship between makeup domain and non-makeup domain regardless of elaborate text prompts. Then, with this relationship, a CLIP-based makeup loss along with an ensemble attack strategy is introduced to jointly guide the direction of adversarial makeup domain, achieving the generation of protected face images with natural-looking makeup and high black-box transferability. Extensive experiments demonstrate that DiffAM achieves higher visual quality and attack success rates with a gain of 12.98% under the black-box setting compared with the state of the art. The code will be available at https://github.com/HansSunY/DiffAM. § INTRODUCTION Recent years have witnessed major advances in face recognition (FR) systems based on deep neural networks (DNNs), which have been applied to various scenarios. However, the expanding capabilities of FR systems have raised concerns about the threats they pose to facial privacy. Particularly, FR systems have the potential for unauthorized surveillance and monitoring, which can analyze social media profiles without consent, given the widespread availability of face images on social media <cit.>. Therefore, it is crucial to find an effective approach to protect facial privacy against unauthorized FR systems. Many studies leverage adversarial attacks<cit.> for facial privacy protection, which generate noise-based<cit.> or patch-based<cit.> perturbations on face images. Nevertheless, to achieve ideal attack effects, most of the adversarial perturbations generated by these methods are noticeable and cluttered. Consequently, the protected face images tend to suffer from poor visual quality. To gain more natural-looking adversarial examples, makeup-based methods<cit.> are attracting considerable attention. These methods organize perturbations as makeup, which can generate protected face images with adversarial makeup. However, such makeup-based methods have the following problems: (1) Subpar visual quality. Most of the existing works generate makeup with generative adversarial networks<cit.> (GANs). The protected face images generated by these GAN-based approaches often have unexpected makeup artifacts and struggle to preserve attributes unrelated to makeup, such as background and hair, leading to poor visual quality. (2) Weakness in fine-grained makeup generation.
The fine-grained information of the generated makeup, like the position, color shade, range, and luminosity, may not align consistently with the expected makeup, especially for text-guided makeup generation methods, as shown in <ref>. (3) Low black-box transferability. Attack effects highly rely on robust makeup generation<cit.>. Due to the limitation of makeup generation quality proposed above, these methods suffer from low attack success rates under the black-box setting, including commercial APIs. In summary, it is still challenging to simultaneously achieve satisfying makeup generation and good attack effects in black-box scenarios. Diffusion models<cit.> have shown better performance than GANs in image generation tasks thanks to a more stable training process and better coverage of the image distribution<cit.>. Recent works explore guiding diffusion models in CLIP<cit.> space with textual prompts<cit.>, demonstrating promising results. Thus, it is encouraging to utilize diffusion models to generate protected face images with both high visual quality and transferability. However, for more refined tasks like makeup transfer, textual prompts are too coarse-grained for guidance, as illustrated in <ref>. So it is worth considering a more fine-grained way of direction guidance in CLIP space for diffusion models. To address the above problems, we observe that although it is hard to control the refined generation of reference makeup directly with textual prompts, the makeup of a reference face image can be easily removed by a fine-tuned diffusion model with guidance of textual prompts in CLIP space<cit.>. Through this inverse process of makeup generation, the makeup and non-makeup domains can be deterministically connected. Following this line of thought, we propose DiffAM, a novel diffusion-based adversarial makeup transfer framework to protect facial privacy. The overall pipeline of DiffAM is shown in <ref>. DiffAM aims to generate protected face images with adversarial makeup style transferred from a given reference image. It is designed as two modules, a text-guided makeup removal module and an image-guided makeup transfer module. In the text-guided makeup removal module, we aim to remove the makeup of reference images, obtaining the corresponding non-makeup reference images. This deterministic process simplifies the exploration of the relationship between the makeup and non-makeup domains. Notably, the difference between the latent codes of the makeup and non-makeup versions of the reference image in CLIP space indicates the accurate direction from the non-makeup domain to the makeup domain, providing alignment information for fine-grained makeup transfer. In the image-guided adversarial makeup transfer module, a CLIP-based makeup loss is proposed, combined with an ensemble attack strategy to control the precise generation direction and distance to the adversarial makeup domain. In this way, high-quality makeup with strong transferability can be generated with fine-grained cross-domain guidance in CLIP space with diffusion models. Extensive experiments on the CelebA-HQ<cit.> and LADN<cit.> datasets demonstrate the effectiveness of our method in protecting facial privacy against black-box FR models with a gain of 12.98%, while achieving outstanding visual quality. In summary, our main contributions are: * A novel diffusion-based adversarial makeup transfer method, called DiffAM, is proposed for facial privacy protection, intending to craft adversarial faces with high visual quality and black-box transferability.
* A text-guided makeup removal module is designed to establish the deterministic relationship between non-makeup and reference makeup domains, offering precise cross-domain alignment guidance for makeup transfer. * A CLIP-based makeup loss is proposed for refined makeup generation. It consists of a makeup direction loss and a pixel-level makeup loss, which jointly control the direction and distance of makeup generation. § RELATED WORKS §.§ Adversarial Attacks on Face Recognition Due to the vulnerability of DNNs to adversarial examples<cit.>, many methods have been proposed to attack DNN-based face recognition (FR) systems. According to the knowledge about the target FR model, the attacks can be categorized into two main types, white-box attacks<cit.> and black-box attacks<cit.>. In white-box attacks, the attacker requires complete information about the target models. However, it is hard to get full access to unauthorized FR systems in real-world scenarios. So black-box attacks, without the limitation of knowledge about the target models, are more suitable in such scenarios. Noise-based methods<cit.>, a common form of black-box attack, can generate transferable adversarial perturbations on face images. But due to the ℓ_∞ constraint of noise, the attack strength cannot be guaranteed. For better attack effect, patch-based methods<cit.> add abrupt adversarial patches to the limited region of face images. Although these methods attain a measure of privacy protection, the visual quality of the resulting protected face images is often compromised and suffers from weak transferability. Recent works attempt to protect face images with adversarial makeup<cit.>, which is an ideal solution for balancing visual quality and transferability. These methods hide the adversarial information in the generated makeup style, which can fool FR systems in an imperceptive way. However, existing makeup-based methods tend to suffer from poor visual quality and low transferability. And the attributes unrelated to makeup are hard to be completely preserved. Therefore, in this work, we propose a novel face protection approach DiffAM to improve the quality and black-box transferability of adversarial makeup through fine-grained guidance in CLIP space. §.§ Makeup Transfer Makeup transfer<cit.> aims to transfer makeup styles from the reference faces to the source faces while preserving the original face identity. As a typical image-to-image translation task, many approaches employ generative adversarial networks (GANs) for makeup transfer. BeautyGAN<cit.> first introduces a dual input/output GAN to achieve makeup transfer and removal simultaneously. Moreover, it proposes a pixel-wise histogram matching loss as guidance for makeup transfer in different face regions which has been subsequently adopted by many methods. LADN<cit.> adopts multiple overlapping local discriminators and asymmetric losses for heavy facial makeup transfer. Besides the above GAN-based methods, BeautyGlow<cit.> uses the Glow framework for makeup transfer by decomposing the latent vector into non-makeup and makeup parts. Taking advantage of the good visual properties of makeup transfer, we apply the concept of makeup style to facial privacy protection. The proposed DiffAM organizes the distribution of adversarial information semantically into adversarial makeup, which can minimize the impact on the visual quality of the protected face images while ensuring the effectiveness of attacks on the FR system. 
§.§ Diffusion model and Style Transfer Diffusion models<cit.> are a class of probabilistic generative models, which have impressive performance in generating high-quality images. They have been applied to various tasks, such as image generation<cit.>, image editing<cit.>, image super-resolution<cit.> and style transfer<cit.>. Style transfer is an image-to-image translation<cit.> task that combines the content of a source image and the style of a reference image. Existing diffusion-based methods leverage the alignment between text and images in CLIP space for text-driven style transfer<cit.>. As a subtask of style transfer, makeup transfer can also be guided with text. However, the control of text is too coarse for more refined makeup transfer in comparison to global style transfer. Given a reference makeup image, simply using text is insufficient to generate precise makeup, such as the intensity, shape, and position. Considering the limitation of text, we point out that makeup removal, the inverse process of makeup transfer, can provide deterministic guidance from the non-makeup domain to the makeup domain. Moreover, a CLIP-based makeup loss is introduced for image-driven makeup transfer. § METHOD §.§ Problem Formulation Black-box attacks on face recognition (FR) systems can be further divided into targeted attacks (i.e., impersonation attacks) and non-targeted attacks (i.e., dodging attacks). For more efficient protection of face images, we focus on targeted attacks, which aim to mislead FR systems to recognize the protected faces as the specified target identity. The targeted attack can be defined as an optimization problem: min_x^' L_adv=𝒟(m(x^'),m(x^*)), where x^' is the protected face image, x^* is the target face image, m represents the feature extractor of FR models, and 𝒟(.) represents a distance metric. Particularly, as for adversarial makeup transfer, the protected face image x^' is obtained by transferring the makeup style from the reference image y to the clean face image x, which can be formulated as: x^'=𝒢(x,y), where 𝒢 is an adversarial makeup transfer network. §.§ DiffAM To generate natural-looking and transferable adversarial makeup against FR models, DiffAM aims to explore precise and fine-grained guidance of generation from the non-makeup domain to the adversarial makeup domain, which is the overlap between the reference makeup domain and the adversarial domain, as shown in <ref>(a). To achieve this, DiffAM consists of two stages: text-guided makeup removal and image-guided adversarial makeup transfer, as illustrated in <ref>. The details of each component are described as follows. §.§.§ Text-guided Makeup Removal Adopting makeup transfer against FR models, an intuitive idea is to directly use text pairs to guide the generation of makeup. However, the coarse-grained text can hardly build the precise relationship between the non-makeup domain and the reference makeup domain. Concretely, the details of the reference makeup, such as color depth, shape, etc., are difficult to control in text, resulting in undesired makeup generation. To eliminate the ambiguity caused by textual guidance, we innovatively design the text-guided makeup removal module to remove the makeup style of the reference image y and obtain the corresponding non-makeup image ŷ with the guidance of a text pair in CLIP space.
The pair of reference images with and without makeup can connect the makeup domain and the non-makeup domain in CLIP space, as illustrated in <ref>(b), providing the deterministic cross-domain guidance for the subsequent stage of adversarial makeup transfer. Given the reference image y, we first convert it to the latent y_t_0 by the forward diffusion process with a pre-trained diffusion model ϵ_θ_1. In the reverse diffusion process, the diffusion model ϵ_θ_1 is fine-tuned for makeup removal to obtain the non-makeup image ŷ, which is guided by the directional CLIP loss ℒ_MR <cit.>: ℒ_MR=1-Δ I_y·Δ T/‖Δ I_y‖‖Δ T‖, where Δ I_y=E_I(ŷ(θ̂_1))-E_I(y) and Δ T=E_T(t_clean)-E_T(t_makeup). Here, E_I and E_T are the image and text encoders of the CLIP model<cit.>, ŷ(θ̂_1) is the sampled image from y_t_0 with the optimized parameter θ̂_1, t_clean and t_makeup are the text descriptions for the non-makeup and makeup domains, which can be simply set as "face without makeup" and "face with makeup". To preserve identity information and image quality, we introduce the face identity loss ℒ_id(ŷ,y) <cit.> and perceptual loss ℒ_LPIPS(ŷ,y) <cit.>. As for the total loss ℒ_total, we have ℒ_total=λ_MRℒ_MR+λ_idℒ_id+λ_LPIPSℒ_LPIPS, where λ_MR, λ_id and λ_LPIPS are weight parameters. It is worth noting that we use deterministic DDIM sampling and DDIM inversion<cit.> as the reverse diffusion process and forward diffusion process. The reconstruction capability of deterministic DDIM inversion and sampling ensures the effects of makeup removal. §.§.§ Image-guided Adversarial Makeup Transfer To protect the source image x against FR models, the image-guided adversarial makeup transfer module aims to generate the protected face image x^' with adversarial makeup transferred from the reference image y. The protected face image x^' misleads FR models into recognizing it as the target identity x^*, as shown in <ref>. After getting y and ŷ, a CLIP-based makeup loss ℒ_MT coordinating with an ensemble attack strategy is introduced to align the direction between x and x^' with the direction between the non-makeup domain and the adversarial makeup domain. The fine-grained cross-domain alignment ensures the quality and black-box transferability of the adversarial makeup style. Given the source image x, we first get the latent x_t_0 through deterministic DDIM inversion with another pre-trained diffusion model ϵ_θ_2. Then, the diffusion model ϵ_θ_2 is fine-tuned to generate the protected face image x^' with the guidance of the CLIP-based makeup loss ℒ_MT and the ensemble attack loss L_adv. We also incorporate a makeup-irrelevant information preservation operation for better visual quality during fine-tuning. The details of the fine-tuning process are presented as follows. CLIP-based Makeup Loss. During the stage of makeup removal, the learned direction from the reference makeup domain to the non-makeup domain in CLIP space is expressed as Δ I_y = E_I(ŷ)-E_I(y). As shown in <ref>(c), we can simply reverse this direction to get the guidance of direction from the non-makeup domain to the reference makeup domain: Δ I_ref = -Δ I_y = E_I(y)-E_I(ŷ), where E_I is the image encoder of the CLIP model. In addition to maintaining consistency with the style removal stage in CLIP space, E_I has powerful image understanding capabilities, which can facilitate better extraction of the semantic information of makeup styles, such as the shape and the relative position on the face<cit.>. In this way, Δ I_ref can achieve more semantic and holistic supervision than simple pixel-level guidance or text guidance.
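For concreteness, a minimal PyTorch-style sketch of how these CLIP-space directions could be computed is given below; it assumes the open-source CLIP package (whose model exposes encode_image and encode_text), and the prompts, preprocessing, and function names are illustrative choices rather than the authors' implementation.

```python
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def clip_dir_loss(delta_a, delta_b):
    # 1 - cosine similarity between two CLIP-space directions (image-image or image-text).
    return 1.0 - torch.nn.functional.cosine_similarity(delta_a, delta_b, dim=-1).mean()

@torch.no_grad()
def text_direction(src_prompt="face with makeup", tgt_prompt="face without makeup"):
    # Delta T = E_T(t_clean) - E_T(t_makeup) for the makeup removal stage.
    tokens = clip.tokenize([src_prompt, tgt_prompt]).to(device)
    t_src, t_tgt = model.encode_text(tokens)
    return t_tgt - t_src

def image_direction(img_a, img_b):
    # Delta I = E_I(img_b) - E_I(img_a) for two batches of CLIP-preprocessed images,
    # e.g. (y, y_hat) during removal, or (x, x') during transfer.
    return model.encode_image(img_b) - model.encode_image(img_a)

# Makeup removal: align Delta I_y = E_I(y_hat) - E_I(y) with the text direction Delta T.
# Makeup transfer: reuse the reversed image direction Delta I_ref = E_I(y) - E_I(y_hat)
# as the target direction for the protected image, as described in the next paragraph.
```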
To align Δ I_x, the direction between x and x^', with Δ I_ref in CLIP space, a makeup direction loss is proposed: ℒ_MT^dir=1-Δ I_x·Δ I_ref/‖Δ I_x‖‖Δ I_ref‖, where Δ I_x = E_I(x^'(θ̂_2)) - E_I(x) and x^'(θ̂_2) is the protected face image generated by the fine-tuned diffusion model ϵ_θ̂_2. By aligning image pairs in CLIP space, the makeup direction loss controls the precise direction of makeup transfer. Besides the guidance of the makeup transfer direction, we also need to consider the makeup transfer distance between the makeup domain and the non-makeup domain, as shown in <ref>(c), which determines the intensity and accurate color of the makeup. Therefore, a pixel-level makeup loss L_MT^px<cit.> is employed to constrain the makeup transfer distance in pixel space. We conduct histogram matching between the generated image x^' and the reference image y on three facial regions as guidance for the intensity of makeup. The pixel-level makeup loss is defined as: ℒ_MT^px = ‖ x^'-HM(x^',y)‖, where HM(.) represents the histogram matching. Combining the makeup direction loss and the pixel-level makeup loss, the CLIP-based makeup loss is expressed as: ℒ_MT = λ_dirℒ_MT^dir+λ_pxℒ_MT^px. With the joint guidance of ℒ_MT^dir and ℒ_MT^px for the makeup transfer direction and distance, the generated makeup image x^' can precisely fall within the reference makeup domain, achieving excellent makeup transfer effects. Ensemble Attack. In addition to guidance in the makeup transfer direction, there is also a need for guidance in the adversarial direction to find the final adversarial makeup domain, as shown in <ref>(d). To solve the optimization problem in <ref>, an ensemble attack strategy<cit.> is introduced. We choose K pre-trained FR models with high recognition accuracy as surrogate models for fine-tuning, aiming to find the direction towards a universal adversarial makeup domain. The ensemble attack loss is formulated as: ℒ_adv = 1/K∑^K_k=1[1-cos (m_k(x^'),m_k(x^*))], where m_k represents the k-th pre-trained FR model and we use cosine similarity as the distance metric. The ensemble attack loss ℒ_adv adjusts the generation direction from the makeup domain to the adversarial makeup domain, improving the transferability of adversarial makeup under black-box settings. Preservation of Makeup-Irrelevant Information. To ensure the visual quality of protected face images, it is crucial to minimize the impact on makeup-irrelevant information, such as identity and background, during makeup transfer. However, due to fine-tuning the diffusion model ϵ_θ_2 in the sampling process, the cumulative error between the prediction noise of ϵ_θ_2 and ϵ_θ̂_2 will increase with the denoising steps, resulting in some unexpected distortion besides the makeup style. To address this problem, we leverage the progressive generation property of diffusion models<cit.>, where coarse-grained information (e.g., layout, shape) is generated in the early denoising steps while semantic details are added in the later steps. Makeup style is typically generated in the final steps of the denoising process as a kind of fine-grained facial information. Thus, we propose to reduce the time step T in DDIM inversion and sampling to retain most of the makeup-irrelevant information. This simple but effective operation can greatly improve the visual quality of the protected face and accelerate the whole process of makeup transfer. Moreover, the perceptual loss ℒ_LPIPS(x^',x) and an ℓ_1 loss are further introduced to explicitly control the generation quality and pixel similarity: ℒ_vis = ℒ_LPIPS(x^',x) + λ_ℓ_1‖ x^'-x‖. Total Loss Function.
By combining all the above loss functions, we have the total loss function as follows: ℒ = λ_MTℒ_MT+λ_advℒ_adv+λ_visℒ_vis, where λ_MT, λ_adv and λ_vis are weight parameters. § EXPERIMENTS §.§ Experimental Settings Datasets. For makeup removal, we randomly sample 200 makeup images from the MT dataset<cit.>, which consists of 2719 makeup images and 1115 non-makeup images, for fine-tuning. For adversarial makeup transfer, we randomly sample 200 images from the CelebA-HQ dataset<cit.> for fine-tuning. To evaluate the effectiveness of DiffAM, we choose CelebA-HQ and LADN<cit.> as our test sets. For CelebA-HQ, we select a subset of 1000 images and divide them into four groups, each of which has a target identity<cit.>. Similarly, for LADN, we divide the 332 images into four groups for attacks on different target identities. Benchmark. We compare with multiple benchmark schemes of adversarial attacks, including PGD<cit.>, MI-FGSM<cit.>, TI-DIM<cit.>, TIP-IM<cit.>, Adv-Makeup<cit.>, AMT-GAN<cit.> and CLIP2Protect<cit.>. PGD, MI-FGSM, TI-DIM and TIP-IM are typical noise-based methods, while Adv-Makeup, AMT-GAN, CLIP2Protect and DiffAM are makeup-based methods that also exploit makeup transfer to generate protected face images. Target Models. We choose four popular public FR models as the attacked models, including IR152<cit.>, IRSE50<cit.>, FaceNet<cit.> and MobileFace<cit.>. Three of them are chosen for the ensemble attack during training and the remaining one serves as the black-box model for testing. Meanwhile, we evaluate the performance of DiffAM on commercial FR APIs including Face++[<https://www.faceplusplus.com/face-comparing/>] and Aliyun[<https://vision.aliyun.com/experience/detail? tagName=facebody children=CompareFace>]. Implementation Details. For text-guided makeup removal and image-guided makeup transfer, we use ADM<cit.> pre-trained on the Makeup Transfer (MT) dataset<cit.> and the CelebA-HQ dataset<cit.>, respectively, as the generative model. To fine-tune the diffusion models, we use an Adam optimizer<cit.> with an initial learning rate of 4e-6. It is increased linearly by 1.2 per 50 iterations. As mentioned in <ref>, we set the total time step T = 60 and (S_inv, S_sam) = (20, 6), where S_inv and S_sam represent the discretization steps of DDIM inversion and sampling. The diffusion models are fine-tuned for 6 epochs. All our experiments are conducted on one NVIDIA RTX3090 GPU. Evaluation Metrics. Following <cit.>, we use the attack success rate (ASR) to evaluate the effectiveness of privacy protection of different methods. When calculating the ASR, we set the False Acceptance Rate (FAR) at 0.01 for each FR model. In addition, we use FID<cit.>, PSNR (dB) and SSIM<cit.> to evaluate the image quality of protected face images. §.§ Comparison Study This section compares the experimental results of DiffAM and the benchmark methods in terms of attack performance under black-box settings and image quality. Comparison on black-box attacks. <ref> reports the quantitative results of black-box attacks against four popular FR models on the CelebA-HQ and LADN datasets. We test the performance of targeted attacks against four target identities<cit.>, with the results of DiffAM averaged over 5 reference makeup images from the MT dataset, following <cit.>. To simulate real-world protection scenarios, the target face images used during testing are different images of the same individual compared to the one used during training.
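To make the ASR protocol concrete, the following small sketch shows one way the attack success rate at FAR = 0.01 could be computed from cosine similarities; the threshold-selection rule and the toy score distributions are illustrative assumptions, not the evaluation script behind the reported numbers.

```python
import numpy as np

def asr_at_far(impostor_sims, attack_sims, far=0.01):
    """impostor_sims: cosine similarities of benign non-matching pairs (used to set the threshold).
    attack_sims: similarities between protected faces and the target identity."""
    # Threshold tau such that only a fraction `far` of impostor pairs exceeds it.
    tau = np.quantile(impostor_sims, 1.0 - far)
    return float(np.mean(attack_sims > tau))

rng = np.random.default_rng(0)
impostor = rng.normal(0.1, 0.1, size=10_000)   # toy benign non-matching scores
attacks = rng.normal(0.5, 0.15, size=1_000)    # toy protected-vs-target scores
print(f"threshold at FAR=1%: {np.quantile(impostor, 0.99):.3f}, ASR: {asr_at_far(impostor, attacks):.3f}")
```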
The average black-box ASRs of DiffAM are significantly about 28% and 13% higher than SOTA noise-based method TIP-IM and makeup-based method CLIP2Protect. DiffAM also maintains a good attack effectiveness on Facenet, which is difficult to attack using other methods. The results show that DiffAM has strong black-box transferability, which demonstrates the role of DiffAM in accurate guidance to the adversarial makeup domain as we expected. Comparison on image quality. <ref> reports the evaluations of image quality. We choose Adv-makeup, AMT-GAN and CLIP2Protect, three latest makeup-based methods, as benchmarks for comparison. Adv-makeup has the best performance in all quantitative assessments. This is because Adv-makeup only generates eyeshadow compared to the full-face makeup generation of the others. Although Adv-makeup has minimal image modification, the trade-off is a significantly lower attack success rate as shown in <ref>. Compared to AMT-GAN and CLIP2Protect, DiffAM achieves lower FID scores and higher PSNR and SSIM scores, which indicates that the adversarial makeups generated by DiffAM are more natural-looking and have less impact on images at the pixel level. We also show the qualitative comparison of visual quality in <ref>. Note that for text-guided method CLIP2Protect, we use textual prompts, such as “purple lipstick with purple eyeshadow”, derived from the reference images to generate makeup. Compared to the noise-based method TIP-IM, DiffAM generates more natural-looking protected face images without noticeable noise patterns. As for makeup-based methods, AMT-GAN fails to transfer makeup precisely and the generated face images have obvious makeup artifacts. CLIP2Protect struggles to generate accurate makeup corresponding to the given textual prompt and loses most of image details. In contrast, DiffAM stands out for accurate and high-quality makeup transfer, such as lipstick and eyeshadow, thanks to fine-grained supervision of generation direction and distance. Our proposed operation for preserving makeup-irrelevant information also ensures that face image details are well-preserved. Notably, DiffAM is effective in generating makeup for male images, which is a challenge for other makeup-based methods. §.§ Attack Performance on Commercial APIs <ref> shows the quantitative results of attacks on commercial APIs Face++ and Aliyun. We randomly selected 100 images each from CelebA-HQ and LADN datasets to protect and report confidence scores returned from APIs. The confidence scores are between 0 to 100, where the higher score indicates higher similarity between the protected face image and the target image. DiffAM achieves the highest average confidence scores about 70 and 50 on each API and the attack effect is relatively stable across different datasets, which indicates the strong black-box attack capability in real-world scenarios. §.§ Ablation Studies Control of Makeup Direction. We verify the importance of makeup direction loss L_MT^dir for makeup quality in <ref>. In the absence of L_MT^dir, the generated makeup has obvious makeup artifacts (red boxes in <ref>), leading to a decrease in image quality. <ref> also illustrates that the generated images with L_MT^dir have better quantitative results than the ones without L_MT^dir. This is because, without L_MT^dir, the generated makeup is only guided by pixel-level makeup loss L_MT^px. 
L_MT^px just supervises makeup generation in different facial segmentation regions individually without global semantic supervision, resulting in inaccurate makeup generation. By applying makeup direction loss L_MT^dir, precise guidance can be provided for the global generation direction of the makeup, ensuring high-quality and accurate makeup. Preservation of Makeup-Irrelevant Information. <ref> shows the generated face images under a set of increasing inversion steps. With the increase of DDIM inversion steps, the generated face image has unexpected changes in facial attribute information. <ref> shows the quantitative results at different steps, indicating that it is a simple but effective operation to preserve makeup-irrelevant information by controlling DDIM inversion steps. Robustness on different makeup styles. Being able to generate protected face images with any given reference makeup holds more practical value. Thus, we randomly select five reference images from MT-dataset to evaluate the impact of different reference makeup styles on attack effects of DiffAM. As shown in <ref>, the change of makeup styles has limited influence on ASR, which indicates the robustness of DiffAM to the change of makeup styles. § CONCLUSION In this paper, we introduce DiffAM, a novel diffusion-based adversarial makeup transfer method for facial privacy protection. Building upon the generative capabilities of diffusion models, We innovatively introduce a makeup removal module to address uncertainty in text-guided generation. The deterministic cross-domain relationship can be obtained during makeup removal process, enabling fine-grained alignment guidance for adversarial makeup generation with the proposed CLIP-based makeup loss and ensemble attack strategy. Numerous experiments have verified that DiffAM ensures strong black-box attack capabilities against many FR models and commercial APIs, while maintaining high-quality and precise makeup generation. ieeenat_fullname § BACKGROUND: DDPM AND DDIM Denoising Diffusion Probabilistic Model (DDPM)<cit.> consists of a forward diffusion process and a reverse diffusion process. The forward diffusion process is described as a Markov chain where Gaussian noise is gradually added to the original image x_0 to get the noisy image x_t at every time steps t∈{1,...,T}: q(x_t|x_t-1)=𝒩(√(1-β_t)x_t-1,β_t𝐈), where β_t ∈ (0,1) are hyperparameters representing the variance schedule. A good property of this formulation is that we can directly sample x_t given x_0: q(x_t|x_0)=𝒩(√(α_t)x_0,(1-α_t)𝐈), x_t=√(α_t)x_0+√(1-α_t)ϵ, ϵ∼𝒩(0,𝐈), where α_t=∏_s=1^t(1-β_s). We can get a new sample from the distribution q(x_0) by following the reverse steps q(x_t-1|x_t), starting from x_T∼𝒩(0,𝐈). As the posteriors q(x_t-1|x_t) is intractable, in the reverse process, a neural network p_θ is trained to approximate it: p_θ(x_t-1|x_t)=𝒩(μ_θ(x_t,t),σ_t^2𝐈), where μ_θ(x_t,t)=1/√(1-β_t)(x_t-β_t/1-α_tϵ_θ(x_t,t)). A U-net<cit.> is trained to learn a function ϵ_θ(x_t,t) to predict the added noise at time step t by optimizing the objective<cit.>: min_θℒ(θ)=𝔼_x_0∼ q(x_0),ϵ∼𝒩(0,𝐈),t||ϵ-ϵ_θ(x_t,t)||_2^2. Then, we can sample the data as follows: x_t-1=μ_θ(x_t,t)+σ_tz, where z∼𝒩(0,𝐈). To accelerate the sampling process, Song <cit.> proposed Denoising Diffusion Implicit Model (DDIM) that has a non-Markovian noising process. The sampling process of DDIM is: x_t-1=√(α_t-1)f_θ(x_t,t)+√(1-α_t-1-σ_t^2)ϵ_θ(x_t,t)+σ_t^2z, where f_θ is the prediction of x_0 at t: f_θ(x_t,t)=x_t-√(1-α_t)ϵ_θ(x_t,t)/√(α_t). 
By setting σ_t=0 in <ref>, the sampling process from x_T to x_0 becomes deterministic, which is the principle of DDIM. Also, the operation of DDIM inversion can map x_0 back to x_T by reversing the process, enabling subsequent editing of images. Deterministic DDIM sampling and inversion can be expressed as: x_t-1 =√(α_t-1)f_θ(x_t,t)+√(1-α_t-1)ϵ_θ(x_t,t), x_t+1 =√(α_t+1)f_θ(x_t,t)+√(1-α_t+1)ϵ_θ(x_t,t). § TARGET IMAGES DiffAM aims to generate protected face images that mislead FR models into identifying them as the target identity. We therefore show the four target identities, provided by <cit.>, used for our experiments in <ref>. To simulate real-world attack scenarios, the target images used during training and testing are different. § ATTACK PERFORMANCE ON TENCENT API To fully evaluate the attack effects of DiffAM in real-world scenarios, we also show the quantitative results of attacks on Tencent API[<https://cloud.tencent.com/product/facerecognition>] in <ref>. We randomly selected 100 images each from the CelebA-HQ and LADN datasets to protect and report the confidence scores returned by the APIs. The confidence scores range from 0 to 100, where a higher score indicates higher similarity between the protected face image and the target image. DiffAM achieves the highest average confidence scores (48.34 and 38.57) compared to other methods. This further demonstrates that our precise guidance on the adversarial makeup domain and robust adversarial makeup generation ensure high black-box transferability of the protected face images generated by DiffAM. § MORE VISUAL RESULTS Text-guided Makeup Removal <ref> shows some visual results of text-guided makeup removal. The makeup in the reference images, such as lipstick and eyeshadow, is clearly removed, indicating the powerful ability of text guidance in makeup removal. The difference in CLIP space between the makeup and non-makeup images can then determine an accurate makeup direction for subsequent makeup transfer. Image-guided Makeup Transfer Due to space limitations of the main text, more visual results of image-guided makeup transfer are shown in <ref>. DiffAM achieves precise makeup transfer for each given reference image and generates natural-looking protected face images, thanks to our precise control over makeup direction and distance.
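As a concrete illustration of the deterministic DDIM updates recalled in the background section, the following PyTorch sketch implements a single sampling/inversion step. The linear beta schedule, the placeholder noise predictor, and the image resolution are assumptions made only for this example; in DiffAM the predicted noise would come from the fine-tuned ADM U-Net.

```python
import torch

def ddim_step(x_t, eps, alpha_t, alpha_next):
    """One deterministic DDIM update (sigma_t = 0).

    `eps` is the noise predicted at the current step; `alpha_t`/`alpha_next`
    are the cumulative products (alpha-bar) for the current and the next
    (or previous, for inversion) time step.
    """
    x0_pred = (x_t - (1 - alpha_t).sqrt() * eps) / alpha_t.sqrt()  # f_theta(x_t, t)
    return alpha_next.sqrt() * x0_pred + (1 - alpha_next).sqrt() * eps

# Toy usage with a dummy noise predictor (placeholder for the U-Net).
eps_model = lambda x, t: torch.zeros_like(x)
betas = torch.linspace(1e-4, 2e-2, 60)        # T = 60, illustrative schedule
alphas = torch.cumprod(1.0 - betas, dim=0)

x = torch.randn(1, 3, 256, 256)
# DDIM inversion: map x_0 towards x_T step by step.
for t, t_next in zip(range(0, 59), range(1, 60)):
    x = ddim_step(x, eps_model(x, t), alphas[t], alphas[t_next])
# Deterministic sampling runs the same update with the step order reversed.
```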
http://arxiv.org/abs/2405.09125v1
20240515064143
HAAP: Vision-context Hierarchical Attention Autoregressive with Adaptive Permutation for Scene Text Recognition
[ "Honghui Chen", "Yuhang Qiu", "Jiabao Wang", "Pingping Chen", "Nam Ling" ]
cs.CV
[ "cs.CV", "cs.AI", "68T01", "I.2.10" ]
Journal of Class Files, Vol. 14, No. 8, January 2024 Shell et al.: Bare Demo of IEEEtran.cls for IEEE Journals HAAP: Vision-context Hierarchical Attention Autoregressive with Adaptive Permutation for Scene Text Recognition Honghui Chen, Yuhang Qiu, Jiabao Wang, Pingping Chen, Senior Member, IEEE, and Nam Ling, Life Fellow, IEEE Honghui Chen, Jiabao Wang, Pingping Chen are with the College of Physics and Information Engineering, Fuzhou University, Fuzhou, 350108, China (e-mail: chh5840996@gmail.com; wabbb0811@163.com; ppchen.xm@gmail.com). Yuhang Qiu is with the Faculty of Engineering, Monash University, Clayton, VIC, 3800, Australia (e-mail: yuhang.qiu@monash.edu). Nam Ling is with the Department of Computer Science and Engineering, Santa Clara University, Santa Clara, California, 95053, USA (e-mail: nling@scu.edu). Manuscript received January 31, 2024. ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Internal Language Model (LM)-based methods use permutation language modeling (PLM) to solve the error correction caused by conditional independence in external LM-based methods. However, random permutations of human interference cause fit oscillations in the model training, and Iterative Refinement (IR) operation to improve multimodal information decoupling also introduces additional overhead. To address these issues, this paper proposes the Hierarchical Attention autoregressive Model with Adaptive Permutation (HAAP) to enhance the location-context-image interaction capability, improving autoregressive generalization with internal LM. First, we propose Implicit Permutation Neurons (IPN) to generate adaptive attention masks to dynamically exploit token dependencies. The adaptive masks increase the diversity of training data and prevent model dependency on a specific order. It reduces the training overhead of PLM while avoiding training fit oscillations. Second, we develop Cross-modal Hierarchical Attention mechanism (CHA) to couple context and image features. This processing establishes rich positional semantic dependencies between context and image while avoiding IR. Extensive experimental results show the proposed HAAP achieves state-of-the-art (SOTA) performance in terms of accuracy, complexity, and latency on several datasets. Language models (LM), Autoregressive generalization, Multimodal information. § INTRODUCTION Scene Text Recognition (STR) aims to transcribe the text in an image into a computer-readable text format i.e., to recognize the localized text regions. Scene text with rich information plays a vital role in a range of applications such as visual quizzing, autonomous driving, image retrieval, augmented reality, retail, education, and visually impaired devices <cit.>. In contrast to Optical Character Recognition (OCR) in documents with more homogeneous text attributes, STR has to deal with irregular text, such as rotation, bending, blurring, or occlusion. 
In addition, noise and distortion issues make STR challenging. STR is primarily a visual task, involving two different modalities: image and text. Early work still tends to rely on a backbone pre-trained on unimodal data. Learning from data <cit.> is achieved by combining visual features with semantic priors from word representation models <cit.> or dictionaries <cit.> as well as sequence modeling <cit.>. However, there are inherent limitations in models based on visual information. Visual features alone are not sufficient in cases where the text part is unreadable, e.g., occlusion. Additionally, STR is a One-shot task that has data independence. And, textual semantics can assist visual models in recognizing heterogeneous data. Zhao et al. verified that language modeling (LM) can effectively focus on text and accurately determine the meaning of words <cit.>. LM demonstrates the special ability to perceive and understand text in natural images. For that, the combination of LM and visual models for sequence modeling is the current mainstream <cit.>. It is divided into two categories: external LM-based and internal LM-based. External LM-based schemes essentially accomplish STR by introducing external LMs to assist in correcting visual model reasoning <cit.>. External LM causes error correction own to the conditional independence of the input. The condition assumes that each pixel or character is independent, regardless of their interactions. However, the reality is that the order and spatial position of the characters about each other affect their meaning and grammatical structure. Algorithms that only consider each character by itself and ignore interactions can lead to prediction errors, especially when the input text has a complex structure and arrangement. Therefore, Permutation Language Modeling (PLM) is employed to improve the generalization of autoregressive (AR) modeling by parsing sequence meaning and syntactic structure <cit.>. PLM models can be viewed as a collection of AR models with shared architecture and weights. Token dependencies are specified dynamically through the use of an attention mask. Although randomized permutation increases the diversity of data, this strategy is accompanied by training fit oscillations since the random subset has the uncertainty of human interference. We also find that random permutation leads to unstable convergence of the model during training. In addition, if the model only performs well for specific alignments, it leads to a degradation of the generalization performance. Next, recent approaches have used Transformer encoder-decoder based for cross-modal visual and semantic information fusion <cit.>. However, They need to optimize the predictions through additional Iterative Refinement (IR). Essentially, internal LM is introduced in the decoder can eliminate IR, which is only an additional decoding because the visual and LM are jointly parsed. Considering that noisy inputs still have errors from the output of the visual model, IR is proposed to refine predictions from visual and language cues in external LM-based scenarios. Therefore, recent internal LM-based schemes do not propose an efficient way to couple cross-modal information in the decoder but rely on additional IR. This paper proposes the Hierarchical Attention AR Model with Adaptive Permutation (HAAP) to address these issues. The motivation is depicted. 
First, the use of an adaptive permutation instead of a random permutation scheme increases the diversity of the training data and prevents dependence on a specific order. It reduces the training overhead of the PLM while avoiding training fitting oscillations (as shown in Fig. <ref>). Second, Multi-head Attention (MHA) is introduced to hierarchically couple cross-modal information. Hierarchical processing improves the adaptation of visual features to semantics and thus enhances end-to-end joint parsing without additional IR (as shown in Fig. <ref>). First, inspired by traditional neurons and self-attentive mechanisms, this paper proposes an Implicit Permutation Neurons (IPN) to adapt the ordering between characters in the internal LM. The ordering and spatial location of the characters are considered rather than independently. IPN assigns a set of queries to the original left-to-right permutation to map it linearly into the high-dimensional space. Immediately, two sets of weight matrices are utilized for query weighting and inverse mapping. The process is still able to capture nonlinear relationships in the data with learnable weights. Second, this paper proposes Cross-modal Hierarchical Attention mechanism (CHA) to couple cross-modal information. Contexts embedded in the high dimensional space establish semantic dependencies through self-attention. The context interacts with the image features for the first time to context-image feature alignment. Next, the encoded location is matched with the context and then coupled with the image features again by mutual attention. Hierarchical feature processing drives the interactions of each item in the sequence while capturing the dependencies between locations using an image-guided context-assisted model. In addition, the location-context-image interaction is improved with location inter-correlation guided by adaptive mask and the uniform gradient flow. Therefore, the model avoids additional overhead by eliminating IR and achieves end-to-end parsing. The contributions of this paper can be summarized as follows: * This paper proposes an internal LM-based HAAP to unify end-to-end coupling of location coding, image features, and context to improve the generalization ability of the model in various STR scenes. * We propose a novel neuronal structure i.e. IPN to capture the dependencies between characters via adaptive masks efficiently. The model automatically learns the optimal alignment form by capturing the nonlinear relationships during the learning of weights without manual design. * We propose hierarchical feature processing CHA to motivate the interaction of each item in the sequence, together with adaptive masks to improve the location-context-image interaction. * HAAP achieves state-of-the-art (SOTA) results on the STR benchmark for all character sets as well as on larger, more difficult real datasets that include occluded and arbitrarily oriented text (as shown in Table. <ref>). Besides, HAAP also shows the cost-quality trade-offs in terms of parameters, FLOPs, and runtime usage (as shown in Table. <ref>). The rest of the paper is structured as follows: section II discusses related work. Section III describes the architecture of the proposed algorithm in detail. Section IV describes the experiments and analysis and the summary in Section V. § RELATED WORK In this section, we review current deep learning-based STR methods. 
Recent studies are mainly categorized into two directions, context-independent and context-relevant, based on whether textual semantics are considered or not. We summarize and discuss the research in these two directions separately. §.§ Context-independent STR Considering that STR is essentially a visual task, the researchers intuitively represent the inputs by constructing a visual model based on deep learning. This is the context-independent scheme i.e., category inference by summarizing visual features. Further, context-independent STR can be categorized into parallel-based reasoning and sequential reasoning. The parallel-based scheme only uses the visual features for prediction without considering the relationship between characters. Its main application is Fully Convolutional Network (FCN) <cit.> to segment characters at the pixel level. Liao et al <cit.> recognized characters by grouping segmented pixels into text regions. Wan et al <cit.> proposed an additional sequential segmentation map that records characters in the correct order. Multi-category classification is unable to accurately construct complete phrases because the output characters are conditionally uncorrelated with each other. Immediately, the scheme of sequence inference using Recurrent Convolutional Networks (RNN) was proposed to capture the correlation of characters <cit.>. The most typical is the Connectionist Temporal Classification (CTC) <cit.> based scheme. The RNN models the sequential sequence of features extracted by a convolutional neural network (CNN) and is trained end-to-end using CTC loss <cit.>. The RNN receives the current input and the previous hidden state at each time step and outputs the current hidden state. This sequential modeling structure allows the RNN to naturally adapt to the characteristics of sequential data, to parse the syntactic and semantic structures in the text. RNN is also used in a series of Attention-based <cit.> schemes. Several attempts have explored the enhancement of image encoding and decoding by employing different attentional mechanisms <cit.>. Character classification by aligning features and character positions, or converting STR into a multi-instance learning problem <cit.>. Sequence modeling was given a new lease of life with the introduction of transformer <cit.>. Transformer was then proposed as an alternative to RNN for sequence modeling <cit.>. However, context-independent methods that rely only on image features for prediction cannot address recognition in low-quality images, especially being less robust to corruption such as occlusion, distortion, or incomplete characters. This limitation motivates the use of language semantics to make recognition models more robust. §.§ Context-relevant STR Context-dependent STR is a typical cross-modal fusion scenario since the internal interactions between vision and language are constructed. Considering that textual semantics can assist visual models in processing heterogeneous data, the community has recently appeared with a large number of researches to assist recognition by constructing LMs <cit.>. VisionLAN <cit.> introduced character masking to guide the visual model to refer to language information in the visual context. MATRN <cit.> referenced context to enhance visual and semantic features based on spatial coding. STRT <cit.> designed an iterative text Transformer to predict the probability distribution in a sequence of characters. 
It is shown that robust models can be constructed by introducing external LMs for explicit modeling. Considering that character sequences are usually modeled in a left-to-right manner, the researcher develops the integrated model to capture twice the amount of information through the bi-directional LM <cit.>. SRN <cit.> combined the features of two unidirectional Transformers for prediction, which resulted in twice as expensive both computationally and parametrically. Therefore, ABINet <cit.> used a novel bidirectional completion network based on bidirectional feature representation. However, the conditional independence of external LM on image features limits its performance and makes error corrections. Subsequently, the internal LM was re-proposed for implicit language modeling to improve visual-contextual interaction. PARSeq <cit.> used PLM rather than standard AR modeling to learn internal LMs. It supports flexible decoding by using a parameterization that decouples the decoding location from the input context. CLIP4STR <cit.> introduced an encoder-decoder to process image and text information. The encoder is inherited from CLIP <cit.>, while the decoder uses Transformer. Internal LR-based schemes mitigate the problem of noisy inputs as well as increase the diversity of the data by dynamically specifying the sequence order through the use of an attention mask. However, the strategy of using randomized permutations introduces training fitting oscillations due to the uncertainty of randomized permutations with human interference. Furthermore, most of these context-dependent STR models use Transformer-based encoder-decoders to implement with IR. However, internal LM-based schemes rely on IR to address noisy inputs in external LM-based models. Thus, the decoding strategy of LM model-based schemes at this stage is inefficient and introduces additional computational effort. This paper proposes to use an adaptive sequence ordering scheme to increase the diversity of the training data and prevent the dependence of the model on a specific order. To further enhance the efficiency of cross-modal information coupling at the decoder side, MHA is developed to Hierarchical coupling information, improving the semantic adaptation of visual features. Ultimately, end-to-end joint parsing is enhanced to avoid additional IR. § METHODOLOGY In this section, we present in detail the principles and framework of our proposed Hierarchical Attention AR Model with Adaptive Permutation (HAAP), including Implicit Permutation Neurons (IPN) and Cross-modal Hierarchical Attention mechanism (CHA). §.§ Overview HAAP follows an encoder-decoder architecture as shown in Fig. <ref>. In the encoding phase, the image and text inputs are represented as a series of patches and semantic tokens. Subsequently, MHA is used for hierarchical image-context interaction and decoding. Specifically, first, the Transformer encoder <cit.> is utilized to establish the internal feature associations of the image patches. Second, the IPN assigns bi-directional and adaptive permutation masks to the textual context for adjusting sequence order. The contexts embedded into the high-dimensional features are first established by self-attention to establish internal associations. Then MHA is used for context and visual information interaction and position query association. Third, the associated information further interacts with the visual information by utilizing mutual attention. Next, the position query is linearly decoded into contextual output. 
§.§ Encoder Vision encoder Considering that ViT <cit.> divides images into patches and uses full connectivity to learn global relationships can better extract global and complex visual information than traditional CNNs <cit.>, HAAP follows the original ViT structure to build the visual encoder, which consists of L=12 layers of Transformer encoder. All layers share the same architecture as shown in Fig. <ref>. The output of the last encoder is subjected to a Layer Normalization (LN). Formally, first, the image of size H × W × C (height H, width W, channels C) is reshaped into a series of flat 2D patches x_p ∈ℝ ^N × (P^2 - C), where (P, P) is the resolution of each image block and N = HW/P^2 is the number of blocks generated. The patch x_p is flattened and mapped to D dimensions using a trainable linear projection E ∈ℝ ^(P^2C)× D. The position embedding E_pos∈ℝ ^(N+1) × D is added to the projected output z, as: z = [x^0_pE;x^1_pE;x^2_pE;⋯;x^N_pE;] + E_pos, where x^i_p represents the i-th x_p. z performs feature extraction in alternating MHA and Mult-Layer Perceptron (MLP). A normalized LN is applied initially in each block to mitigate internal covariate bias during training. Additionally, residual concatenation is applied twice in the block to aid gradient propagation in the depth model, as: z^'_l = MHA(LN(z_l-1)) + z_l-1, l∈ [1,L] z_l = MLP(LN(z^'_l)) + z^'_l, l∈ [1, L] where MLP contains two nonlinear layers with GELU activation functions. The z^'_l and z_l are the visually encoded information. MHA is the extension of scaled dot-product attention to multiple representation subspaces or heads. In scaled dot-product attention, the similarity scores are computed using dot-product computation of the similarity between two d_k-dimensional vectors query q, and key k. Attention scores are transformed by the d_v-dimensional vector values v. The scaled dot product attention is defined as: Attn(q,k,v) = softmax(qk^T/√(d_k))v, further, MHA(q,k,v) = Concat(head_1, ⋯ head_h)W^o, where head_i = Attn(qW^q_i, kW^k_i, vW^v_i), where the computational cost of MHA is practically constant due to the dimension of the vector d_head = d_model/h, where h is the number of heads. The W^q, W^k, W^v ∈ℝ^d_model× d_head are obtained by connecting the heads and multiplying with the output projection matrix W^o ∈ℝ^d_model× d_head. Context encoder The context uses a Tokenizer <cit.> for lowercase byte pair encoding BPE. The start, padding, and end of the text sequence are padded with [SOS], [PAD] and [EOS] tokens, respectively. §.§ Implicit Permutation Neurons (IPN) Considering that STR traditionally relies on left-to-right P_left2right or right-to-left P_right2left unidirectional sequence modeling, we propose IPN to improve the correlation between inputs and outputs through multidirectional sequence modeling. Essentially, multidirectional sequence modeling considers internal correlations while considering bi-directionality to avoid the effects of word syntactic structures or external disturbances such as occlusions and distortions. For example, to predict the "e" in the word "clearance", we consider "cl_" and "ecnara_". " in the context of "cl_ar". IPN provides an attention mask during the attention operation to generate dependencies between the input context and the output without actually replacing the text labels. Table. <ref> illustrates three examples of the mask. 
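To illustrate the kind of masks tabulated above, the sketch below builds the attention mask induced by an arbitrary decoding order without reordering the labels themselves. The four-token length and the two hand-written orders are illustrative assumptions; an IPN-generated order would instead come from sorting the learned score sequence P_score.

```python
import torch

def permutation_mask(order):
    """Attention mask for autoregressive decoding in the given order.

    mask[i, j] = True means position `order` step for token i may attend to
    token j, i.e. each token only sees tokens that precede it in the chosen
    permutation; the label sequence itself is never physically reordered.
    """
    T = len(order)
    mask = torch.zeros(T, T, dtype=torch.bool)
    for step, pos in enumerate(order):
        for prev in order[:step]:
            mask[pos, prev] = True
    return mask

T = 4
left_to_right = list(range(T))        # canonical order [0, 1, 2, 3]
right_to_left = left_to_right[::-1]   # its reverse, giving twice the context
print(permutation_mask(left_to_right))
print(permutation_mask(right_to_left))
# An adaptive (IPN-style) order would replace these lists with the ranking
# obtained by sorting the learned scores.
```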
Formally, first, given the T-length text labels y = [y^1, y^2, ⋯, y^T], decompose the likelihood [1, 2, ⋯, T] according to the canonical order using the chain rule to build a left-to-right permutation of P_left2right to obtain the model inference as: log p(y|x) = ∑_t=1^Tlog p_θ(y_t|y_<t, x), where θ represents the model parameter and x is the input image. Second, to learn and extract complex structural properties from the data, P_query is assigned to T positions in the P_left2right sequence and projected to a high-dimensional space, as: M_t = P_left2right^⊺· P_query, where M_t ∈ℝ_T^T represents the intermediate matrix. Third, the model learns the optimal representation without manually designing features via a learnable weight matrix P_weight, as: M_w = M_t · P_weight, where M_w is the weighted transformation matrix. This process helps the model to parse the representation of the structure of the input data. Even though there is no explicit nonlinear activation function throughout the process, the model is still able to capture the nonlinear relationships in the data because of the learning of the weights. Fourth, the transformation matrix is inversely mapped to the sequence of scores P_score via P_value, as: P_score = M_w · P_value^⊺. Next, P_score is normalized from largest to smallest to generate the actual ranking order. Immediately, we invert the left-to-right permutations [1, 2, . , T] and invert the adaptive permutation pairs to obtain twice the amount of information. §.§ Cross-modal Hierarchical Attention mechanism (CHA) The standard Transformer parametric AR model is inefficient for decoding in multidirectional sequence modeling since the feature distribution predicted by the hidden state is independent of the target location <cit.>. Let us assume that we parameterize the next token distribution as: p_θ (x_rt = x | x_r<t) = exp(e(x)^⊺ h_θ (x_r<t))/∑x^'exp(e(x ')^⊺ h_θ (x_r<t)), where h_θ (x_r<t) denotes the hidden representation of x_r<t produced by the shared transformer network after proper masking. We use rt and r<t to denote the t-th and the first t-1 elements of the permutation set r. Since the same model parameter θ is shared between all decomposition sequences during training, rt is expected to see that every possible element in the sequence is not equal to x_t. Note that the representation h_θ (x_z<t ) does not depend on the position it will predict, i.e. the value of rt. Two different target positions share the same model predictions. However, the ground truth distributions for the two positions are certainly different. To avoid this problem, we perform context and location encoding based on a two-stream attention mechanism to provide complete contextual information. We use two sets of hidden representations i.e., from content and query representations. Meanwhile, considering that context and location need to interact with visual features cross-modally for alignment, we adopt a hierarchical coupling approach, i.e., CHA, to consider the context-location-image association. In the first stage, context-visual attention is established after self-attentive encoding of sorted contexts, as: Attn_cv = p + dropout[MHA(Attn_c,z)], where Attn_c = c + dropout[MHA(c,c,M)], where c ∈ℝ^(T +1) × d_model is the contextual embedding with positional information and M ∈ℝ^(T +1)× (T +1) is the attention mask. The total sequence length is increased to T + 1 by using special separator markers. 
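Before turning to the positional branch, a minimal PyTorch sketch of this first coupling stage is given below. The model width, head count, dropout rate, and tensor shapes are assumptions for illustration rather than the exact configuration used by HAAP.

```python
import torch
import torch.nn as nn

class ContextVisualCoupling(nn.Module):
    """First CHA stage: masked context self-attention, then context-visual cross-attention."""
    def __init__(self, d_model=384, n_heads=6, p_drop=0.1):
        super().__init__()
        self.ctx_self = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ctx_vis = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.drop = nn.Dropout(p_drop)

    def forward(self, c, p, z, mask=None):
        # Attn_c = c + dropout[MHA(c, c, M)]; note that PyTorch's attn_mask uses
        # the opposite convention (True = position is blocked from attending).
        attn_c = c + self.drop(self.ctx_self(c, c, c, attn_mask=mask)[0])
        # Attn_cv = p + dropout[MHA(Attn_c, z)]: align the context with image tokens.
        attn_cv = p + self.drop(self.ctx_vis(attn_c, z, z)[0])
        return attn_cv

B, T1, N, d = 2, 26, 128, 384      # batch, T+1 context tokens, (128/8)x(32/4) patches, width
stage1 = ContextVisualCoupling(d_model=d)
c = torch.randn(B, T1, d)          # context embedding with positional information
p = torch.randn(B, T1, d)          # position queries
z = torch.randn(B, N, d)           # ViT encoder output
attn_cv = stage1(c, p, z)          # (B, T+1, d_model), fed to the positional branch described next
```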
Next, MHA is used to establish positional-contextual-visual attention, as: Attn_f = Attn_p +dropout[MHA(Attn_p,z)], where Attn_p = p + dropout[MHA(p, Attn_cv,M)]. The strategy of hierarchical feature processing prompts sequence interactions while utilizing image-guided context to assist the model in capturing dependencies between locations. The location-context-image interaction capability is boosted since adaptive permutation guides location interconnections. The robustness of the model is improved because the uniform gradient flow allows them to interact with each other during training. Finally, the output logits y^' are obtained as: y^' = Linear(Attn_f) ∈ R^(T + 1)×(S+1), where S is the size of the character set used for training. Additional characters are associated with the [E] tag (the end of the tagging sequence). The complete training loss for the optimization phase is the average of the K cross-entropy loss ℒ_ce with attention mask, as: ℒ = 1/k∑_k=1^Kℒ_ce(y^'_k, y), where y^'_k represents the k-th output logit and y is the real label. K=4 is the number of permutations. § EXPERIMENTS §.§ Dataset This experiment uses both synthetic training datasets MJSynth (MJ) <cit.> and SynthText (ST) <cit.> as well as real data for training including nine real-world datasets (COCO-Text (COCO) <cit.>, RCTW17 <cit.>, Uber-Text (Uber) <cit.>, ArT <cit.>, LSVT <cit.>, MLT19 <cit.>, ReCTS <cit.>, TextOCR <cit.> and OpenVINO <cit.>). We use IIIT 5k-word (IIIT5k) <cit.>, CUTE80 (CUTE) <cit.>, Street View Text (SVT) <cit.>, SVT-Perspective (SVTP) <cit.>, ICDAR 2013 (IC13) <cit.> and ICDAR 2015 (IC15) <cit.>, COCO, Uber, ArT for testing. The dataset contains challenging texts such as multi-orientation, curvature, perspective blur, low resolution, distortion, and occlusion. Synthetic datasets MJ consists of 8.9 million photorealistic text images composed of artificial datasets through over 90k English dictionaries. The composition of MJ consists of three parts background, foreground, and optional shadow/border respectively. It uses 1400 different fonts. Additionally, the font word spacing, thickness, underlining, and other attributes of MJ are different. MJ also utilizes different background effects, border/shadow rendering, base shading, projection distortion, natural image blending, and noise. ST consists of 8 million word text images synthesized by blending text over natural images. It uses scene geometry, textures, and surface normals to naturally blend and distort text renderings on the surfaces of objects in the image. Similar to MJ, ST uses randomized fonts for its text. The text image is cropped from the natural image in which the synthesized text is embedded. Real-world datasets The COCO dataset has a total of 73,127 training and 9.8k test images containing non-text, legible text, illegible text, and occluded text images. Each image contains at least one instance of legible text. RCTW contains 12,263 annotated images of the large-scale Chinese field dataset. The Uber dataset has a total of 127,920 training 60,000 test images collected from Bing Maps Streetside. It contains house numbers and vertical and rotated text on sign boards. The ArT dataset was created to recognize arbitrarily shaped text containing 32,028 training and 26,000 testing images of perspective, rotated, or curved arbitrarily shaped text. SVT is a large-scale street scene text dataset collected from streets in China totaling 41,439 text images of arbitrarily shaped natural scenes. 
MLT19 was created to recognize multilingual text. It has a total of 56,727 training images consisting of seven languages: Arabic, Latin, Chinese, Japanese, Korean, Bengali and Hindi. The ReCTS dataset contains 26,432 irregular texts arranged in various layouts or written in unique fonts. TextOCR and OpenVINO are large datasets with very diverse images containing 818,087 and 2,071,541 images respectively. They have complex scenes with multiple objects and text of different resolutions, orientations, and qualities. The IIIT5k dataset is a collection of 5000 natural scene and digitally born text images crawled from Google image search containing 2,000 images for evaluation and 3,000 images for testing. It has low resolution, multiple font styles, light and dark transformations, and projection distortion. CUTE contains 288 cropped images for the curved text collection. The images were captured by digital cameras or collected from the Internet. SVT is a collection of street view text from Google Street View. It contains 257 images for evaluation and 647 images for testing. Further, the SVTP dataset is a more complex collection of images containing numerous perspective texts totaling 645 images for testing. IC13 and IC15 were created for the ICDAR Robust Reading Contest. IC13 contains 848 images for training and 1,015 images for evaluation. IC15 contains 4,468 images for training and 2,077 images for evaluation. Many of the samples contain perspective, and blurred text. §.§ Implementation Details The entire training and testing is implemented in two RTX 3090 GPUs with 24GB of RAM based on the Pytorch library <cit.>. The training is performed with a batch size of 1024 for 63,630 iterations i.e. 20 epochs on a real dataset of 3,257,585 samples and 4 epochs on a synthetic dataset of 16.89M samples. Following previous work <cit.>, the Adam <cit.> optimizer is used along with the 1cycle <cit.> learning rate scheduler with an initial learning rate of 7e-5. We set a maximum label length of T = 25 and used a character set of size S = 94 containing mixed-case alphanumeric characters and punctuation. The image is enhanced, resized, and normalized. First, the image enhancement is done using a three-layer RandAugment <cit.> operation where sharpening is tuned to Gaussian Blur and Poisson Noise. Second, the image is unconditionally resized to 128 × 32 pixels, and patches of 8 × 4 size are used. For the model testing phase, we use a 36-character set i.e. containing numbers and 26 letters. We use word accuracy as the main evaluation metric i.e., the prediction is considered correct if and only if the characters match at all positions. Remark The highlighted red and red lines in the visualization results indicate error recognition and omissions, respectively. The best performance result is shown in bold font. §.§ Comparison with State-of-the-Arts We compare HAAP with recent methods on 6 public benchmarks, and the quantitative results are listed in Table. <ref>. We can see that HAAP outperforms the benchmark Parseq <cit.>, realizing SOTA performance. The recognition performance in IC15 (incident scene text), SVTP (perspective scene text), and CUTE80 (curved text line image) are improved by 1.1%, 1.7%, and 1% respectively. Our model achieves 99.3% recognition accuracy on the IIIT5K (distorted, low-resolution scene text) dataset. This is an improvement of 2.6% compared to the latest method, HVSI <cit.>. 
This verifies the excellent performance of HAAP on irregular text datasets, which contain a large number of low-quality images such as noisy and blurred images. Besides validating the model on small-scale public benchmarks, we evaluate HAAP on three larger and more challenging recent benchmarks consisting of irregular text with different shapes, low-resolution images, rotations, and occlusions. The results are shown in Table. <ref>. It achieves 80.6%, 85.5%, and 85.8% outperforms the previous methods in terms of accuracy, respectively. Representative visible results are shown in Fig. <ref>, which demonstrates that our method is sufficiently robust to occlusion and text direction variability. In detail, we find that HAAP and recent methods are effective in solving complex background problems caused by perspective shifts or uneven lighting. However, artistic lettering and occlusion can lead to errors in previous work. The independence of heterogeneous data leads to ambiguous judgments by the recognizer, which can be effectively avoided by HAAP. Furthermore, HAAP can protect against additional distortions and ambiguities. For example, although the second half of "CORNER" in Fig. <ref> is ambiguous, our model can still reason effectively. HAAP also recognizes that the "R" in "SHARP" in Fig. <ref> relies on AR reasoning with adaptive permutations. In addition, Fig. <ref> and Table. <ref> shows the trade-off between accuracy and cost (parameters, FLOPS, and latency). For concise representation, we use Avg to denote the average accuracy, which is computed as a weighted average based on the number of datasets. HAAP achieves the highest average word accuracy and exhibits competitive cost quality in all three metrics. Compared to PARSeq, HAAP uses far fewer parameters and FLOPs. In terms of latency, HAAP achieves Avg-6 of 95.0% with 15.2 ms/image, which outperforms previous methods. §.§ Ablation study To evaluate the effectiveness of the proposed HAAP with IPN and CHA, we conduct ablation studies on nine benchmark datasets. In these experiments, all models are trained using real datasets with consistent settings of their model hyper-parameters. The effectiveness of IPN To study the impact of adaptive permutations generated by IPN, we compare the sequence modeling strategies using left-to-right, bi-directional, and PLM. From Table. <ref>, we find that the model using the bidirectional can achieve about 3.4% improvement in Avg-9 compared to the left-to-right strategy. This proves that the double amount of information brought by bidirectional sequence modeling can effectively improve the correlation ability between sequence contexts. The introduction of PLM allows the model to achieve 90.12% performance in terms of Avg-9 along with an additional 4 sets of random permutations. Immediately, the model with IPN achieves 0.28% performance gain in terms of Avg-9 compared to the model with PLM. Meanwhile, as shown in Fig. <ref>, the model with IPN outperforms the introduction of PLM in terms of stability. This indicates that adaptive multi-directional sequence modeling can avoid the effects caused by the grammatical structure of words or by external disturbances. It also prevents uncertainty with human interference. The Comparison of qualitative results is shown in Fig. <ref>. The effectiveness of CHA Considering that the unified gradient flow allows the modules to interact with each other, we evaluate CHA and MHA while evaluating their impact in different sequencing sequences. As shown in Table. 
<ref>, we find that the model with joint PLM and MHA (PLM+MHA) requires additional IR to improve the generalization. The PLM+CHA model achieves 90.15% performance in terms of Avg without additional IR, outperforming the PLM+MHA model with three refinement iterations (90.12%). In addition, compared to the PLM+MHA model, the IPN+CHA model achieves a performance gain of 0.59% without the need for refinement iterations. Moreover, we find that once CHA is introduced, the contribution of IR is minimal under either sequence modeling strategy (PLM or IPN). Because of the uniform gradient flow, the IPN+CHA model achieves a performance of 90.40%, a gain of 0.25% over the PLM+CHA model. This suggests that CHA can enhance model generalization by providing effective positive feedback to the adaptive permutation. The qualitative results of IPN and CHA are shown in Fig. <ref>. We find that the introduction of IPN and CHA makes the model not only effective for recognizing characters in arbitrary orientations but also strongly generalizable to low-quality and occluded text. §.§ Limitation and discussion Although the proposed method can accurately recognize complex texts of arbitrary shapes in most cases, it still struggles with four kinds of tiny texts (image pixels not exceeding 2000): (i) front and back views severely confused; (ii) front and back text sticking together caused by blurring over long distances; (iii) shape-shifting with multiple occlusions; and (iv) alternating overlapping. Fig. <ref> shows examples of such failure cases. The model loses attention to details during learning as the representation of character features degrades within the sequence. § CONCLUSION To improve autoregressive generalization based on the internal LM, this paper proposes HAAP to enhance the location-context-image interaction capability by designing IPN and CHA. First, IPN is used to parse sequence meaning and syntactic structure by dynamically specifying token dependencies with an adaptive attention mask. The adaptive AR subset increases the diversity of the training data and prevents model dependency on specific sequences. The training overhead of PLM is thereby reduced and training fitting oscillations are avoided. Second, the CHA module for hierarchical feature processing is proposed to exploit position-context-image semantic dependencies in the sequence without IR. The experimental results validate the effectiveness and the SOTA performance of HAAP in terms of accuracy, complexity, and latency. In the future, we expect to extend HAAP to end-to-end detection and recognition, where the detector can be embedded in the visual decoder, since the self-attention mechanism already exhibits strong region perception. § ACKNOWLEDGMENT This work was supported by the Natural Science Fund of China under Grant 62171135 and the Industry-University Research Project of the Education Department 2022.
http://arxiv.org/abs/2405.09225v1
20240515100501
Exploring Ground States of Fermi-Hubbard Model on Honeycomb Lattices with Counterdiabaticity
[ "Jialiang Tang", "Ruoqian Xu", "Yongcheng Ding", "Xusheng Xu", "Yue Ban", "Manhong Yung", "Axel Pérez-Obiol", "Gloria Platero", "Xi Chen" ]
quant-ph
[ "quant-ph" ]
Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain Institute for Quantum Science and Technology, Department of Physics, Shanghai University, Shanghai 200444, China Department of Physics, State Key Laboratory of Low-Dimensional Quantum Physics, Tsinghua University, Beijing 100084, China Departamento de Física, Universidad Carlos III de Madrid, Avda. de la Universidad 30, 28911 Leganés, Spain Department of Physics, Southern University of Science and Technology, Shenzhen 518055, People's Republic of China Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, People's Republic of China Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, People's Republic of China Shenzhen Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, People's Republic of China Departament de Física, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain Instituto de Ciencia de Materiales de Madrid (CSIC), Cantoblanco, E-28049 Madrid, Spain xi.chen@ehu.eus Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain EHU Quantum Center, University of the Basque Country UPV/EHU, Barrio Sarriena, s/n, 48940 Leioa, Biscay, Spain Exploring the ground state properties of many-body quantum systems conventionally involves adiabatic processes, alongside exact diagonalization, in the context of quantum annealing or adiabatic quantum computation. Shortcuts to adiabaticity by counter-diabatic driving serve to accelerate these processes by suppressing energy excitations. Motivated by this, we develop variational quantum algorithms incorporating the auxiliary counter-diabatic interactions, comparing them with digitized adiabatic algorithms. These algorithms are then implemented on gate-based quantum circuits to explore the ground states of the Fermi-Hubbard model on honeycomb lattices, utilizing systems with up to 26 qubits. The comparison reveals that the counter-diabatic inspired ansatz is superior to traditional Hamiltonian variational ansatz. Furthermore, the number and duration of Trotter steps are analyzed to understand and mitigate errors. Given the model’s relevance to materials in condensed matter, our study paves the way for using variational quantum algorithms with counterdiabaticity to explore quantum materials in the noisy intermediate-scale quantum era. Exploring Ground States of Fermi-Hubbard Model on Honeycomb Lattices with Counterdiabaticity Xi Chen May 20, 2024 ============================================================================================ § INTRODUCTION Efficiently simulating complex quantum systems stands as a pivotal capability of quantum computers  <cit.>, often regarded as a potent tool for investigating quantum materials <cit.>, chemistry <cit.>, biology <cit.>, and nuclear physics <cit.>, all of which are governed by the principles of quantum mechanics. This proposal has transitioned from a conceptual idea to a practical endeavor with the development of experimental technologies in the noisy intermediate-scale quantum (NISQ) era. 
Among the most celebrated algorithms in this context is the variational quantum algorithm (VQA) <cit.>, which aims to reduce circuit depth by integrating quantum computers with classical optimizers. The circuit utilizes parameterized blocks of quantum gates as ansatz, tailored for different scenarios. Hardware-efficient ansatz address hardware limitations  <cit.>, while unitary coupled clusters are optimized for quantum chemistry and condensed matter systems  <cit.>. These ansatz, derived from the system's Hamiltonian, form the core of Variational Quantum Algorithms (VQAs), facilitating efficient exploration and optimization of quantum states. A straightforward application of VQA is in ground state searching, a critical aspect for quantum information processing and many-body physics. Exact diagonalization, while theoretically accurate, suffers from exponentially growing computational complexity, making it impractical for large-scale systems. However, this complexity can be mitigated by employing variational circuits to approximate the ground state through energy expectation minimization. This underscores the requirement for designing an adequate ansatz to facilitate finding the ground state, while increasing expressibility thus potentially reducing the impact from the notorious barren plateau caused by hardware-efficient ansatz, large system size, and deep circuits <cit.>. In accordance with the adiabatic theorem, a system is expected to evolve while preserving its ground state integrity as long as the adiabatic criteria remain unviolated <cit.>. This ansatz mirrors the structure observed in digital adiabatic quantum computing, reminiscent of the quantum-approximated optimization algorithm (QAOA), which seeks to identify optimal annealing schedules <cit.>. This naturally leads to the assumption that one can introduce counterdiabaticity in the design of ansatz to enhance the performance of VQAs, akin to the approach seen in adaptive derivative-assembled problem tailored (ADAPT) VQAs <cit.>. This incorporation of counterdiabaticity aims to cancel induced energy excitation <cit.>, effectively serving as shortcuts to adiabaticity <cit.>. Most of these terms, involving many-body interactions, are difficult or even impossible to directly realize in realistic quantum systems, due to their complexity. However, this limitation no longer exists in ansatz design due to gate decomposition. By incorporating counter-diabatic (CD) interactions into VQAs, one can significantly improve performance at the same energetic cost or gate numbers, as recently demonstrated through rigorous benchmarking <cit.>. These digitized counter-diabatic quantum algorithms (DCQAs) have successfully determined the ground state of simpler many-body models and have found application in interdisciplinary fields such as molecular docking <cit.> and protein folding <cit.>, showcasing advantages over conventional AQAs <cit.>. In this article, we propose DCQAs to explore the ground state of the Fermi-Hubbard (FH) model on a honeycomb lattice. The extension beyond other lattice geometries introduces additional geometric complexity and distinctive electronic properties. Following the model description, we utilize DCQAs to solve for its ground state energy, and compare their performance against digital adiabatic algorithms aided by counterdiabaticity. Our results include a comprehensive analysis of Trotter error, adiabatic error, and ground state properties. 
Additionally, for larger system sizes, we simulate VQAs with up to 26 qubits, assessing the performance of both Hamiltonian variational ansatz and CD-inspired ansatz. Our results provide the possible feasibility of utilizing quantum computing in many-body physics and quantum materials research. § PRELIMINARIES ON MODEL AND HAMILTONIAN The FH model is a celebrated model in condensed matter physics for characterizing how electron interactions influence the behavior of quantum materials <cit.>. When extended to honeycomb lattices, this model introduces distinctive geometric aspects, offering the theoretical framework to study electronic phenomena and quantum phases in materials exhibiting hexagonal symmetry. By employing quantum algorithms, we can simulate this model on quantum computers, probing its ground state that encodes crucial information regarding the stable electronic arrangement and intrinsic properties of quantum materials within this geometric context. The system's Hamiltonian reads Ĥ_FH = -τ∑_⟨ i,j⟩, σc_i,σ^† c^† _j,σ + U∑_i c_i,↑^† c_i,↑^† c_i,↓^† c_i,↓^†, where Ĥ_h = -τ∑_⟨ i,j⟩, σc_i,σ^† c^† _j,σ and Ĥ_c= U∑_i c_i,↑^† c_i,↑^† c_i,↓^† c_i,↓^† are hopping and Coulomb terms respectively. Here, the notation ⟨ i,j ⟩ restricts adjoint sites on lattice, spin σ∈{↑,↓}, c_i,σ^† (c_i,σ^†) denotes creation (destruction) of an electron on site i with the spin σ, τ and U are respectively the hopping and Coulomb coupling coefficient. The sub-Hamiltonian is explained as follows: the hopping term Ĥ_h describes the action when an electron hops from site i to site j while keeping the spin σ. The Coulomb term Ĥ_c characterizes the Coulomb potential generated by electrons occupying the same site. The Hamiltonian for hopping Ĥ_h includes both horizontal and vertical terms, as depicted in Fig. <ref>(a) along the x and y axes. The lattice is arranged in a honeycomb pattern, with each site possessing two orbitals: one for spin up and the other for spin down. The hexagons are all regular and arranged in a zig-zag configuration. Until now, various methods have been proposed to map fermionic operators onto quantum computers, among which the Jordan-Wigner (JW) and Bravyi-Kitaev (BK) mappings stand out, for transformation of electronic states and operators to states of and operations upon qubits in hybrid classical-quantum algorithms <cit.>. For our proposal, the JW mapping is a favorable choice as it satisfies the canonical anticommutation relations of fermionic operators, {a_i^†, a_j^†}=δ_ij, where i,j are arbitrary sites in the honeycomb lattice, and it is cost-effective and feasible in the scalability of systems with dozens of qubits. The constructed qubit operators and the fermionic operators need to satisfy the following relation given by the JW transformation method: c_j^† ↦1/2(X_j - iY_j)Z_1⋯ Z_j-1, c_j ↦1/2(X_j + iY_j)Z_1⋯ Z_j-1, where c_j^† (c_j ) is the creation (destruction) operator of fermions, X_j, Y_j, Z_j are Pauli operators. In the honeycomb lattice with a single-layer structure, the orientation can be zig-zag to match the qubit and site numbers required for a comprehensive lattice description. Additionally, two qubits are assigned to each lattice point, as shown in Fig. <ref>(b), corresponding to the two orbitals present, namely spin up or down. For instance, fermions possessing spin up and down on site number 2 correspond to qubit numbers 4 and 5, respectively. 
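To make the Jordan-Wigner mapping concrete, the following numpy sketch builds the JW image of a creation operator as an explicit matrix and checks the canonical anticommutation relations. The zero-based qubit indexing and the four-qubit example are illustrative choices and are not tied to the honeycomb qubit layout described above.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_chain(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def creation(j, n_qubits):
    """JW image of c_j^dagger: Z-string on qubits < j, (X - iY)/2 on qubit j (0-based)."""
    ops = [Z] * j + [(X - 1j * Y) / 2] + [I2] * (n_qubits - j - 1)
    return kron_chain(ops)

n = 4
c0_dag, c1_dag = creation(0, n), creation(1, n)
c1 = c1_dag.conj().T
# Canonical anticommutation relations: {c_1, c_0^dagger} = 0, {c_1, c_1^dagger} = I.
print(np.allclose(c1 @ c0_dag + c0_dag @ c1, 0))             # True
print(np.allclose(c1 @ c1_dag + c1_dag @ c1, np.eye(2**n)))  # True
```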
§ SPEEDING UP ADIABATIC ALGORITHM There exist many algorithms successfully implemented on the strongly correlated electron system. In this section, we initially evaluate the performance of digitized AQAs, complemented by CD driving, serving as a reference for subsequent comparisons with DCQAs. Both approaches begin with the preparation of the initial state as the ground state of a suitable mixing Hamiltonian. Subsequently, we digitize the evolution of the Hamiltonian using quantum circuits. Finally, we measure the expectation values of joint spin operators to reconstruct the system's energy at the final time step. §.§ Initial state preparation The initial state for adiabatic evolution is chosen to be the ground state of Ĥ_h which is a fermionic Gaussian state and can be efficiently prepared on a quantum computer. As shown in Fig. <ref>(a), we start from the occupied orbits with the lowest energy, half-full configuration. X gates need to be applied on half of the spin-orbits to create one electron in each orbital. Then Givens rotation gates can be adjusted to prepare the initial state <cit.>. In the context of a honeycomb lattice, the complexity of preparing the initial state scales as 𝒪(N^2_site), where N_site represents the number of lattice sites. As the lattice size increases, the computational complexity becomes more closely associated with the number of qubits required, scaling as 𝒪(4N^2_q), indicating a quadratic increase in complexity with respect to the number of qubits. §.§ Adiabatic evolution and its acceleration For an adiabatic quantum process, the system gradually evolves from the ground state of the initial Hamiltonian to that of the final Hamiltonian. The Hamiltonian is given by Ĥ_a(t) = [1-λ(t)]Ĥ_i + λ(t)Ĥ_f, where Ĥ_i and Ĥ_f represent the initial and final Hamiltonian, respectively. The scheduling function λ(t) varies within the interval [0,1] when t∈ [0, T], where T is the total evolution time. According to the adiabatic theorem, if T is sufficiently large (inversely proportional to the energy gap), the system will remain in its instantaneous eigenstate. The choice of Ĥ_i is crucial, and it is typically selected based on the ease of constructing its ground state on a quantum computer. In the case of the FH model, the initial state Ĥ_i is the ground state of the hopping term Hamiltonian Ĥ_h <cit.> described in Eq. (<ref>). On the other hand, Ĥ_f corresponds to the full FH Hamiltonian Ĥ_FH, as explained in Eq. (<ref>). By defining Ĥ_i = H_h and Ĥ_f = H_FH, the Hamiltonian (<ref>) can be expressed as Ĥ_a(t) = Ĥ_h + λ(t)Ĥ_c. However, adiabatic quantum computing is inherently limited by its temporal demands, rendering the system vulnerable to noise and decoherence. To mitigate this, CD driving <cit.> (or equivalently transitionless algorithm <cit.>), one of shortcut-to-adiabaticity techniques <cit.>, can suppress non-adiabatic transitions, yielding the following Hamiltonian <cit.>: Ĥ_tot = Ĥ_a(t) + Ĥ_CD, where Ĥ_CD = λ̇Â_λ, with Â_λ being the gauge potential. While there are various techniques to compute the CD Hamiltonian, obtaining exact CD terms for many-body systems is challenged by exact diagonalization. To address this, the nested commutator (NC) method <cit.> is proposed to approximate CD term. The gauge potential approximation reads Â_λ^(l) = i∑_k=1^lα_k(t)[Ĥ_a, [Ĥ_a,⋯[Ĥ_a_2k-1,  ∂_λĤ]]], where l denotes the order of the NC expansions, and becomes the exact CD term when l →∞. 
After minimizing the action S = Tr[G^2], where G = ∂_λĤ_a - i[Ĥ_a, Â_λ], we determine the coefficients α_k(t) that define the approximate gauge potential. To implement the evolution on quantum circuits, we utilize the first-order Trotter-Suzuki formula. The unitary evolution operator can be discretized into N steps with a timestep length of δ t (see Appendix <ref>). The quantum circuits for implementing adiabatic algorithms with CD terms involved are illustrated in Fig. <ref>(b). Ideally, one would choose a large number of Trotter steps N. However, increased quantum circuit depth also increases susceptibility to gate error and decoherence. In addition, the errors in the Trotter approximation become more pronounced when N assumes a relatively small value. §.§ Measurement Ideally, obtaining detailed information about the quantum state, such as through quantum state tomography, allows for the calculation of the expected value of the Hamiltonian to determine the system's energy. In the context of the FH model, this involves reconstructing the expected value from the measured probabilities of the quantum state under newly defined basis vectors, achieved by diagonalizing each sub-Hamiltonian. In the JW transformation, the Coulomb interaction terms are represented by matrices of the form 1/4(I-Z_i)(I-Z_i+1), whose diagonal matrix elements can be measured directly in the computational basis. The expectation value of the Coulomb terms is determined by the sum of probabilities for each site being in the state |11⟩. The commutativity of all Coulomb terms allows for their simultaneous measurement. For hopping terms, particularly vertical hopping terms, measuring the corresponding Pauli string of qubits, represented by 1/2(X_iX_j + Y_iY_j)Z_i+1⋯ Z_j-1, poses a challenge due to its complexity. The Fermi-SWAP gate is utilized to map non-neighbouring qubits onto nearest-neighbouring ones, thereby eliminating the unnecessary Pauli Z operators in the string. To measure the hopping energy between qubits i and j, the Hamiltonian 1/2(X_iX_j + Y_iY_j) is diagonalized using a few quantum gates <cit.>. The measurement outcome is then expressed as the probability difference between the states |01⟩ and |10⟩. In the study of a 1 × 1 honeycomb lattice, determining the total energy involves summing probabilities from four distinct groups of measurements. The first group captures Coulomb terms, representing the local energy contributions at individual lattice sites. The next two groups encompass horizontal hopping terms, with the first set covering qubit pairs (0,2), (1,3), (6,8), (7,9) and the second set involving pairs (2,4), (3,5), (8,10), (9,11). These pairs represent the kinetic energy contributions from electrons hopping between adjacent sites horizontally. The fourth group targets vertical hopping terms involving pairs (4,7), (5,6), (0,11), (1,10), which account for vertical interactions between sites. To accurately reconstruct the lattice's total energy, simulations have been conducted using the adiabatic evolution with and without CD driving, in which measurements are taken 30,000 times at each T and N, for 10 instances. This extensive measurement strategy aims to mitigate statistical fluctuations and enhance the precision of the energy calculation, suggesting that even more measurements might be necessary for further accuracy. §.§ Simulation and error analysis Due to limitations in quantum resources, simulating large-scale grid models using adiabatic algorithms poses challenges.
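The classical post-processing behind this measurement scheme can be sketched as follows. The snippet assumes raw bitstring counts in which the leftmost character corresponds to qubit 0, and applies the rules stated above: Coulomb contributions from the probability of |11⟩ on each site's two qubits, and hopping contributions as the probability difference P(01) - P(10) read out after the diagonalizing gates. The bit-ordering convention, the made-up counts, and the handling of the -τ and U prefactors by the caller are assumptions of this sketch, not specifications from the paper.

from collections import Counter

def probs(counts):
    # normalize raw bitstring counts, e.g. {'0110...': 812, ...} -> probabilities
    shots = sum(counts.values())
    return {b: n / shots for b, n in counts.items()}

def coulomb_energy(counts, site_pairs, U):
    # U * sum over sites of P(both spin orbitals of the site measured as |1>)
    p = probs(counts)
    return U * sum(pr for b, pr in p.items()
                   for (q_up, q_dn) in site_pairs if b[q_up] == '1' and b[q_dn] == '1')

def hopping_term(counts, q_i, q_j):
    # <(X_iX_j + Y_iY_j)/2> reconstructed as P(01) - P(10) in the rotated basis
    p = probs(counts)
    return sum(pr for b, pr in p.items() if b[q_i] == '0' and b[q_j] == '1') \
         - sum(pr for b, pr in p.items() if b[q_i] == '1' and b[q_j] == '0')

# toy usage with fabricated counts on a 4-qubit register
fake_counts = Counter({'1100': 400, '0110': 350, '1001': 250})
E_c = coulomb_energy(fake_counts, site_pairs=[(0, 1), (2, 3)], U=4.0)
t_02 = hopping_term(fake_counts, 0, 2)     # to be combined with the -tau prefactor by the caller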
The simulation results discussed here focus on estimating the ground energy of a 1× 1 honeycomb lattice, utilizing quantum circuits through adiabatic algorithms with and without CD interactions. The simulation begins from the ground state of an initial Hamiltonian, denoted by |ψ_i⟩, which corresponds to the hopping term Hamiltonian Ĥ_h, and aims to reach the ground state of the full FH Hamiltonian Ĥ_FH, represented by |ψ_f⟩. The annealing schedule used here is λ(t)=sin^2[π/2sin^2(π t/2T)] <cit.>, which modulates the interpolation between the initial Hamiltonian Ĥ_h and the final Hamiltonian Ĥ_FH over the evolution time T. CD terms, incorporating additional terms to suppress excitation that deviate from the ground state, demonstrate a significant advantage by achieving the ground energy in a shorter time. As shown in Fig. <ref>(a), this advantage is particularly profound before the time T<2.5, where the CD terms effectively prevent excitation energy from the ground one. The analysis specifically considers first-order NC CD terms, including two-body and many-body interactions, obtained by the method detailed in Appendix <ref>. In Fig. <ref>(b), with the time step δ t = 0.02 and total evolution time T=1, increasing the Trotter number results in a remarkable reduction of ground energy for counter-adiabatic evolution. This suggests that incorporating higher-order CD terms could further refine the ground energy estimation of the FH model, albeit at the cost of increased circuit complexity. The error bars noted in both Fig. <ref>(a) and (b) reflect the statistical uncertainty arising from multiple measurements, emphasizing the probabilistic nature of quantum measurements in these simulations. To further justify the advantage, we analyze the errors for digitized adiabatic algorithms in terms of various δ t and N. We define the error by the distance from ground energy to the estimated energy, expressed as: Δ E = | ⟨ψ_g|Ĥ_FH|ψ_g⟩-⟨ψ(T)|Ĥ_FH|ψ(T)⟩/⟨ψ_g|Ĥ_FH|ψ_g⟩ - ⟨ψ_i|Ĥ_FH|ψ_i⟩| × 100%. where |ψ_g⟩ and |ψ_i⟩ are the ground state of the FH model and initial Hamiltonian, respectively, and |ψ(T)⟩ represents the evolved state at T= N×δ t in the circuit, depending on the Trotter step N and Trotter time δ. Fig. <ref>(c) illustrates the relative errors for digitized adiabatic evolution and its improvement by CD interactions. The advantage of CD driving is apparent when T is relatively small, and both approaches can approximate the ground state energy effectively when T is relatively large. However, it is notable that with fewer Trotter steps (N=5), the adiabatic error is larger, especially when δ t=0.1. In contrast, the CD-assisted evolution demonstrates better suppression of the Trotter error. Indeed, the results are consistent with the analysis of digitized CD driving, demonstrating its advantages in both computational cost and performance in the literature <cit.>. In examining quantum circuit complexities for simulating honeycomb lattice configurations within the FH model, a distinct variation emerges between adiabatic and non-adiabatic protocols with CD terms. In conventional adiabatic evolution, the complexity depends linearly on the lattice site count, N_site. On the contrary, non-adiabatic evolution assisted by CD terms incurs an additional complexity order of 𝒪(N_site), effectively doubling the gate count relative to adiabatic evolution. This escalation in complexity, while facilitating expedited convergence to the system's ground state, poses increased demands on quantum computational resources. 
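Before weighing this trade-off further, the relative-error metric ΔE defined in the equation above can be evaluated with a one-line helper, shown below; the numerical values in the usage example are invented purely for illustration.

def relative_error(E_est, E_ground, E_init):
    """Distance of the estimated energy from the true ground energy, normalized by
    the initial-to-ground energy gap, in percent (cf. the definition of Delta E above)."""
    return abs((E_ground - E_est) / (E_ground - E_init)) * 100.0

# hypothetical numbers, for illustration only
print(relative_error(E_est=-11.2, E_ground=-11.8, E_init=-8.0))   # ~15.8 %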
Therefore, the trade-off between simulation speed and quantum gate numbers requires careful consideration, particularly in the context of gate fidelity and operational coherence times. Moreover, we have compared adiabatic and CD-assisted non-adiabatic evolution at the same gate number, as depicted in Fig. <ref>. In our analysis, we find that within the range of 7 Trotter steps, comprising approximately 6500 quantum gates, counter-diabatic (CD) driving outperforms adiabatic evolution. However, as the number of quantum gates increases, the Trotter error surpasses the adiabatic error, leading to the inefficiency of CD. This reduced efficiency of the CD method at equivalent depths is due to its inherently higher complexity, which negatively affects its performance compared to the adiabatic approach when both are subjected to the same number of gate constraints. This suggests that there is potential for further investigation into the optimal compilation process to reduce the number of CNOT gates and consequently the complexity of the circuits <cit.>. Rather than implementing the entire Hamiltonian specified in Eq. (<ref>) within quantum circuits, we finally check the evolution of Hamiltonian with only CD term. As illustrated in Fig. <ref> (a)-(b), considering only CD terms initially accelerates the system. However, over time, it fails to reach the ground state compared to cases involving adiabatic or non-adiabatic protocols assisted with CD terms. This discrepancy arises from the fact that approximate CD terms introduce large adiabatic error as well as non-negligible Trotter error <cit.>, while exact CD driving reproduces the dynamics of Ĥ_tot (<ref>), up to a phase factor <cit.>, without any adiabatic error. Therefore, this observation provides further insight into the trade-off between circuit depth (complexity) and errors (accuracy). § VARIATIONAL QUANTUM ALGORITHM Unlike AQAs, which fix parameters, VQAs optimize parameters using classical optimizers, for instance, the Adagrad optimizer used here. This distinction allows for the implementation of more efficient and shallower quantum circuits, facilitating the determination of ground state energy within the reduced evolution time. Therefore, in this section, we turn to the study of VQAs for our proposal with CD interaction. In general, the quantum state |ψ(θ)⟩ is generated by applying the unitary operator to initial state, e.g., |ψ(θ)⟩ = U(θ)|ψ_0⟩, where U(θ) can be implemented by quantum circuits and θ=(θ_0, θ_1, ⋯, θ_m) are the parameters that need to be optimized. The cost function can be defined as C(θ) = ⟨ψ(θ)|H_p|ψ(θ)⟩, where H_p is the problem Hamiltonian. The VQA algorithm aims to obtain the optimal parameters θ^*, which will minimize the cost function and ultimately result in achieving the lowest energy possible. In what follows we will introduce two types of variational ansatz: Hamiltonian variational (HV) ansatz and CD-inspired ansatz, as suggested by DAQAs. <cit.> §.§ Hamiltonian variational ansatz The HV ansatz is inspired by the adiabatic theorem, which explains that a sufficient slow evolution will maintain the ground state of the system <cit.>. The quantum circuits utilized here closely follow the method proposed in Ref. <cit.>, serving as essential components in the implementation of adiabatic evolution. However, a notable challenge arises in the current landscape of quantum computing, particularly in implementing the vertical terms of the hopping Hamiltonian across non-adjacent qubit locations. 
To tackle the complexity associated with non-adjacent qubits in the hopping terms, a key element, the Fermi-SWAP gate, is introduced to transform them into adjacent qubits. This gate proves to be a crucial tool for efficiently swapping qubits, thereby transforming non-adjacent qubit interactions into adjacent ones. More importantly, the incorporation of a phase factor with a value of -1 ensures the preservation of fermionic exchange properties during the interchange of the two qubit indices. Alongside this, the matrix representation of the Fermi-SWAP gate is given by: FSWAP = ( [ 1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 -1 ]), which facilitates the exchange. For the hopping Hamiltonian (Ĥ_h), when the qubits involved in the hopping term are adjacent, the evolution operator and its matrix representation can be depicted as follows: U_h(β) = e^-i βΣ̂_h = [ 1 0 0 0; 0 cosβ -isinβ 0; 0 -isinβ cosβ 0; 0 0 0 1 ], where Σ̂_h = 1/2(X_0X_1 + Y_0Y_1). Given that the qubit indices for each Coulomb term (Ĥ_c) are adjacent, representing two spins at the same site, SWAP operations are unnecessary. Therefore, the matrix representation of its evolution operator U_c(γ) can be written as U_c(γ) = e^-iγΣ̂_c = [ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 e^-iγ ], where Σ̂_c = 1/4(I-Z_0-Z_1+Z_0Z_1). Both parameters β and γ are determined by the classical optimizer, as they are essential components of VQAs. The circuits corresponding to the HV ansatz are depicted in Fig. <ref>(c), where the Coulomb and hopping terms are present in the circuit. The Fermi-SWAP operation acts on the circuit and modifies the order of qubits. In pursuit of enhanced flexibility for optimization, we adopt a strategy wherein every Pauli operator is encoded with a free parameter. §.§ CD-inspired ansatz The structure of the CD-inspired ansatz is expressed as follows: U_CD(θ) = ∏_k e^-iθ_k Σ̂_k, where the operators Σ̂_k are obtained from the NC method, corresponding to the gauge potential Â_λ. In general, combining the Hamiltonian variational ansatz with the CD-inspired ansatz in VQAs is possible, but it often leads to a significant increase in circuit depth and complexity. Since CD driving can mimic the full dynamics of the Hamiltonian, up to the dynamic phase <cit.>, we can exclusively employ the CD-inspired ansatz for our purposes. This offers the significant benefit of eliminating many terms from the original Hamiltonian, allowing for the implementation of lower-depth circuits compatible with near-term quantum devices. This approach utilizes operators selected from the CD pools, with each term incorporating its own set of free parameters. These parameters are subject to optimization via a classical optimizer, facilitating an efficient exploration of the parameter space to achieve optimal simulation results. To achieve this, the first-order CD operator pool needs to be computed, including two-body interactions, four-body interactions, and many-body interactions. In the VQA algorithms, we solely include the two-body interactions as the most fundamental and hardware-feasible elements. These chosen terms, denoted as X_0Y_2, Y_0X_2, …, X_9Y_11, Y_9X_11, correspond to specific Pauli terms, representing the crucial interactions within the system. A schematic representation of the CD-inspired ansatz utilized for the FH model is depicted in Fig. <ref>(d), illustrating the structure with two-body interactions. §.§ Classical optimizer In VQAs, the optimization of parameters is crucial and is accomplished by a classical computer aiming to minimize a cost function, thereby identifying the optimal set of parameters.
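Before turning to the optimizer, the ansatz building blocks quoted above (the Fermi-SWAP gate and the evolution operators U_h(β) and U_c(γ)) can be written out and checked against the matrix exponential; the NumPy/SciPy sketch below does exactly that. Parameter values are arbitrary and the snippet is an illustration of the closed-form matrices, not the circuit-construction code used in this work.

import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

FSWAP = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, -1]], dtype=complex)

Sigma_h = 0.5 * (np.kron(X, X) + np.kron(Y, Y))
Sigma_c = 0.25 * (np.kron(I2, I2) - np.kron(Z, I2) - np.kron(I2, Z) + np.kron(Z, Z))

def U_h(beta):
    return expm(-1j * beta * Sigma_h)      # should match the closed form quoted above

def U_c(gamma):
    return expm(-1j * gamma * Sigma_c)     # diag(1, 1, 1, e^{-i gamma})

beta, gamma = 0.3, 0.7
ref_h = np.array([[1, 0, 0, 0],
                  [0, np.cos(beta), -1j * np.sin(beta), 0],
                  [0, -1j * np.sin(beta), np.cos(beta), 0],
                  [0, 0, 0, 1]])
assert np.allclose(U_h(beta), ref_h)
assert np.allclose(U_c(gamma), np.diag([1, 1, 1, np.exp(-1j * gamma)]))
assert np.allclose(FSWAP @ FSWAP, np.eye(4))   # FSWAP is its own inverse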
Indeed, various optimizers, such as Adam, Adagrad, and SGD can also be utilized for various tasks, each offering distinct advantages and disadvantages <cit.>. Therefore, the choice of optimizers depends on factors such as the nature of the optimization problem, the structure of the neural network, and computational resources available. The Adagrad optimizer, a widely recognized optimization technique in the realms of machine learning and deep learning, offers an efficient approach to parameter optimization by dynamically adjusting the learning rate for each parameter, as detailed in  <cit.>. The core principle of Adagrad lies in its mechanism to monitor and accumulate the squared gradients for each parameter over time. This accumulation process is particularly advantageous in handling sparse data and scenarios where the gradient magnitude varies significantly across parameters. By adapting learning rates on an individual parameter basis, Adagrad emerges as a robust and versatile optimization algorithm, especially in contexts where parameter sensitivities differ markedly. Mathematically, the Adagrad optimizer computes the updates for the parameters using the following formulas: θ_t+1 = θ_t - η/√(G_t + ϵ)·∇ J(θ_t), where η is the global learning rate, G_t is the accumulated squared gradient for the parameter θ up to iteration t, and ϵ is a small constant added for numerical stability. §.§ Simulation To evaluate the performance of VQA in determining the ground energy of the FH model structured on a honeycomb lattice, we conduct a comparative analysis of the HV ansatz and CD-inspired ansatz. For a fair comparison, we utilize quantum circuits consisting of a single layer for each type of ansatz. Employing the Adagrad optimizer, known for its effectiveness in adapting learning rates based on the parameter's gradient history, we set the learning rate (η) at 0.05. Due to the significant influence of initial parameters on the outcomes, we initialize these parameters using a uniform distribution within the interval (0, 1). This random initialization as the starting point ensures an unbiased assessment of each ansatz's ability to efficiently approximate the ground state energy of the model. In simulations involving 1× 1, 1× 2, and 2×1 honeycomb lattices requiring 12, 20, and 26 qubits respectively, the performances of HV ansatz and CD-inspired ansatz are compared. Results, as depicted in Fig. <ref>(a)-(c), indicate that CD-inspired ansatz yields a closer approximation to the ground state energy than the HV ansatz. Furthermore, the CD-inspired ansatz achieves convergence significantly faster than the HV ansatz. When subjected to various noise sources, such as amplitude damping, bit flip, and phase flip noise, each with a probability of 0.01, the HV ansatz encounters challenges in achieving convergence, as depicted in Fig. <ref>(d)-(f). On the contrary, the CD-inspired ansatz exhibits robustness, maintaining convergence even in the presence of noise. This underscores its potential effectiveness in noisy quantum computing environments. Next, we compare the quantum circuit complexities across different honeycomb lattice configurations. The number of two-body interactions in the CD-inspired ansatz directly correlates with the number of grid points, denoted as N_CD = 2N_site - 4. Given that each two-body operator XY can be decomposed into 7 basic quantum gates shown in Fig. <ref>(e), the total number of quantum gates for the CD-inspired ansatz is approximately 𝒪(N_CD). 
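As a brief aside on the classical optimization loop described above, the Adagrad update of the preceding equation amounts to the element-wise rule sketched below. The default η = 0.05 mirrors the learning rate quoted in the text, while cost_and_grad is a hypothetical placeholder for the circuit-evaluation routine (for example, parameter-shift gradients of C(θ)); it is not an actual function from this work.

import numpy as np

def adagrad_step(theta, grad, state, eta=0.05, eps=1e-8):
    # one Adagrad update: per-parameter learning rates from accumulated squared gradients G_t
    state += grad ** 2
    theta = theta - eta * grad / np.sqrt(state + eps)
    return theta, state

# hypothetical usage inside a VQA loop
theta = np.random.uniform(0.0, 1.0, size=8)    # random initialization in (0, 1), as in the text
state = np.zeros_like(theta)
# for step in range(200):
#     value, grad = cost_and_grad(theta)       # placeholder: evaluate C(theta) and its gradient
#     theta, state = adagrad_step(theta, grad, state)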
On the other hand, for the HV ansatz, the quantum circuit's architecture consists of three main components: the hopping evolution operator, the Coulomb evolution operator, and the fermionic SWAP unit. This structure remains consistent across various honeycomb lattice sizes, with the total quantum gate count adhering to the formula N_HV = N_Coulomb + N_hopping + N_swap. The gate count for each component is inherently linked to the number of lattice sites. Specifically, N_Coulomb equals the number of sites N_site, N_hopping is twice the number of sites 2N_site, and N_swap is a function of the lattice configuration, calculated as 2∑_i∈{x,y} N_site^i ( N_site^i/2 + (N_site^i-2)/2 ). Here, the SWAP operation counts, N_site^i/2 and (N_site^i-2)/2, account for two distinct types of SWAP gates required within the lattice. As the size of the honeycomb lattice increases, the scaling of circuit complexity between the CD-inspired ansatz and the HV ansatz diverges significantly. In the case of the CD-inspired ansatz, the complexity scales linearly with the number of sites, denoted as 𝒪(N_site), reflecting a direct correlation between lattice size and resource requirements. Conversely, for the HV ansatz, the complexity exhibits quadratic growth, represented as 𝒪(N_site^2), due to the quadratic increase in SWAP operations as the lattice expands. This different scaling behavior implies that the CD-inspired ansatz is more efficient for larger lattices, whereas the HV approach demands a rapidly growing number of quantum gates. Finally, let us compare the results shown in Figs. <ref> and <ref>, highlighting the advantages and disadvantages of VQAs and AQAs, both assisted by CD driving. Due to the random initialization, the search space in VQA is enlarged, potentially leading to suboptimal results compared to adiabatic evolution. Consequently, the ground energy estimated by VQAs can be less accurate. On the other hand, AQAs demand significant computational resources, especially when simulating large-scale quantum systems. For example, according to our estimates, lattice configurations like 1× 2 and 2× 1 require approximately 50,000 quantum gates, while a 1× 3 lattice needs about 60,000 quantum gates (up to 26 qubits), even without CD interactions involved. These challenges render the applicability of AQAs impractical on near-term quantum devices. § CONCLUSION AND OUTLOOK In summary, our study has utilized CD interactions to compute the ground energy of the honeycomb lattice within the FH model, using both AQAs and VQAs. To facilitate the simulation of one-dimensional (1 × 1 and 1 × 2) and two-dimensional (2 × 1) honeycomb lattices up to 26 qubits with manageable circuit depth, we have introduced the CD-inspired ansatz, drawn from the concept of shortcuts to adiabaticity. This choice, different from the conventional HV ansatz, has exhibited superior performance, particularly under various noise conditions. These findings not only contribute to the accurate calculation of quantum material energy within specific regions but also suggest potential applications in exploring properties of materials such as artificial graphene <cit.>, high-temperature superconductors <cit.> and quantum dots <cit.>, possibly with the extended FH model. In addition, we have demonstrated the efficacy of our approaches compared to traditional AQAs and VQAs in accurately determining ground state energy.
Comprehensive analysis and error evaluation suggest that the performance of both algorithms is improved by CD driving, allowing for a reduction in circuit depth and consequently a decrease in the number of gates required. More specifically, the introduction of CD-inspired ansatz within the VQA framework improves effectiveness in accurately approximating ground state energies by reducing circuit depth and complexity, especially under various noise conditions. Although the accuracy in the case of a 1× 1 honeycomb lattice structure is worse than that of the AQA, the CD-inspired ansatz offers a promising avenue for mitigating challenges associated with large-scale qubits. The latter allows us to obtain the ground state of a 2 × 1 honeycomb lattice structure with certain accuracy and shallower quantum circuits, using systems with up to 26 qubits. Indeed, AQAs and VQAs have their own pros and cons. Our results indicate that VQAs even with CD-inspired ansatz suffer the disadvantage, such as sensitivity to initialization and barren plateaus <cit.>, commonly encountered in optimization landscapes. Unlike AQAs, VQAs lack rigorous theoretical guarantees of convergence to the ground state, making their performance highly dependent on the choice of ansatz and optimization method. Moving forward, these suggest that Krylov space <cit.> could be useful for selecting the ansatz, particularly for large-scale quantum systems. Despite these challenges, we remain hopeful that with advancements in quantum hardware capabilities, our methods could play a crucial role in the comprehensive study of a broader range of materials, offering enhanced insights into their quantum mechanical properties and potentially contributing to the development of new materials with desirable electronic characteristics. This work is supported by the Basque Government through Grant No. IT1470-22, the project grant PID2021-126273NB-I00 funded by MCIN/AEI/10.13039/501100011033, by “ERDFA way of making Europe", “ERDF Invest in your Future", EU FET Open Grant EPIQUS (899368), HORIZON-CL4-2022-QUANTUM-01-SGA project 101113946 OpenSuperQPlus100 of the EU Flagship on Quantum Technologies, the Spanish Ministry of Economic Affairs and Digital Transformation through the QUANTUM ENIA project call-Quantum Spain project, NSFC (12075145 and 12211540002), the Innovation Program for Quantum Science and Technology (2021ZD0302302), and the China Scholarship Council (CSC) under Grant Nos: 202206890003, 202306890004. Y.B. acknowledges support from the Spanish Government via the project PID2021-126694NA-C22 (MCIU/AEI/FEDER, EU). G.P. was supported by Spain’s MINECO through Grant No. PID2020- 117787GB-I00 and by Spanish National Research Council (CSIC) Research Platform PTI-001. X.C. acknowledges ayudas para contratos Ramón y Cajal–2015-2020 (RYC-2017-22482). § CALCULATION OF CD TERMS Within the CD operator pool for the FH model, NCs generate various terms, including two-body, four-body, and many-body interactions. While two-body interactions are less resource-intensive, incorporating many-body interactions corresponding to high-order NCs is crucial for enhancing simulation accuracy. This appendix delves into the specifics of NCs within the FH model, emphasizing their role in accurately capturing the system's dynamics. Despite the increased quantum resources required for many-body interactions, they are essential for a comprehensive simulation, providing a deeper understanding of the model's complex behaviors and interactions. 
Now, we only consider two sites, with hopping connecting the qubit pairs (0,2) and (1,3). The Hamiltonian is H = a^†_0a_0a^†_1 a_1+a^†_2a_2a^†_3 a_3 + a^†_0a_2 + a^†_2a_0 + a^†_1a_3 + a^†_3a_1. Under the JW transformation, the creation and annihilation operators are mapped to qubit operators, such that, up to overall constant prefactors, H_c = 2I-Z_0-Z_1-Z_2-Z_3+Z_0Z_1+Z_2Z_3, H_h = 0.5(Y_0Z_1Y_2 + X_0Z_1X_2 + Y_1Z_2Y_3 + X_1Z_2X_3). Since the initial Hamiltonian is chosen as the hopping Hamiltonian, the first-order NC is given by the commutator of the Coulomb and hopping Hamiltonians, that is, H_CD ∝ [H_c, H_h], which (up to the signs and prefactors of the individual terms) contains the Pauli strings X_0Y_2 + Y_0X_2 + X_0Z_1Y_2Z_3 + Y_0Z_1X_2Z_3 + X_1Y_3 + Y_1X_3 + Z_0X_1Z_2Y_3 + Z_0Y_1Z_2X_3. When incorporating vertical hopping terms into the hopping Hamiltonian, the resulting CD Hamiltonian (H_CD) inevitably encompasses higher-order interaction terms. This complexity arises from the necessity to counteract the non-adiabatic transitions induced by these additional hopping interactions. The JW transformation, when applied to this expanded Hamiltonian, retains the intrinsic relationships between fermionic operators. This transformation effectively maps the fermionic creation and annihilation operators to Pauli matrices, preserving the algebraic structure of the fermionic system within a spin-based framework. Consequently, when computing the NCs from the original Hamiltonian (<ref>), the JW transformation ensures that the outcome remains consistent, reflecting the preserved fermionic operator relationships. § DIGITIZED QUANTUM CIRCUITS In this appendix, we will introduce how to digitize adiabatic quantum computing. For a system evolving under a Hamiltonian H(t), the continuous-time evolution operator can be formulated as U(0, T) = 𝒯 exp[-i∫_0^T dt Ĥ(t)], where 𝒯 is the time-ordering operator. The discretization of the evolution operator can be designed using the Trotter-Suzuki approximation U(0,T) ≈∏_j=1^p∏_m=1^M exp{-iH_m(jΔ t)Δ t }, where H(t) is decomposed into M local terms and p refers to the number of circuit layers. If p is large enough, the discretized evolution reproduces U(0, T), and hence the target state, to high accuracy. Discretizing the adiabatic evolution, as implemented in superconducting circuits <cit.>, also enables counter-adiabatic evolution. Fig. <ref>(a)-(c) delineates the circuit decomposition for various evolution operators discussed within this work. Specifically, for the hopping evolution operator U_h(β), the quantum circuits incorporate rotation Z gates and √( iSWAP) gates, illustrating the implementation strategy for hopping interactions. In contrast, the Coulomb evolution operator U_c(γ) extends the circuit complexity by including both rotation X and Z gates alongside √( iSWAP) gates, reflecting the additional requirements for simulating Coulomb interactions within the quantum framework. The construction of the Fermi-SWAP operator, as shown in part (d) of Fig. <ref>, employs rotation Z gates and √( iSWAP) gates, underscoring the methodology for effectuating fermion swapping within the lattice simulation. Lastly, Fig. <ref>(e) also exemplifies the decomposition of two-body interaction terms specific to the CD-inspired ansatz, such as the operator e^-iX_0X_2t/ħ. This decomposition utilizes basis quantum gates, including the Hadamard gate, Rotation gate, and CNOT gate, to effectively simulate the desired interaction within the quantum circuit framework.
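The decomposition of such two-body terms can be checked numerically: up to qubit relabeling (and setting ħ = 1), exp(-iθ X⊗X) equals a Hadamard basis change around a CNOT-conjugated single-qubit Z rotation. The SciPy sketch below verifies this identity; the exact gate sequence shown in the figure may differ in ordering or in the use of √iSWAP-based primitives, so this is an illustrative consistency check rather than a reproduction of that circuit.

import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)            # Hadamard
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def Rz(phi):
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

def xx_rotation(theta):
    # exp(-i theta X (x) X) built from Hadamards, CNOTs and one Rz
    basis_change = np.kron(H, H)
    core = CNOT @ np.kron(I2, Rz(2 * theta)) @ CNOT                    # exp(-i theta Z (x) Z)
    return basis_change @ core @ basis_change

theta = 0.37
assert np.allclose(xx_rotation(theta), expm(-1j * theta * np.kron(X, X)))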
This detailed decomposition showcases the versatility and complexity of quantum circuit design necessary to accurately simulate the dynamic interactions characteristic of the FH model on a honeycomb lattice, facilitating a deeper understanding of quantum material properties. 999 2019_Smith_npj Smith, A., Kim, M. S., Pollmann, F., & Knolle, J. Simulating quantum many-body dynamics on a current digital quantum computer, https://doi.org/10.1038/s41534-019-0217-0 Npj Quantum Inf. 5, 106 (2019). Iulia2009Science Buluta1, I. & Nori, F. Quantum simulators, https://www.science.org/doi/full/10.1126/science.1177838?casa_token=uSXQkvU3jr8AAAAA Science 326, 108-111 (2009). Chris2020PRB Cade, C., Mineh, L., Montanaro, A., & Stanisic, S. Strategies for solving the Fermi-Hubbard model on near-term quantum computers, https://link.aps.org/doi/10.1103/PhysRevB.102.235122 Phys. Rev. B 102, 235122 (2020). Dave2015PRA Wecker, D. et al. Solving strongly correlated electron models on a quantum computer, https://link.aps.org/doi/10.1103/PhysRevA.92.062318 Phys. Rev. A 92, 062318 (2015). Stanisic2022NC Stanisic, S. et al. Observing ground-state properties of the Fermi-Hubbard model using a scalable algorithm on a quantum computer, https://doi.org/10.1038/s41467-022-33335-4 Nat. Commun. 13, 5743 (2022). Nathalie_2021_Nature de Leon, N. P. et al. Materials challenges and opportunities for quantum computing hardware, https://www.science.org/doi/10.1126/science.abb2823 Science 372, 2823 (2021). Ma_2020_npj Ma, H., Govoni, M., & Galli, G. Quantum simulations of materials on near-term quantum computers, https://doi.org/10.1038/s41524-020-00353-z Npj Comput. Mater. 6, 85 (2020). Bela_2020_Chemical Bauer, B., Bravyi, S., Motta, M., & Chan, G. K-L. Quantum algorithms for quantum chemistry and quantum materials science, https://doi.org/10.1021/acs.chemrev.9b00829 Chem. Rev. 120, 22, 12685-12717 (2020). ObiolPRA2022 Pérez-Obiol, A., et al. Adiabatic quantum algorithm for artificial graphene, https://doi.org/10.1103/PhysRevA.106.052408 Phys. Rev. A 106, 052408 (2022). Benjamin_2022_JRSoc Cordier, B. A., Sawaya, N. P. D., Guerreschi, G. G., & McWeeney, S. K. Biology and medicine in the landscape of quantum advantages, https://doi.org/10.1098/rsif.2022.0541 J. R. Soc. Interface 19, 1920220541, 12685-12717 (2022). Lanyon_2010_Nature Lanyon, B. P., et al. Towards quantum chemistry on a quantum computer, https://doi.org/10.1038/nchem.483 Nat. Chem. 2, 106–111 (2010). Yudong_2019_Chem Cao, Y.-D. et al. Quantum Chemistry in the Age of Quantum Computing, https://doi.org/10.1021/acs.chemrev.8b00803 Chem. Rev. 119, 19, 10856–10915 (2019). Sam_2020_RMP McArdle, S., Endo, S., Aspuru-Guzik, A., Benjamin, S. C., and Yuan, X. Quantum computational chemistry, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.92.015003 Rev. Mod. Phys. 92, 015003 (2020). Fedorov2021_NC Fedorov, A. K. & Gelfand M. S. Towards practical applications in quantum computational biology, https://doi.org/10.1038/s43588-021-00024-z Nat. Comput. Sci. 1, 114–119 (2021). Marx2021_NC Marx, V. Biology begins to tangle with quantum computing, https://doi.org/10.1038/s41592-021-01199-z Nat. Methods. 18, 715–719 (2021). nuclear Pérez-Obiol, A., Romero, A. M., Menéndez, J., Rios, A., García-Sáez, A., & Juliá-Díaz, B., Nuclear shell-model simulation in digital quantum computers, https://www.nature.com/articles/s41598-023-39263-7Sci. Rep. 13, 12291 (2023). Cerezo2021NRP Cerezo, M. et al. Variational quantum algorithms, https://doi.org/10.1038/s42254-021-00348-9 Nat. Rev. 
Phys. 3, 625–644 (2021). Wecker2015PRA Wecker, D., Hastings, M. B., & Troyer, M. Progress towards practical quantum variational algorithms, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.92.042303 Phys. Rev. A 92, 042303 (2015). Lennart2021PRL Bittel, L. & Kliesch, M. Training Variational Quantum Algorithms Is NP-Hard, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.127.120502 Phys. Rev. Lett. 127, 120502 (2021). Tyson2019PRA Jones, T., Endo, S., McArdle, S., Yuan, X., & Benjamin, S. C., Variational quantum algorithms for discovering Hamiltonian spectra, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.99.062304 Phys. Rev. A 99, 062304 (2019). Kandala2017Nature Kandala, A. et al. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets, https://www.nature.com/articles/nature23879 Nature 549, 242 (2017). Romero2018QST Romero, J., Babbush, R., McClean, J. R., Hempel, C., Love, P. J. & Aspuru-Guzik, A. Strategies for quantum computing molecular energies using the unitary coupled cluster ansatz, https://iopscience.iop.org/article/10.1088/2058-9565/aad3e4 Quantum. Sci. Technol. 4, 014008 (2018). Shen2017 Shen, Y.-C. et al, Quantum implementation of the unitary coupled cluster for simulating molecular electronic structure, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.95.020501 Phys. Rev. A 95, 020501 (2017). McClean2018NC McClean, J. R., Boixo, S., Smelyanskiy, V. N., Babbush, R. & Neven, H. Barren plateaus in quantum neural network training landscapes, https://doi.org/10.1038/s41467-018-07090-4 Nat. Commun. 9, 4812 (2018). Pesah2021PRX Pesah, A., Cerezo, M., Wang, S., Volkoff, T., Sornborger, A. T. & Coles, P. J. Absence of Barren Plateaus in Quantum Convolutional Neural Networks, https://journals.aps.org/prx/cited-by/10.1103/PhysRevX.11.041011 Phys. Rev. X 11, 041011 (2021). RevModPhys Albash, T. & Lidar, D. A. Adiabatic quantum computation, https://link.aps.org/doi/10.1103/RevModPhys.90.015002 Rev. Mod. Phys. 90, 015002 (2018). Chandarana2022PRR Chandarana, P. et al. Digitized-counterdiabatic quantum approximate optimization algorithm, https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.4.013141 Phys. Rev. Res. 4, 013141 (2022). Sophia1 Grimsley, H. R. et al. An adaptive variational algorithm for exact molecular simulations on a quantum computer, https://doi.org/10.1038/s41467-019-10988-2Nat. Commun. 10, 3007 (2019). Sophia2 Zhu, L. et al. Adaptive quantum approximate optimization algorithm for solving combinatorial problems on a quantum computer, 10.1103/PhysRevResearch.4.033029Phys. Rev. Rsearch 4, 033029 (2022). Rice Demirplak, M. & Rice, S. A., Adiabatic population transfer with control fields, https://pubs.acs.org/doi/10.1021/jp030708aJ. Phys. Chem. A 107, 9937 (2003). Berry Berry, M. V. Transitionless quantum driving, https://doi.org/10.1021/jp030708aJ. Phys. A: Math. Theor. 42, 365303 (2009). Chen2010PRL Chen, X. , et al. Fast optimal frictionless atom cooling in harmonic traps: Shortcut to adiabaticity, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.104.063002 Phys. Rev. Lett. 104, 063002 (2010). Odelin2019RMP Guéry-Odelin, D., Ruschhaupt, A., Kiely, A., Torrontegui, E., Martínez-Garaot, S. & Muga, J. G. Shortcuts to adiabaticity: Concepts, methods, and applications, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.91.045001 Rev. Mod. Phys. 91, 045001 (2019). Xu2024Arxiv Xu, R.-Q. et al. 
Benchmarking hybrid digitized-counterdiabatic quantum optimization, https://link.aps.org/doi/10.1103/PhysRevResearch.6.013147Phys. Rev. Research, 6, 013147 (2024). Ding2023Arxiv Ding, Q.-M., Huang, Y.-M. & Yuan, X. Molecular docking via quantum approximate optimization algorithm, https://journals.aps.org/prapplied/abstract/10.1103/PhysRevApplied.21.034036 Phys. Rev. Applied 21, 034036 (2024) Pranav2023PRA Chandarana, P., Hegade, N. N., Montalban, I., Solano, E. & Chen, X. Digitized Counterdiabatic Quantum Algorithm for Protein Folding, https://link.aps.org/doi/10.1103/PhysRevApplied.20.014024 Phys. Rev. Appl. 20, 014024 (2023). Narendra2021PRApplied Hegade, N. N. et al. Shortcuts to adiabaticity in digitized adiabatic quantum computing, https://link.aps.org/doi/10.1103/PhysRevApplied.15.024038 Phys. Rev. Appl. 15, 024038 (2021). Esslinger2010FermiHubbardPW Esslinger, T. Fermi-Hubbard Physics with Atoms in an Optical Lattice, https://api.semanticscholar.org/CorpusID:119274107 Annu. Rev. Condens. Matter Phys. 1, 129-152 (2010). Jordan Nielsen, M. A. The Fermionic canonical commutation relations and the Jordan-Wigner transform, https://api.semanticscholar.org/CorpusID:199373281Technical Report, University of Queensland, 2005. Bravyi Bravyi, S. B. & Kitaev, A. Y. Fermionic Quantum Computation, https://doi.org/10.1006/aphy.2002.6254Ann. Phys. 298, 210–226 (2002). Seeley Seeley, J. T., Richard, M. J. & Love, P. J. The Bravyi-Kitaev Transformation for quantum computation of electronic structure, https://doi.org/10.1063/1.4768229J. Chem. Phys. 137, 224109. (2012). Tranter Tranter, A. et al. The Bravyi-Kitaev transformation: Properties and applications, https://doi.org/10.1002/qua.24969Int. J. Quantum Chem. 115, 1431–1441 (2015). McClean2020QST McClean, J. R et al, OpenFermion: the electronic structure package for quantum computers, https://iopscience.iop.org/article/10.1088/2058-9565/ab8ebc Quantum. Sci. Technol. 5, 034014 (2020). Hatomura1 Hatomura1, T. Scaling of errors in digitized counterdiabatic driving, https://iopscience.iop.org/article/10.1088/1367-2630/acfd51/metaNew J. Phys. 25, 103025 (2023). ji Ji, Y., Koenig, K. F. & Polian, I. Improving the Performance of Digitized Counterdiabatic Quantum Optimization via Algorithm-Oriented Qubit Mapping, https://arxiv.org/abs/2311.14624 arXiv preprint arXiv: 2311.14624 (2023). mindquantum Mindquantum team and collaborators, https://gitee.com/mindspore/mindquantum Open Source software at https://gitee.com/mindspore/mindquantum. Jiang2018PRA Jiang, Z. et al. Quantum algorithms to simulate many-body physics of correlated Fermions, https://journals.aps.org/prapplied/abstract/10.1103/PhysRevApplied.9.044036 Phys. Rev. Appl. 9, 044036 (2018). Sels Sels, D. & Polkovnikov, A. Minimizing irreversible losses in quantum systems by local counterdiabatic driving, https://doi.org/10.1073/pnas.1619826114Proc. Natl. Acad. Sci. USA 114, E3909 (2017). Pieter2019PRL Claeys, P. W., Pandey, M., Sels, D. & Polkovnikov, A. Floquet-Engineering Counterdiabatic Protocols in Quantum Many-Body Systems, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.123.090602 Phys. Rev. Lett. 123, 090602 (2019). duchi2011adaptive Duchi, J., Hazan, E. & Singer, Y., Adaptive subgradient methods for online learning and stochastic optimization, http://jmlr.org/papers/v12/duchi11a.html J. Mach. Learn. Res. 12, 2121-2159 (2011). dong Dong, X. et al. Mechanism of superconductivity in the Hubbard model at intermediate interaction strength, https://doi.org/10.1073/pnas.2205048119PProc. Natl. 
Acad. Sci. USA 119, e2205048119 (2022). wang Wang, X., et al. Experimental realization of an extended Fermi-Hubbard model using a 2D lattice of dopant-based quantum dots, https://www.nature.com/articles/s41467-022-34220-wNat. Commun. 13, 6824 (2022). Larocca Larocca, M. et al. Diagnosing barren plateaus with tools from quantum optimal control, https://doi.org/10.22331/q-2022-09-29-824Quantum 6, 824 (2022). krylov Takahashi, K. & del Campo, A. Shortcuts to adiabaticity in Krylov space, https://link.aps.org/doi/10.1103/PhysRevX.14.011032Phys. Rev. X 14, 011032 (2024). Barends_Nature2016 Barends, R. et al, Digitized adiabatic quantum computing with a superconducting circuit, https://doi.org/10.1038/nature17658 Nature 534, 222–226 (2016).
http://arxiv.org/abs/2405.08880v1
20240514180055
Direct CKM determination from W decays at future lepton colliders
[ "David Marzocca", "Manuel Szewc", "Michele Tammaro" ]
hep-ph
[ "hep-ph", "hep-ex" ]
=1
http://arxiv.org/abs/2405.09827v1
20240516055603
Parallel Backpropagation for Shared-Feature Visualization
[ "Alexander Lappe", "Anna Bognár", "Ghazaleh Ghamkhari Nejad", "Albert Mukovskiy", "Lucas Martini", "Martin A. Giese", "Rufin Vogels" ]
cs.CV
[ "cs.CV", "cs.LG" ]
High-level visual brain regions contain subareas in which neurons appear to respond more strongly to examples of a particular semantic category, like faces or bodies, rather than objects. However, recent work has shown that while this finding holds on average, some out-of-category stimuli also activate neurons in these regions. This may be due to visual features common among the preferred class also being present in other images. Here, we propose a deep-learning-based approach for visualizing these features. For each neuron, we identify relevant visual features driving its selectivity by modelling responses to images based on latent activations of a deep neural network. Given an out-of-category image which strongly activates the neuron, our method first identifies a reference image from the preferred category yielding a similar feature activation pattern. We then backpropagate latent activations of both images to the pixel level, while enhancing the identified shared dimensions and attenuating non-shared features. The procedure highlights image regions containing shared features driving responses of the model neuron. We apply the algorithm to novel recordings from body-selective regions in macaque IT cortex in order to understand why some images of objects excite these neurons. Visualizations reveal object parts which resemble parts of a macaque body, shedding light on neural preference of these objects. § INTRODUCTION Background. The primate visual system has evolved to process a highly diverse set of tasks and stimuli. In higher visual areas, specialized subregions exist, in which neurons preferentially discharge in response to images stemming from a particular semantic class. These areas are often hypothesized to underlie specific computations that may only be relevant for a specific semantic concept, but it is not entirely clear why they emerge. The most prominent category-selective brain regions consist of 'face-cells' ('face patches'), which on average fire more strongly when stimulated with faces than objects <cit.>, as well as body-selective regions which have been studied to a lesser extent <cit.>. For face-selective cells, it has been shown that object images also elicit responses, albeit more sparsely than face images <cit.>. Going further, <cit.> showed that models trained solely on non-face images could predict responses to face images in macaque face cells. Therefore, it has been argued that face-selective cells are not driven by the semantic concept of a face, but instead respond to visual features that are more common in face than object images. Hence, out-of-category (ooc.)
images may still activate otherwise category-selective cells, as long as relevant features are apparent in the image. As of yet, characterization of these features has remained relatively rough, largely confined to the finding that face cells tend to prefer round objects and body cells tend to prefer spiky objects <cit.>. Other work suggests that only a small fraction of response variance in face cells can be explained by simple shape features like roundness <cit.>. While these global tuning properties are useful to understand average preferences of patches, we therefore argue that more fine-grained feature characterizations are necessary to understand responses at the single-image and single-neuron level. Contribution. To address this gap, we propose a deep-neural-network-based method for visualizing features of an ooc. stimulus, which are responsible for eliciting high responses from an otherwise category-selective neuron. Our approach relies on analyzing the similarity of the image to a selected within-category image which displays similar features, as shown in Fig. <ref>. Specifically, after finding a within-category image with similar, neuron-specific features, we use gradient methods to highlight features extracted from a vision model which are shared between the two images and drive the neural response. This helps to visually answer the question why a feature detector that is preferably activated by within-class images, would also respond to the ooc. image in question. The procedure is entirely class-agnostic, and can therefore be used for any selective brain region, as well as for studying internal behaviour of artificial vision models. In this work, we show results for a novel set of multi-unit recordings from body-selective regions in macaque superior temporal sulcus. Body cells constitute a particularly interesting problem, as different body poses produce vast variability among body images, suggesting a rich set of features driving these neurons. We summarize our contributions as follows: * We present a novel method for visualizing features driving responses of category-selective neurons to out-of-category images, shedding light on why these neurons do not exclusively respond to within-category images, * We present results from multi-unit recordings from body-selective neurons in macaque IT cortex, demonstrating that these neurons encode overlapping visual features for bodies and objects, * We apply the proposed visualization method to the data, discussing why some non-body objects activate these neurons. § RELATED WORK Category-selective visual brain areas. The largest part of the body of work studying category selectivity in the brain is targeted towards face patches <cit.>. Several papers have come to the conclusion that face-selective neurons respond to visual features correlated with faces, rather than the semantic concept of a face. <cit.> generated maximally exciting images for face cells, which activated the neurons strongly but were not rated as face-like by human participants. <cit.> showed that face cells also respond selectively to pareidolia images, which are objects eliciting perception of a face in humans. Neurons still responded after the images were scrambled, destroying perception of a face, indicating that responses were driven by low-level features. Recently, <cit.> successfully predicted responses of face cells to face images, after training a linear readout model using solely non-face images. 
<cit.> proposed a unified view of IT cortex organization, arguing that selectivity was based on a small set of principal axes of image space. Reducing the dimension allows the authors to state that face cells prefer objects with high scores on principal components corresponding to 'stubby' and 'animate', whereas body-selective cells respond to 'spiky' and 'animate' objects. Recent work <cit.> challenges the hypothesis of shared coding principles between faces and non-faces in face cells, showing that computational mechanisms in these brain regions are far from being well understood. Our neurophysiological recordings add additional evidence to this debate. Highly relevant to our work is that of <cit.>, which visualized body cell responses to bodies and some objects by showing fragments of a highly activating image to determine most important image regions. This approach showed that a large proportion of cells respond to local body fragments. The main advantages of our computational approach are its high throughput, allowing us to visualize a large number of cells and objects simultaneously, and enhancing interpretability of features by analyzing them in the context of the preferred semantic class. Attribution methods. A substantial body of work exists on attributing behaviour of deep computer vision models to specific image regions. The most common goal is to understand why classification models output the observed class label, given an input image, meaning that attribution methods are often applied to the very last network layer. Several families of approaches have been proposed for tackling this problem: The first relies on image perturbations like occlusion <cit.>, the second is based on an analysis of latent activations <cit.>, and the third computes gradients of class activations w.r.t. pixel intensities <cit.>. More recent work <cit.> has shown that some of these methods rely too strongly on the input image itself, being independent of the network to varying degrees. Even though our reweighting scheme is compatible with any attribution method that is computable for latent activations, we therefore rely on vanilla backpropagation in this work, which has been shown to be sensitive to network weights. We provide results for additional attribution methods in the appendix. Further, relevant to our work is that of <cit.> which computes saliency maps for image similarity. The approach is similar to that of Grad-CAM, as it relies on analyzing feature activations before and after a global pooling layer at the end of the network. For that reason, the latter method is mainly suitable for visualizing global similarity as judged by a network trained to perform this task, rather than more local features relevant for biological neurons along the visual hierarchy in the primate brain. § METHODS We propose an approach for visual explanations of neural responses to out-of-category (ooc.) stimuli in category-selective visual brain areas. The categories currently of most interest in the high-level vision literature are faces and bodies, but the method can be applied to any semantic category. We leverage differentiable neuron models to explain high neural firing rates in response to an ooc. image by analysing visual similarity to an image from the preferred category which also drives the neuron. Formally, let x_out ∈ ℝ^3× h × w denote an ooc. image yielding high activity from a recorded category-selective neuron. The overall goal of this work is to determine the visual features of x_out driving the neural response.
We approach this problem in three steps: * We learn a linear readout vector w on top of a pretrained CNN f(·) to predict neural responses to within-category stimuli x_in,1, …, x_in,N. The learned vector then incorporates information about which features apparent in within-category images are relevant for the neural response. * We employ a neuron-specific similarity metric based on the learned readout to find a within-category image with similar visual features as x_out. Visual inspection of this reference image on its own can yield insights on why x_out activates the neuron. * We backpropagate gradients of CNN activations to the pixels of both images and reweight them to highlight features that are * present in x_out, * present in x_in, * highly relevant for the neural response. Explicitly highlighting shared features helps identify specific image regions responsible for the images' neuron-specific similarity. §.§ Modelling neural responses In recent years, a pretrained CNN backbone combined with a trainable linear readout module has become the gold standard in modelling the stimulus-driven variance in neural responses to images <cit.>. In these models, an image x ∈ ℝ^3× h × w is fed through a CNN f(·) up to a predetermined layer to obtain a latent representation a ∈ ℝ^c, where c is the dimension of the (commonly flattened) feature tensor. The predicted response for a neuron is then computed as ŷ = ⟨ a, w ⟩, where w ∈ ℝ^c is a learned weight vector, and ⟨·,·⟩ denotes the standard dot product. The vector w encapsulates information about which visual dimensions the neuron is tuned to, as a large w^(i) implies that increased activity in feature i will strongly increase the predicted neural response. Conversely, w^(i) = 0 implies no change in predicted activity if feature i is manipulated and thus such features can be ignored when trying to understand a neuron's response pattern (assuming a perfect model fit). Therefore, training a model to predict neural responses to within-category stimuli reveals which visual features drive responses within the category. If the model then generalizes to ooc. stimuli, we can infer that ooc. neural responses are driven by features present in both data distributions. §.§ Neuron-specific image similarity If an image strongly drives a neuron, we propose to interpret the relevant image features by studying a visually similar image from the category that the neuron is selective to. To quantify similarity, we adapt the common method of computing the cosine similarity of latent activations computed using a CNN <cit.>. Since we have additional information on which latent dimensions drive the neuron, we weight feature i by the corresponding weight w^(i). Formally, for images x_1, x_2 and their corresponding latent activations a_1 := f(x_1), a_2, we define the neuron-specific image similarity as s(x_1, x_2) := ⟨ a_1 ⊙ w, a_2 ⊙ w ⟩/||a_1 ⊙ w||_2 ||a_2 ⊙ w||_2, where ⊙ denotes the Hadamard product. Since w is often sparse, this formulation will effectively ignore most dimensions and focus solely on those that are highly relevant for predicting the neural response. For the downstream procedure of generating a visual explanation for the neural response to the image x_out, we simply select the most similar image from our within-category dataset, i.e. x_in = arg max_{x ∈ D_c} s(x_out, x), where D_c denotes the set of within-category images. §.§ Parallel backpropagation for shared-feature visualization Even though the images x_out and x_in ideally have a high similarity metric, the similar features are not always clearly identifiable.
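A minimal PyTorch sketch of the neuron-specific similarity and the reference-image selection defined above is given below. The feature dimension, the number of candidate images, and the random tensors are placeholders for the actual CNN activations and learned readout, so this is an illustration of the computation rather than the code used for our experiments.

import torch

def neuron_similarity(a1, a2, w, eps=1e-12):
    # s(x1, x2) from latent activations a1 = f(x1), a2 = f(x2) and readout weights w
    u, v = a1 * w, a2 * w
    return torch.dot(u, v) / (u.norm() * v.norm() + eps)

def select_reference(a_out, A_in, w):
    # pick the within-category image whose features are most similar to x_out;
    # A_in holds one row of latent activations per candidate image
    scores = torch.stack([neuron_similarity(a_out, a, w) for a in A_in])
    return torch.argmax(scores), scores

# hypothetical shapes: c latent features, N candidate body images
c, N = 2048, 475
a_out, w = torch.rand(c), torch.rand(c)
A_in = torch.rand(N, c)
best_idx, scores = select_reference(a_out, A_in, w)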
In some cases, the features might not be easily interpretable, or in other cases, several features might be shared upon visual inspection, such that it remains unclear whether all of them or a subset contribute to neural activity. In order to further highlight shared features contributing to model activity, we compute weighted gradients of the latent model activations w.r.t. the image pixels <cit.>. This method was originally introduced for classification models to highlight those image pixels that most strongly influence a given class probability. We build on this work to visualize shared features between images by introducing a simple reweighting procedure. Specifically, we strengthen the influence of features that are shared, while attenuating features that are specific to only one of the images. For the sake of brevity, we illustrate the procedure for the gradients of x_out only, as the process is analogous for x_in. First of all, note that the gradient of the predicted neural response ŷ_out is given by ∂ŷ_out/∂ x_out = ∂/∂ x_out⟨ a_out,w⟩ = ∑_i w^(i)∂/∂ x_out a_out^(i). Due to the sum rule of calculus, the gradient of the predicted response is given by a weighted sum of the gradients of the latent features, with w^(i) acting as weight for feature i. Hence, features that are highly relevant for model predictions will dominate the gradient. In turn, those pixels for which an infinitesimal increase would strongly enhance one of the highly relevant features will be assigned a high intensity when plotting the gradient over the image. For our purposes, this gradient map is not satisfactory. First of all, it does not take the other image into account at all, and thus fails to leverage the images' similarity structure. Further, since the gradient weights do not carry information on whether a feature is present in the image in the first place, it is theoretically possible to assign high intensity to pixels that would strongly increase features with low activity. To remedy these deficiencies, we propose an adjusted pixel saliency map I based on replacing the weights for the features in (<ref>). Before reweighting the features, we first smooth and normalize each gradient to have unit norm, i.e. ||∂/∂ x_out a_out^(i)||_2 = 1. Smoothing alleviates pixel-level noise and allows the user to determine regions of pixels with high contribution. It has been shown that a post-processing smoothing step substantially improves the quality of gradient-based attributions <cit.>. Normalization ensures that the pixel saliency map is not determined solely by the gradient magnitude, as some features may be more sensitive to pixel perturbation than others. More importantly, it allows us to later bound the saliency map from above, which substantially improves interpretability. Subsequently, we weight each feature by an asymmetric variation of its contribution to the neuron-specific similarity metric given in (<ref>). More specifically, we define the weight for feature i by its contribution to s(·,·), given by β^(i) := (a_in^(i) w^(i))(a_out^(i) w^(i)) / (||a_in⊙ w||_2 ||a_out⊙ w||_2), and finally the pixel saliency maps I(x_out;x_in, w) := ∑_i β^(i)∂/∂ x_out a_out^(i), I(x_in;x_out, w) := ∑_i β^(i)∂/∂ x_in a_in^(i). The revised weight vector β imposes high weights only on those features that are highly activated in the feature vectors of both images (captured by a_in^(i) and a_out^(i)), and are also relevant for the neuron's activity (captured by (w^(i))^2).
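Before turning to the norm bound, the reweighting scheme of the preceding equations can be sketched in PyTorch as follows. For tractability, the sketch uses a tiny random convolutional feature extractor in place of the truncated ResNet, restricts the sum to the features with the largest β, and applies a crude average-pooling smoother; these simplifications, together with all names and shapes, are our own assumptions and not the paper's implementation.

import torch
import torch.nn.functional as F

def beta_weights(a_in, a_out, w):
    # shared-feature weights beta^(i); assumes non-negative (ReLU) activations
    u, v = a_in * w, a_out * w
    return (u * v) / (u.norm() * v.norm() + 1e-12)

def saliency(feature_fn, image, beta, top_k=16):
    # reweighted pixel map: unit-norm per-feature gradients summed with weights beta;
    # only the top_k largest-beta features are used here to keep the sketch cheap
    image = image.clone().requires_grad_(True)
    a = feature_fn(image)
    idx = torch.topk(beta, min(top_k, beta.numel())).indices
    total = torch.zeros_like(image)
    for i in idx:
        g, = torch.autograd.grad(a[i], image, retain_graph=True)
        g = F.avg_pool2d(g.unsqueeze(0), 5, stride=1, padding=2).squeeze(0)  # crude smoothing
        total = total + beta[i] * g / (g.norm() + 1e-12)
    return total.detach()

# toy stand-in for the truncated CNN backbone (NOT the ResNet used in the paper)
torch.manual_seed(0)
conv = torch.nn.Conv2d(3, 8, kernel_size=5)
feature_fn = lambda img: F.relu(conv(img.unsqueeze(0))).mean(dim=(2, 3)).squeeze(0)

x_out, x_in = torch.rand(3, 64, 64), torch.rand(3, 64, 64)
w = torch.rand(8)
a_out, a_in = feature_fn(x_out).detach(), feature_fn(x_in).detach()
beta = beta_weights(a_in, a_out, w)
I_out = saliency(feature_fn, x_out, beta, top_k=4)
I_in = saliency(feature_fn, x_in, beta, top_k=4)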
Dividing by the squared norm allows us to bound the intensity of the saliency maps by writing ||I(x_out;x_in, w)||_2 = ||∑_i β^(i)∂/∂ x_out a_out^(i)||_2 ≤∑_i ||β^(i)∂/∂ x_out a_out^(i)||_2 = ∑_i β^(i)|| ∂/∂ x_out a_out^(i)||_2_=1 = ∑_i β^(i) = s(x_in, x_out). Note that this inequality only holds if a_in^(i)≥ 0 and a_out^(i)≥ 0 for all i=1,…,c, as this is needed to ensure β^(i) > 0. However, this constraint is usually satisfied as latent activation are commonly fed through a ReLU layer before extraction. The inequality yields that the total intensity of the pixel saliency map (for either image) as given by its L_2 norm is bounded by the neuron-specific similarity. Thus if the images are dissimilar, the saliency maps will have low intensity. Finally, note here that the reweighting procedure is agnostic to the way the salience map for each feature is generated. Any (backpropagation) pixel-attribution method may be used, as long as each latent feature is assigned one saliency map with unity norm. Running these steps successively for one recorded neuron yields as output one ooc. image and one within-category image, along with one saliency map per image. To study shared features, we display the images side-by-side with the saliency maps overlayed, as seen in Figs <ref> and <ref>. § EXPERIMENTAL SETUP Model architecture. For experiments, we use a Resnet-50 <cit.>, which was adversarially trained on ImageNet <cit.>, as our CNN backbone. This architecture has shown good fits to neural data in several studies <cit.>. To predict the response to an image, we feed it through the Resnet up to the last ReLU of layer 4.1. Subsequently, a Gaussian readout <cit.> reduces the dimension of the Resnet activations from c × h_a× w_a to c by selecting a learned spatial location in the latent feature map from which to extract the final features. The receptive fields of the extracted features cover the entire foreground of the images. Finally, a fully-connected layer maps these features to neural responses. Stimuli. For gathering neural data, we utilized two sets of images. The first consisted of 475 images of a macaque avatar on a gray background (see Fig. <ref>, <ref>). The images were subsampled from a set of 720 images in which the avatar appeared in 45 unique poses extracted from nine different behavioural classes, each shown from 16 viewpoints <cit.>. The second set comprised 6,857 objects from varying categories shown on the same gray background and was combined from the OpenImages dataset <cit.>, as well as several smaller ones <cit.>. To test for body selectivity, we used an additional set of 2068 control body images including a variety of species. These images were only used to test for category-selectivity of the recorded cells. All other references to body images in the text refer to the original image set showing the monkey avatar. All stimuli were centered with respect to the fixation point. They were shown to the monkey at a resolution of 280x280, and were resized to 224 for the Resnet. Neural data collection. We recorded multi-unit activity (MUA) from and surrounding two fMRI-defined body category-selective patches <cit.> in the macaque superior temporal sulcus (STS), using 16-channel Plexon V probes, while the subjects performed a fixation task. Since body patch neurons typically discharge sparsely for non-body objects, we employed online stimulus selection: In a first phase, we recorded responses to the set of 475 monkey avatar images. 
We then trained a model for each channel to predict neural responses to these images. Subsequently, we predicted neural responses to our set of object images, and for each neuron we selected the highest and lowest predicted activator, as well as the object most similar to the top-activating body image, according to s(·,·). We followed the same procedure for the control body images. Finally, in a second experimental phase we recorded responses to these novel images as well as a subset of 75 of the original body images, to test recording stability. We included recording channels in the analysis if the test/retest stability was higher than .60, as measured by the correlation between responses to body images before and after model fitting. Further, we tested each channel for body-category-selectivity by comparing the median response to the selected objects and the selected control bodies using a Mann-Whitney U-test. Further experimental details are given in the appendix. Fitting the neuron model. We split the monkey image set into a training/validation/test split consisting of 400/50/25 images. The parameters of the Gaussian readout location, as well as the linear weights, were trained simultaneously using the Adam optimizer <cit.>, to minimize the mean squared error between recorded and predicted responses. We augmented the training data by incorporating silhouettes of the monkey, for which the tuning of body patch cells in mid-STS has been shown to be largely preserved <cit.>. Further, we penalized the readout-weights using L_1/2 regression to sparsify the vector while still allowing for some large weights. We set the learning rate to 10^-4 and the weight of the regularization to 0.1 After training for 2500 epochs, we selected the model with lowest loss on the validation set. Importantly, the model was not trained to predict responses to non-body images, meaning that it must utilize the same visual features to predict bodies and objects. Models and training runs, as well as the visualization procedure were implemented in PyTorch <cit.>. To compute the Jacobian of latent features for visualization, we utilized the corresponding functionality of PyTorch's autograd. All experiments were run on a single Nvidia RTX 2080Ti. § RESULTS §.§ Model generalizes from bodies to objects After training the models on body images, we first test how well they generalize to images of objects. Model performance in terms of correlation between predicted and recorded neural response is shown in Fig. <ref> a). 93.7 / 95.3 % of channels in the posterior/anterior region exhibit a significant positive correlation, suggesting an at least partially class-agnostic feature preference. Interestingly, strong performance on the held-out monkey images does not seem to be a necessary condition for strong performance on objects. This effect may be caused by body images being highly similar due to fine-grained sampling of poses and viewpoints, compared to high feature variance among objects. §.§ Feature visualization Shared features between objects and bodies. Having found that there exists a set of visual features driving neural responses to both objects and bodies, we apply the proposed visualization method to characterize these features. Results are displayed in Figs. <ref> (posterior region MSB) and <ref> (anterior region ASB). We show results from multi-unit sites for which model performance on the out-of-category images was high (r>0.4), selecting a subset representing a wide range of highlighted features. 
For neighbouring recording sites, visualizations are often similar, likely due to similar feature preferences. For each channel, we display the image pair with highest neuron-specific similarity among the top-5 activating objects and top-15 activating bodies. Fig. <ref> b) demonstrates that the visualized objects activate the corresponding channels more strongly than the average body/object image in most cases. The method discovers a variety of shared features between highly activating bodies and objects. Most of them correspond to parts of the body rather than the entire body, which is aligned with previous findings for neurons from MSB.<cit.>. In fact, specific object parts seems to bear resemblance to specific body parts in the model's latent space. For example, extended structures appear to activate the same latent dimensions as arms/shoulders, so a model that relies on arms/shoulders to explain responses to body images also predicts strong responses to other extended structures. We observe objects driving the model due to similarity to limbs, tails, heads, torsos as well as more diffuse features which are more difficult to interpret. Interestingly, while a a lot of the observed features could be characterized as 'spiky', we also find neurons which are activated by stubby objects. The corresponding bodies show crouched poses without protruding extremities. This demonstrates that at the single-image and multi-unit level, tuning properties are more fine-grained than previously suggested <cit.>. In some cases, we observe the same object with different highlighted features in different recording channels, indicating that objects may have multiple features that activate different neurons. An example of this can be seen in Fig. <ref>, where the highlighted features of the same image correspond to a leg and a torso of the effective body images of two different channels. Additional results are given in the appendix. Objects without shared features. We find that for some object images eliciting a high neural response, the visualization method yields no shared features with any of the top-activating body images (Fig. <ref> c)). This could be either due to neurons preferring features not present in the original body image set, or due to tuning properties not captured by the model. These cases demonstrate the utility of the bound for the norm of the saliency maps given in (<ref>), as the lack of image similarity is clearly reflected by the visualization. § DISCUSSION Limitations. The proposed approach is made possible through the use of a differentiable neuron model, which means that the visualization quality depends on the ability of the model to capture neural tuning properties. As models of visual processing further improve in the future, we predict that visualization quality will improve accordingly. Further, the visualizations will reflect idiosyncrasies of the underlying attribution method used for generating saliency maps for latent features. Since this backbone can be chosen freely, advances on the topic of attribution methods will also improve our visualizations. Conclusion. We presented a method for visualizing selectivity of class-selective visual feature detectors when confronted with out-of-class images. Further, we showed that body-selective neurons encode bodies and objects using an at least partially shared feature set. We visualized these features, providing an explanation for why some objects activate body-selective neurons. 
Future work could involve using the same method for other category-selective areas, like face patches. Finally, these visualizations could be tested by presenting highlighted fragments to the subject in a closed-loop fashion. The authors thank Rajani Raman and Prerana Kumar for valuable discussions about this project. Further, the authors thank C. Fransen, I. Puttemans, A. Hermans, W. Depuydt, C. Ulens, S. Verstraeten, S. T. Riyahi, J. Helin, and M. De Paep for technical and administrative support. AL, AB, GKN, AM, LM, MG and RV are supported by ERC-SyG 856495. MG is supported by HFSP RGP0036/2016, BMBF FKZ 01GQ1704. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Alexander Lappe and Lucas Martini. Parts of the stimulus images are courtesy of Michael J. Tarr, Carnegie Mellon University, http://www.tarrlab.org/. Contributions. AL, AB, MG and RV conceptualized the study. AL developed and implemented the visualization method. AB, GGN and RV collected the neural data. AL and AB analyzed the data. AM, AB, LM and AL contributed to stimulus generation. AL wrote the initial draft of the manuscript, and all authors contributed to the final version. MG and RV supervised the project. § APPENDIX / SUPPLEMENTAL MATERIAL §.§ Additional Results §.§.§ Synthetic data To test our method on a wider set of semantic categories, we generate six synthetic, category-selective neurons. To do so, we gather a small set of within-category images x_in,1, …, x_in,N and a set of out-of-category images x_out,1, …, x_out,M from the stimulus sets used for the electrophysiological experiments. Feeding these through the CNN and sampling a spatial location using a Gaussian readout with random location preference yields activation matrices A_in∈ℝ^N× c and A_out∈ℝ^M× c. A synthetic neuron with readout weights w∈ℝ^c that on average prefers the within-category images can then easily be found by solving arg max_||w||_2 = 1 (A_inw)^⊺ A_inw - (A_outw)^⊺ A_outw = arg max_||w||_2 = 1 w^⊺ (A_in^⊺ A_in - A_out^⊺ A_out)w. This formulation is useful since the resulting Rayleigh quotient is maximized by setting w to the eigenvector corresponding to the largest eigenvalue of A_in^⊺ A_in - A_out^⊺ A_out <cit.>. Of course, the Resnet also contains category-selective neurons that do not need to be constructed artificially. However, we aim to make the experiment as similar to neural recordings as possible, where model neurons are usually given as a linear readout of latent activations. We observe that the visualizations clearly mark features that are common among within-class images, showing why the ooc. stimuli drive the otherwise category-selective neurons. §.§.§ Integrated Gradients Our visualization procedure is based on a weighted sum of a tensor containing saliency maps for all latent features. As discussed in the main text, the method is agnostic towards how the individual saliency maps are generated. For the main experiments, we use the vanilla gradient method, which results in reweighting the Jacobian of the latent features. Here, we experiment with using a different saliency backbone, namely Integrated Gradients <cit.>. For an image x, Integrated Gradients approximates the path integral of the gradients along the straight-line path from an image of zeros to x. Since Integrated Gradients was originally developed for visualizing scalar outputs, we adapt the formula from <cit.> to the multi-dimensional case to yield IntegratedGrads(x) = x ⊙1/m∑_k=1^m Jf (k/mx), where Jf(x) denotes the Jacobian of the CNN features w.r.t. x.
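For reference, a minimal sketch of this multi-dimensional variant is given below, again assuming f maps an image tensor to a flattened latent feature vector; the function name and the default number of steps are ours, and the element-wise multiplication with the input is kept behind a flag since it can be dropped.

import torch
from torch.autograd.functional import jacobian

def integrated_gradients_features(f, x, m=50, multiply_by_input=True):
    # Accumulate the Jacobian of the latent features along the straight-line
    # path from the zero image to x, evaluated at m interpolation points.
    c = f(x).numel()
    acc = torch.zeros(c, *x.shape)
    for k in range(1, m + 1):
        acc += jacobian(lambda z: f(z).flatten(), (k / m) * x)
    grads = acc / m                      # one attribution map per latent feature
    if multiply_by_input:
        grads = grads * x                # broadcasts over the feature dimension
    return grads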
Since <cit.> found that this formulation is heavily influenced by the element-wise multiplication with the input x, we omit this step in the computation. Results shown in <ref> demonstrate that the visualizations are almost indistinguishable from those computed using vanilla backpropagation. §.§ Additional details for neural data collection Experiment details. We recorded neuronal responses during passive fixation task using a 2x2 degree fixation window (EyeLink 1000 infrared eye tracker sampling at 1000 Hz). Stimuli were shown gamma-corrected on a 22.5-inch ViewPixx monitor at a distance of about 57 cm, with a resolution of 1920x1080 and a refresh rate of 120 Hz. A set of 475 stimuli of a monkey avatar in various poses were presented 8 times in a pseudorandom sequence for 200 ms on a gray background, with a 250 ms interstimulus interval. During the interstimulus interval, only a fixation dot was present and monkeys could receive a brief juice reward . Stimulus onset and offset were indicated by a photodiode, detecting luminance changes synced with the stimuli, in the display corner (invisible for the monkeys). Control of stimulus presentation, event timing, and juice delivery was managed by an in-house Digital Signal Processing-based computer system, which also monitored the photodiode signal and tracked eye positions. Neuronal data was collected from and surrounding two fMRI-defined body-selective patches in the ventral superior temporal sulcus (STS) using 16-site linear electrodes (Plexon V probe) with Open Ephys acquisition board and software (sampling rate: 30000 Hz, filtered between 500-5000 Hz). Multi-unit activities were extracted using Plexon Offline Sorter, after applying a high-pass Butterworth filter with a cutoff frequency of 250 Hz. From the stimulus event synchronized continuous data, 550 ms trials were extracted, comprising 200 ms prestimulus and 150 ms poststimulus periods. Responsive (at least for one stimulus showing stronger than 5 spikes/s net responses (baseline: -75 to +25 ms, response: +50 to +250 ms)), and body-selective MUAs (p<0.01 Kruskal-Wallis test), with a split-half reliability >0.5 (Spearman Brown corrected) were selected. The responses of these MUA sites were used to predict neural responses to our set of object and body images, and for each neuron we selected the highest and lowest predicted activator, as well as the object most similar to the top-activating avatar stimuli, according to s(·, ·). Finally, in a second experimental phase we recorded responses to these object and body images as well as a subset of the original monkey avatar stimuli, to test recording stability (same experimental design as in the first phase). For all neurons considered in this work, we tested for body-selectivity by comparing the median response to bodies to the median response to objects using a Mann-Whitney U-test, considering only channels for which the test detected a significant difference. Animals and husbandry. Two male 7 years old rhesus monkeys (Macaca mulatta) contributed to this study. The animals were housed in enclosures at the KU Leuven Medical School and experienced a natural day-night cycle. Each monkey shared its enclosure with at least one other cage companion. On weekdays, dry food was provided ad libitum, and the monkeys obtained water, or other fluids, during experiments until they were satiated. During weekends, the animals received water along with a mixture of fruits and vegetables. 
The animals had continuous access to toys and other forms of enrichment. After fMRI scanning, we implanted a custom-made plastic recording chamber, allowing a dorsal approach to temporal body patches. In each animal, the location of the recording chamber was guided by the fMRI body localizer (https://pubmed.ncbi.nlm.nih.gov/36717042/) Surgery was perfomed using standard aseptic procedures and under full anesthesia. Both the animal care and experimental procedures adhere to regional (Flanders) and European guidelines and have been approved by the Animal Ethical Committee of KU Leuven.
http://arxiv.org/abs/2405.09903v1
20240516084950
Federated Learning for Misbehaviour Detection with Variational Autoencoders and Gaussian Mixture Models
[ "Enrique Mármol Campos", "Aurora González Vidal", "José Luis Hernández Ramos", "Antonio Skarmeta" ]
cs.LG
[ "cs.LG", "cs.DC" ]
Federated Learning for Misbehaviour Detection with Variational Autoencoders and Gaussian Mixture Models Enrique Mármol Campos, Aurora Gonzalez-Vidal José L. Hernández-Ramos and Antonio Skarmeta Enrique Marmol, Aurora Gonzalez-Vidal José L. Hernández-Ramos and Antonio Skarmeta are with the University of Murcia, Spain. E-mail: {enrique.marmol, aurora.gonzalez2, jluis.hernandez, skarmeta}@um.es Accepted XXX. Received YYY; in original form ZZZ ======================================================================================================================================================================================================================================================================================================= Federated Learning (FL) has become an attractive approach to collaboratively train Machine Learning (ML) models while data sources’ privacy is still preserved. However, most of existing FL approaches are based on supervised techniques, which could require resource-intensive activities and human intervention to obtain labelled datasets. Furthermore, in the scope of cyberattack detection, such techniques are not able to identify previously unknown threats. In this direction, this work proposes a novel unsupervised FL approach for the identification of potential misbehavior in vehicular environments. We leverage the computing capabilities of public cloud services for model aggregation purposes, and also as a central repository of misbehavior events, enabling cross-vehicle learning and collective defense strategies. Our solution integrates the use of Gaussian Mixture Models (GMM) and Variational Autoencoders (VAE) on the VeReMi dataset in a federated environment, where each vehicle is intended to train only with its own data. Furthermore, we use Restricted Boltzmann Machines (RBM) for pre-training purposes, and Fed+ as aggregation function to enhance model’s convergence. Our approach provides better performance (more than 80%) compared to recent proposals, which are usually based on supervised techniques and artificial divisions of the VeReMi dataset. * Unsupervised Federated Learning (FL) approach considering non-iid data distributions. * FL approach based on Gaussian Mixture Models (GMM) and Variational Autoencoders (VAE). * Use of Restricted Boltzmann Machines (RBM) to improve the convergence of VAE in FL. * Use of Fed+ as aggregation function to overcome non-iid scenarios-related issues. * A feasible approach to improving the results of recent works using the VeReMi dataset. Federated Learning, Misbehavior Detection, Variational Autoencoders § INTRODUCTION The application of Machine Learning (ML) techniques for cyberattack detection has attracted significant interest in recent years <cit.>. However, most of the proposed approaches are based on centralized deployments, which are intended to manage massive amounts of data from different sources. Therefore, all the data produced by such sources is typically shared through a data center. This scenario sets out several issues related to the delay associated with the centralized reasoning process that is usually carried out in the cloud, as well as privacy concerns, especially in the case of sensitive data. Indeed, recent works <cit.> state the importance of protecting clients' personal information, and the need to develop distributed ML approaches to cope with the problems of centralized systems in terms of limited communication bandwidth, intermittent network connectivity, and strict delay constraints <cit.>. 
These aspects have motivated researchers to move toward more decentralized ML learning frameworks <cit.>. In this direction, Federated learning (FL) was proposed <cit.> as a collaborative ML approach where the different data sources (or clients) train a common model, which is updated through an aggregator entity. Therefore, data is not communicated to any external entity and clients benefit from the training of other clients without sharing any information about their dataset. The server aggregates all the weights using an aggregation function and sends the result to the clients so that they continue their training using the updated information. An example of a cybersecurity problem that has recently gained significant interest is the well-known misbehavior detection in vehicular environments which usually refers to the detection of vehicles transmitting false information that cannot be detected by typical cryptographic mechanisms <cit.>. However, the current state of the art for misbehavior detection is usually based on supervised learning techniques <cit.> that need labelled datasets to train a model. This process might require human intervention becoming a very resource-intensive and time-consuming activity to have numerous labelled examples in order to achieve a proper generalization <cit.>. In the case of vehicular environments, the creation of such labelled dataset could be infeasible due to the impossibility to reproduce a real scenario. Furthermore, supervised learning techniques are not appropriate to detect zero-day attacks, which refer to previously unknown threats. Instead, the use of unsupervised learning techniques could be used to mitigate such concerns <cit.>. In particular, unsupervised learning is used to extract information on the data’s structure and hierarchy by using the data samples without the need for ground truth files. The extracted knowledge representation can be used as a basis for a deep model that requires fewer labelled examples <cit.>. In spite of the advantages provided by unsupervised learning, we notice that most of the proposed approaches for misbehavior detection are based on supervised techniques using unrealistic data distributions considering an FL setting <cit.>. Indeed, some of the recent works <cit.> are based on artificial divisions of vehicular misbehavior datasets, such as the Vehicular Reference Misbehavior (VeReMi) dataset <cit.>. To fill this gap, our work proposes a combination of clustering techniques and autoencoders (AE) for misbehaviour detection. On the one hand, clustering algorithms use unlabelled data to create clusters that achieve high inner similarity and outer dissimilarity, without relying on signatures or explicit descriptions of attack classes. In particular, we choose Gaussian mixture models (GMM) as clustering technique since it adds probabilities, therefore, removing the restriction that one point has to belong only to one cluster. Additionally, GMM-based clustering also performs better in several scenarios where other clustering methods do not provide suitable results <cit.>. On the other hand, we use a specific type of AE called Variational Autoencoder (VAE) <cit.>. The main goal of an AE is to reconstruct the input data and the variational version is based on a stochastic encoder to map the input to a probability distribution. Since they achieve great success in generating abstract features of high-dimensional data, the detection of abnormal samples increases significantly <cit.>. 
Additionally, we implement Restricted Boltzmann Machines (RBM), which represent a type of neural network (NN) used as pre-training layer for the VAE to foster convergence during the federated training process. To the best of our knowledge, this is the first work exploring the use of such techniques in the scope of vehicular misbehavior detection. Furthermore, we use the VeReMi dataset for evaluation purposes by considering a realistic FL setting where each vehicle is intended to train with its local data. In our system, the server (or aggregator) is hosted in the cloud, which will serve as a centralized hub for refining and updating misbehavior detection models by the aggregation of the data process carried out by the vehicles acting as clients. By using a cloud-based approach, the system can easily scale to accommodate a growing fleet of vehicles, making it a practical solution for large deployments. Indeed, it is intended to store a track record of detected misbehaviour attacks, and to provide real-time monitoring and analysis of model performance across the entire set of clients. We carry out an exhaustive comparison between regular AE and VAE, and unlike most existing FL-enabled misbehavior detection approaches, we use Fed+ <cit.> as the aggregation function to deal with non-iid scenarios <cit.>. Our contributions are summarised as follows: * Unsupervised FL approach based on VAE and GMM considering a realistic partition of the VeReMi dataset for misbehavior detection. * Use of RBMs to enhance the convergence of VAEs in the FL setting. * Application of Fed+ as aggregation function to improve the effectiveness of the approach in the presence of non-iid data distributions. * Comprehensive evaluation to demonstrate the effectiveness of the proposed approach and comparison with recent works using the VeReMi dataset. The structure of the paper is as follows: Section <ref> describes the main concepts and techniques used in our work. Section <ref> analyses other works related to the application of GMM and AEs, as well as the use of AEs considering FL settings. Then, in <ref> we provide a detailed description of our misbehavior detection system. Section <ref> describes the results of our proposed approach and compares it with recent works using the same dataset. Finally, Section <ref> concludes the paper. § PRELIMINARIES In this section, we describe the main operation of FL, as well as the ML models we used in our approach, including GMM, and VAE. §.§ Federated learning (FL) Traditional ML approaches are characterized by centralized deployments where different parties are forced to share their data to be analyzed. Such a scenario poses significant issues around the needs of network connectivity, latency, as well as compliance with existing data protection regulations. To overcome such problems, FL <cit.> was proposed as a decentralised approach to train a ML model ensuring that data are not shared. Indeed, Federated learning (FL) scenarios are usually represented by two main components: the central entity (or server), and the data owners (or clients). Communication between these parts is essential for the correct creation of the model. The key of FL is that clients are intended to create a model in a collaborative way by only training with their local data and sharing the model updates through the server. A visual description of the FL operation is displayed in Fig. <ref> where (1) clients train the model using their dataset. 
Next, (2) they send the weights resulting weights from the local training to the server, which uses an aggregation function (3) to update the clients' incoming weights. Finally, (4) the server sends the aggregated weights to the clients, which starts again another training round through (1). For aggregating the weights in the server, our approach is not based on the well-known FedAvg function <cit.>, but on the Fed+ <cit.>, which is able to provide better performance in non-iid scenarios compared to FedAvg <cit.>. The main characteristic of Fed+ is that parties are not forced to converge to a single central point, as explained below. Hence, the weights are uploaded following the following equation: W^k+1 = θ[W^k - ν∇ f_k(W^k)] + (1-θ)Z^k, where ν is the learning rate, θ is a constant between 0 and 1 that controls the degree of regularisation, and Z^k is the average of clients' weights. As the new weights are an average of local weights and global weights, this means that each party has different weights and does not depend on the fact that all clients converge to the same point. §.§ Gaussian mixture models A Gaussian mixture model (GMM) is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. Formally, a GMM of K components is a parametric probability density function represented as a weighted sum of K normal distributions, i.e.: p(x|μ_1,…,μ_K,σ_1,…,σ_K,π_1,…,π_K)=∑_K π_i 𝒩(μ_i,σ_i) where μ_i are the means, σ_i the variances, π_i the proportions weights (which sum one and are positives), and 𝒩 is a Gaussian with specified mean and variance. A Gaussian distribution is completely determined by its covariance matrix and its mean. The covariance matrix of a Gaussian distribution determines the directions and lengths of the axes of its density contours, all of which are ellipsoids. There are 4 types of covariance matrix: full, tied, diagonal, and spherical. Full means the components may independently adopt any position and shape, tied means they have the same shape, but the shape may be anything, diagonal means the contour axes are oriented along the coordinate axes, but otherwise the eccentricities may vary between components, and spherical is a "diagonal" situation with circular contours. The purpose of using GMMs is the division of the dataset observations into K groups or clusters. Due to their probabilistic nature, GMMs differ from other clustering methods such as K-means <cit.> in the fact that an observation can belong to more than one cluster. More specifically, GMMs estimate the likelihood of every observation belonging to each component and assign the observation to the most likely one. In our work, we exploit these probabilities to classify samples as anomalous or not anomalous. Additionally, GMMs present other advantages such as adapting to the shape of the clusters, handling missing data, and taking into account the variance of the data. These properties are described in depth in <cit.>. §.§ Variational autoencoder (VAE) An autoencoder (AE) <cit.> is a particular case of NN where the input and output dimensions coincide, and its layer structure is symmetric. AEs are used to build models by using unlabelled data; therefore, they are an example of unsupervised learning techniques. An AE consists of an encoder and a decoder, as shown in Fig. <ref>. The encoder takes the input data and compresses it through the hidden layers into a lower dimension code, called latent space (Z). 
Then, the decoder reconstructs this latent space into its original state, that is, the input data. The main goal of AEs is to create a copy as close as possible to the original data. The difference between the input and output is called reconstruction error, which is usually measured using the Root Mean Square Error (RMSE). In a scenario for misbehavior or attack detection, training the model with only benign data will produce a high reconstruction error when attack data is passed. Hence, a threshold is set to decide which samples are benign or could be considered as an attack. A Variational AE (VAE) <cit.> is a type of AE whose encoding distribution is regularized, i.e., it approaches a standard normal (or gaussian) distribution during the training to ensure that its latent space is able to generate new data <cit.>. The main difference between a VAE and an AE is that the VAE is a stochastic generative model that provides calibrated probabilities, while a simple AE is a deterministic discriminative model without a probabilistic foundation. AEs are not suitable to generate new samples so it can cause the decoder to provide misleading results since all the information condensed in the latent space is disorganised. For that reason, the VAE is intended to map the encoder distribution q(Z|X) to a standard normal distribution for providing more order in the latent space. For this purpose, the reparameterization trick is carried out. From this distribution q(Z|X), it takes its mean Z_μ and standard deviation Z_σ. With these two latent variables, a sample is created to the latent variable Z that is sent to the decoder in order to create the predicted output X̂. This method also ensures that the network can be trained using backpropagation. Once the VAE has been described, the loss function has to be set. The main goal of VAE is to create a copy of an input vector. Hence, the RMSE has to be minimized. At the same time, as said previously, the model distribution needs to be as close as possible to a standard gaussian. For this, the Kullback–Leibler Divergence <cit.> (KL-divergence) function is used. This function measures how similar two distributions are. Hence, this difference between the encoder distribution and a standard distribution using the KL-divergence has to be as close as 0 as possible. Therefore, the loss function of VAE is: Loss(X,X̂) = Loss_RMSE(X,X̂) + KL(q(Z|X),𝒩(0,1)) One of the challenges associated with training AEs is the issue of convergence, especially when starting from different initial states. This problem becomes even more visible in a Federated Learning (FL) setting, where non-iid data distributions are prevalent among the participating devices or clients. To alleviate the convergence issues, researchers have turned to Restricted Boltzmann Machines (RBMs) as a promising solution. RBMs have gained recognition in the literature for their role in initializing AEs effectively, as proposed by Hinton and Salakhutdinov in their work <cit.>. RBMs are a type of stochastic neural network with a distinctive two-layer architecture. They are characterized by symmetric connections between the layers and the absence of self-feedback loops. Importantly, RBMs exhibit full connectivity between the two layers while having no connections within a layer. The two layers in an RBM are referred to as the visible layer and the hidden layer. The visible layer contains the original input data, while the hidden layer captures higher-level representations and features. 
This architecture makes RBMs well-suited for various applications, including feature extraction, pattern recognition, dimensionality reduction, and data classification. The energy function of an RBM is defined as: E(𝐯, 𝐡) = -∑_i=1^N_v∑_j=1^N_h w_ij v_i h_j - ∑_i=1^N_v a_i v_i - ∑_j=1^N_h b_j h_j Where: * 𝐯 is the visible layer's binary state vector with N_v neurons. * 𝐡 is the hidden layer's binary state vector with N_h neurons. * w_ij represents the weight connecting visible neuron i to hidden neuron j. * a_i is the bias of the visible neuron i. * b_j is the bias of the hidden neuron j. The joint probability of a configuration (𝐯, 𝐡) in an RBM is given by the Boltzmann distribution P(𝐯, 𝐡) = e^-E(𝐯, 𝐡)/Z. Where: - Z is the normalization constant (partition function) that ensures the probabilities sum up to 1 over all possible configurations. The conditional probability of the hidden layer given the visible layer is defined as P(𝐡 | 𝐯) = ∏_j=1^N_h P(h_j | 𝐯). And similarly, the conditional probability of the visible layer given the hidden layer is P(𝐯 | 𝐡) = ∏_i=1^N_v P(v_i | 𝐡). Where the conditional probabilities for binary units are typically sigmoid functions: P(h_j = 1 | 𝐯) = σ(∑_i=1^N_v w_ij v_i + b_j) P(v_i = 1 | 𝐡) = σ(∑_j=1^N_h w_ij h_j + a_i) Here, σ(x) is the sigmoid activation function: σ(x) = 1/1 + e^-x In this context, we adopt a similar strategy as outlined in previous studies <cit.>. Specifically, we employ RBMs for pre-training Variational Autoencoders (VAEs). This pre-training step helps the VAE initialize its parameters in a way that is more likely to converge effectively during subsequent fine-tuning. By leveraging RBMs for this purpose, we aim to enhance the overall robustness and convergence behavior of AEs within the FL paradigm, ultimately contributing to the successful deployment of federated machine learning systems in non-iid data environments. § RELATED WORK The use of ML techniques for misbehavior detection has attracted significant attention in recent years <cit.>. However, as previously described, most of the proposed works rely on centralized approaches using supervised learning techniques. In this direction, <cit.> uses six different supervised techniques along with plausibility checks to come up with a multiclass classification approach on the VeReMi dataset. Using the same dataset, <cit.> also discusses the use of different supervised techniques considering various feature extraction methods. Additionally, <cit.> uses an optimized version of Random Forest (RF), which is compared with other techniques such as K-Nearest Neighbors (KNN) and Decision Trees (DT). As an alternative to centralized ML approaches, recently the use of FL has been also considered in the scope of misbehavior detection for vehicular environments. Indeed, <cit.> proposes a federated approach based on neural networks to build a multiclass classification approach for detecting the attacks contained in the VeReMi dataset. Furthermore, <cit.> proposes a federated approach using blockchain where several supervised learning techniques are tested on an extended version of such dataset <cit.>. In the case of <cit.>, the authors propose an approach based on a semi-supervised model using a neural network and a subset of labeled data for the initial training phase. Then, new unlabeled data is used to improve the effectiveness of the system. The approach is also applied to the extension of the Veremi dataset. 
Unlike previous approaches, our work offers an unsupervised approach for detecting misbehavior using a model based on the combination of VAEs and GMM. Although this approach has been scarcely considered in this field, it has been widely used for detecting different types of attacks and anomalies. In this direction, a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection is proposed by <cit.>. DAGMM consists of a deep AE to reduce the dimensionality of input sample, and a GMM that is fed with the low-dimensional data provided by the AE. Furthermore, <cit.> describes an approach based on AE and GMM in which the objective functions’ optimization problem is transformed into a Lagrangian dual problem. Indeed, authors make use of the Expectation-Maximization (EM) algorithm. Like in the previous work, AEs are used for input’ dimensionality reduction. Then, an estimation network is intended to estimate samples’ density, so that samples with density higher than a certain threshold are identified as outliers. Moreover, the use of VAEs and GMM is considered by <cit.>. The VAE first trains a generative distribution and extracts reconstruction-based features. Then, the authors use a deep belief network to estimate the mixture probabilities for each GMM’s component. Based on these results, the GMM is used to estimate sample densities with the EM algorithm. While previous works are based on centralized settings, other recent approaches were proposed considering the use of VAEs/AEs and GMM on a FL scenario. Similar to the approach made by <cit.>, however, in the context of encrypted aligned data, the recent work <cit.> use VAE for reducing the dimension of the data, and then GMM is applied for clustering the data and selecting the correct samples. As we will see in section <ref>, the application on GMM without implementing VAE during the classification will lead to misclassification to a certain types of samples. Looking at works using AEs, a federated version of the previously described approach DAGMM is proposed by <cit.>, so that each client is intended to share the AE’s updated weights in each training round. Other works, such as <cit.> also use AEs in a federated scenario considering different contexts, such as wireless sensor networks. Like in the previous case, anomaly detection is based on the reconstruction error, so that samples with higher values are considered anomalies. Moreover, <cit.> proposes MGVN that represents an anomaly detection classification model using FL and mixed Gaussian variational self-encoder, which is built by using a convolutional neuronal network. In general, our analysis of existing literature reflects a lack of approaches applying unsupervised learning techniques for misbehavior detection and other cybersecurity-related problems. Furthermore, the described works are based on similar approaches in which AEs are used initially and the resulting output feeds the GMM. In our case, we initially train the GMM with the input data, and the results are used by the VAE to carry out a federated training among the vehicles acting as FL clients. To the best of our knowledge, this is the first work considering the use of GMM and VAEs in the contest of vehicular misbehavior detection. Furthermore, unlike existing works applying ML techniques in this context, our approach considers a non-iid data distribution scenario in which each vehicle trains on its local data. 
To mitigate the impact on non-iid in the approach’s performance, we further apply an approach to balance the dataset and use Fed+ as the aggregation function as described in the following sections. § PROPOSED MISBEHAVIOR DETECTION SYSTEM This section provides a detailed description about our unsupervised FL-enabled misbehavior detection approach. Furthermore, we describe the dataset used and how it is preprocessed below. §.§ Dataset and preprocessing For evaluation purposes, we use the VeReMi <cit.> and VeReMi extension <cit.> datasets. On one hand, VeReMi is a simulated dataset that was created using VEINS and the Luxembourg SUMO Traffic (LuST) scenario <cit.>. On the other hand, VeReMi extension make use of Framework For Misbehavior Detection (F²MD). F²MD is a VEINS extension that enables the recreation and detection of various misbehavior detection use cases. LuST is a traffic simulation scenario validated with real driving data. VeReMi utilizes a subsection of the LuST scenario. VeReMi dataset was created by simulating 225 scenarios considering 5 position forging attacks, 3 vehicle densities (low, medium and high), and 3 attacker densities (10, 20 and 30 percent), and each parameter set was repeated 5 times for randomization. Finally, the dataset contains the log messages of 498 vehicles. For VeReMi extension, it uses subsection of the LuST network with a size 1.61 km² and a peak density of 67.4 Veh/km². Furthermore, VeReMi extension contain two vehicle densities, one in the rush hour time (7h-9h), and another in low traffic time (14h-16h). Each file maintains a record of Basic Safety Messages (BSMs) received by a single vehicle (same ID) from neighbouring vehicles (300-meter range) during its entire journey. The files are converted to CSV format differentiating between benign and attack samples. Furthermore, we carry out a pre-processing of the VeReMi dataset to deal with the impact of non-iid data distributions on the effectiveness of the misbehavior detection approach. The main reason is that our approach (unlike other recent works, such as <cit.>) considers each vehicle as an FL client using its own data during the federated training process. Indeed, as shown in our previous works <cit.>, non-iid data distributions might have a dramatic impact on the effectiveness of the FL-enabled system. For this purpose, we use SMOTE-Tomek <cit.>, which represents a combination of oversampling and undersampling techniques to balance the class ratio. The new samples in the dataset are created through SMOTE by making linear combinations between close points of the less-represented class. Then, Tomek Links is in charge of removing neighbour points that belong to different classes in order to make the classes more differentiable. Initially, the balance of the clients' dataset is 75% benign-25% attack. After applying SMOTE-Tomek, the balance is nearly 50-50 in all clients. Finally, as explained in section <ref>, GMM are a sum of normal distributions. Therefore, the dataset has to follow a Gaussian distribution. In this sense, we normalise the dataset and check whether it fits to a Gaussian distribution by applying the Shapiro-wilk test <cit.>. After applying the Shapiro-wilk test we have a Shapiro-wilk score of 0.98 and p-value of 0.99>0.05, meaning that the dataset follows a Gaussian distribution. 
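As a rough sketch, this per-client preprocessing could be implemented as follows, using imbalanced-learn's SMOTETomek and SciPy's Shapiro-Wilk test; the use of StandardScaler for normalisation and the subsampling of values before the test are our assumptions, since these details are not fixed above.

import numpy as np
from imblearn.combine import SMOTETomek
from sklearn.preprocessing import StandardScaler
from scipy.stats import shapiro

def preprocess_client(X, y, random_state=0):
    # Rebalance the benign/attack classes to roughly 50-50.
    X_bal, y_bal = SMOTETomek(random_state=random_state).fit_resample(X, y)

    # Normalise the features (assumed: standardisation).
    X_norm = StandardScaler().fit_transform(X_bal)

    # Shapiro-Wilk normality check on a subsample of values
    # (the test is intended for moderate sample sizes).
    rng = np.random.default_rng(random_state)
    values = X_norm.ravel()
    sample = rng.choice(values, size=min(5000, values.size), replace=False)
    stat, p_value = shapiro(sample)
    return X_norm, y_bal, stat, p_value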
§.§ System description After describing the dataset and the pre-processing we carry out, below we provide a detailed explanation of the proposed system, including the different phases of the misbehavior detection approach. §.§.§ Overview A global overview of our system is shown in Fig. <ref>, which includes the relationship between the different components and techniques previously described. As already mentioned, vehicles act as FL clients performing local training by using the VeReMi dataset and the global model that is updated in each training round. In particular, our approach consists of 3 phases. Firstly, during Phase 1 (Initialization), clients train the GMM with their benign dataset. Then, this data is transformed into histograms that are used to create the initial weights for training the VAE by using the RBMs. For each sample, these histograms measure which features are within each group created by the GMM. Once each client has initialized its corresponding VAE, Phase 2 (Federated learning) is carried out by following the steps described in Section <ref>, so that each client performs local training using its own data in each training round. The server implements an aggregation method to aggregate the local results from the FL clients, and the resulting aggregated model is sent again to the FL clients to be updated through a new training round. While the vehicles themselves act as FL clients, the server is intended to be deployed on the cloud. Therefore, FL is employed to detect cyberattacks locally in these vehicles, and the server in the cloud can then aggregate data from various vehicles and further analyze it to gain a holistic view of the cybersecurity landscape. It should be noted that this cloud service could also be used to store information about vehicles that were identified as misbehaving entities by our detection system, allowing the system to keep track of potentially malicious vehicles. In Phase 3 (Local misbehavior detection), after the federated training process is finished, each client has a trained VAE to be used for classification purposes in order to detect potential misbehavior. In particular, the GMM's likelihood function <cit.> is used to evaluate whether or not a point belongs to one of the groups previously created in Phase 1. Depending on the likelihood value, the sample is considered a benign sample (greater than or equal to 1) or an attack (0). In case of a value between 0 and 1, a histogram is created for that sample, and the local VAE is used to classify it depending on a certain threshold value. These processes are further detailed in the following subsections. §.§.§ Model training As previously mentioned, the model training process is split into two main phases: Initialization and Federated learning. Before starting this process, it should be noted that, for each client, we pre-process its corresponding dataset as described in subsection <ref>, and we use 80% of the benign dataset to train the GMM with the number of components previously optimised. Toward this end, we use the Silhouette coefficient <cit.>, which represents a metric to calculate the performance of a particular clustering by examining how separated and compact the clusters are. This optimization leads to a different number of components for each client; as will be explained in Section <ref>, since the number of components defines the structure of the FL model, the clients are divided into groups according to their number of components.
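A sketch of this per-client component selection is shown below, assuming scikit-learn's GaussianMixture and silhouette_score; the search grid is illustrative (only the upper bound of 300 and the diagonal covariance type reported in the evaluation are taken from the text).

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

def optimal_gmm_components(X_benign, k_grid=range(2, 301), random_state=0):
    best_k, best_score, best_gmm = None, -1.0, None
    for k in k_grid:
        gmm = GaussianMixture(n_components=k, covariance_type="diag",
                              random_state=random_state).fit(X_benign)
        labels = gmm.predict(X_benign)
        if np.unique(labels).size < 2:   # silhouette needs at least two populated clusters
            continue
        score = silhouette_score(X_benign, labels)
        if score > best_score:
            best_k, best_score, best_gmm = k, score, gmm
    return best_k, best_gmm

Clients sharing the same best_k then train a common VAE architecture, since its input size is determined by the number of components.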
In this direction, although there will be several gruops, the model training scheme will be the same in each case. The model training process is further detailed in Algorithm <ref>. Firstly, in Phase 1, for each client n, GMM is used to divide the benign dataset B_n as many groups G_n=(g_n^k)_1^K_n as components K_n are previously computed (line 2). Then, the data is transformed into histogram vectors <cit.> (line 3), which are used in our previous works <cit.>. This technique creates a matrix H_n containing the vectors h^i = (h^i_1,h^i_2,…,h^i_K_n) (line 21), where K_n is the number of groups (equal to the number of GMM components), and each h^i_k is the number of features of sample i that are within each group g^k_n divided by the total number of features (line 15). To calculate it, for each feature j of sample i we check whether the value of this feature is within the dimension j of the cluster g_n^k (lines 12-14), i.e, whether is between the dimension j of the center of this cluster g_n^k minus the standard deviation and the dimension j of the center of the cluster g_n^k plus the standard deviation. The standard deviation means the standard deviation of the points of g_n^k Furthermore, it should be noted that function Dim_j(x) is only intended to take the coordinate j of point x. The resulting histogram H_n is used by the RBM to pre-train the VAE in order to get the initial weights W^0_n (lines 4-5) for the next steps. In Phase 2, the FL process is carried out by the different clients which have the same number of components using the local VAE models (line 7), whose input dimension is the number of components previously set. Each client is intended to train locally its corresponding VAE model and share the resulting weights in each training round (lines 22-32) with the server, as previously described in Section <ref>. Once this process is complete after a certain number of rounds, we set the threshold as: th = mean(RE)+0.01*std(RE) where mean(RE) and std(RE) are the mean and standard deviation of the VAE's reconstruction error (RE) respectively (line 10). We use this formula since it provides a value close to the 95-percentile. §.§.§ Local misbehavior detection After the previous steps are complete, each vehicle is equipped with a model to detect potential misbehavior detection (Phase 3 in Figure <ref>) based on the classification of samples during the testing process of the model. For this purpose, each vehicle employs the portion of benign data that was not used for training (i.e., 20%), as well as the same size of attack data. The process is detailed in Algorithm <ref>, which is based on the calculation of the likelihood function associated with the GMM for each sample of a vehicle's testing dataset (line 2). If this value is greater than or equal to 1 (line 3), it means that such sample belongs to one of the groups created by the GMM, so it is classified as benign (line 4). If it is equal to 0 (line 5), it does not belong to a GMM group, so it is classified as anomalous (line 6). Then, the samples whose value is between 0 and 1 (line 7) could represent attacks mirroring normal traffic <cit.>. These samples are transformed into the histograms (line 8) as previously described. Then, the VAE that was trained in the previous phase is applied to reconstruct the histogram (line 9); if the reconstruction error is higher than the threshold (line 11), it means that it is anomalous (line 12), otherwise, such sample is considered as benign data (line 14). 
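To make Phase 3 concrete, a minimal per-sample sketch of this decision rule could look as follows; it assumes a scikit-learn-style GMM whose score_samples returns log-likelihoods, a trained VAE whose forward pass returns (at least) the reconstruction of the histogram, and a helper to_histogram implementing the histogram construction above. The names are illustrative and not taken from the released implementation.

import numpy as np
import torch

def classify_sample(x, gmm, vae, threshold, to_histogram):
    # GMM likelihood of the raw sample (score_samples returns log-likelihoods).
    likelihood = float(np.exp(gmm.score_samples(x.reshape(1, -1))[0]))
    if likelihood >= 1.0:
        return 0                         # fits one of the benign clusters: benign
    if likelihood == 0.0:
        return 1                         # outside all clusters: anomalous
    # Borderline case: use the VAE reconstruction error of the histogram,
    # with threshold th = mean(RE) + 0.01 * std(RE) computed on training data.
    h = torch.tensor(to_histogram(x), dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():
        out = vae(h)
    h_rec = out[0] if isinstance(out, (tuple, list)) else out
    rec_error = torch.sqrt(torch.mean((h - h_rec) ** 2)).item()
    return 1 if rec_error > threshold else 0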
§ EVALUATION For the evaluation of our misbehaviour detection approach, we employ a virtual machine with processor Intel(R) Xeon(R) Silver 4214R CPU @ 2.40GHz with 24 cores and 196 GB of RAM. Furthermore, we use Flower <cit.> as the FL implementation framework, the version 0.18. Flower provides an end-to-end implementation and the possibility to customise several aspects of the model for performing a more exhaustive evaluation. All the parameters related to our federated scenarios are summarised in Table <ref>. Furthermore, it should be noted that we make the code and the dataset used publicly available for reproducibility purposes[<https://github.com/Enrique-Marmol/Federated-Learning-for-Misbehaviour-Detection-with-Variational-Autoencoder-and-Gaussian-Mixture-Mode>]. §.§ Client division The use of GMMs requires the specification of a certain number of components (clusters) before training the model. Furthermore, such number will also determine the input size of the AE/VAE. Instead of selecting a fixed number of components for all the clients, we compute the optimal number of components for each client using silhouette analysis as stated in Section <ref>. Consequently, clients are grouped based on their optimal number of components since different networks with different input sizes will be created (one per group). In order to apply the GMM, we have to specify the type of the covariance matrix. In our case, we choose the covariance type which fits better to our model, which is diagonal. Fig. <ref> shows the number of clients/cars that resulted in having a certain number of components as optimal. As shown, the most common number of components was 300. Hence, we will have as many federated scenarios as groups of cars are created based on their optimal number of components. As will be described in Section <ref>, we provide an extensive analysis for the group of clients whose optimal number of GMM components is 298. Such analysis is also applicable for each group of clients with the same number of components. In this sense, in order to avoid repeating the process 104 times (as it is the total number of groups), we also provide the final results for the rest of the groups with 5 or more clients (Section <ref>). The reason for choosing this grouping is to display a better picture comparing all clients' performance. Although the group with 298 components has fewer clients compared to the clusters of 300 and 299 components (20, 93, and 45 clients respectively), the clients' average sample size is similar between those groups (20453, 19983, and 21004 respectively), as well as the standard deviation (7677, 9760, and 10885). Therefore, the clients of this particular case have a similar distribution, and having a small number of clients can ease results' interpretation and plotting. Moreover, our analysis includes a comparison of the performance regarding the autoencoder process depending on the learning rate (lr) using an AE or VAE. It should be noted that both AE and VAE are built as similarly as possible to ease their comparison. Specifically, the input layer's size is the number of GMM components. In the encoder, the first hidden layer is half the size of the input layer. Then, the AE's latent space and the VAE's sample layer are third of the size of the input layer. Finally, the decoder has two layers: the first one is half of the size of the input layer, and the output layer has the size of the input layer. 
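For concreteness, a PyTorch sketch of this encoder/decoder layout is given below; the class name is ours, and the ReLU and sigmoid activations included here are the ones described next, so the snippet is an illustrative reconstruction rather than the released implementation.

import torch
import torch.nn as nn

class HistVAE(nn.Module):
    # k is the number of GMM components, i.e. the length of the histogram input.
    def __init__(self, k):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(k, k // 2), nn.ReLU())
        self.mu = nn.Linear(k // 2, k // 3)        # Z_mu
        self.logvar = nn.Linear(k // 2, k // 3)    # log(Z_sigma^2)
        self.dec = nn.Sequential(nn.Linear(k // 3, k // 2), nn.ReLU(),
                                 nn.Linear(k // 2, k), nn.Sigmoid())

    def forward(self, h):
        e = self.enc(h)
        mu, logvar = self.mu(e), self.logvar(e)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        return self.dec(z), mu, logvar

During training, this module would be paired with the RMSprop optimizer and the combined RMSE and KL loss introduced in Section <ref>.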
All these layers use ReLU as the activation function except the last one, which employs the sigmoid function, and both use RMSprop as the optimizer. For the rest of the clusters, we compare the results considering AE and VAE with the best lr. §.§ Particular case analysis: Clients with 298 components As shown in Fig. <ref>, 20 clients are grouped with 298 components. In this section, we analyse the impact of choosing between AE or VAE in the federated setting, and we also compare the results with the distributed scenario where clients only train locally, i.e., they do not share any result derived from such training with the server. It should be noted that we distinguish between AE/VAE accuracy (autoencoder part), the accuracy obtain during the application of AE/VAE in phase 3 in <ref>, and the general model's accuracy (total accuracy), which is the accuracy obtained based on the use of GMM and AE/VAE. The reason is that GMM process is nearly the same every round as it is not part of the federated training, so we analyse the impact of the AE/VAE model in the performance of the model. Table <ref> shows the accuracy values of the autoencoder part (that is, using AE/VAE) considering different lr, marking in bold the best lr for each method. According to the results, we best value obtained is for VAE when lr=0.05. In particular, considering the best case for each setting, the federated VAE reaches a value of 0.972 (lr = 0.05), whereas the distributed VAE, federated AE, and distributed AE reach 0.928 (lr = 0.001), 0.923 (lr = 0.005), and 0.936 (lr= 0.005) respectively. Comparing the performance in particular of the best cases of federated VAE and federated AE, in Fig. <ref>, we compare the accuracy of each client specifically. In this figure, we see that in almost all clients, the VAE reaches better performance than the AE. Having set the best lr for the VAE, Fig. <ref> shows the final metrics of the general model, that is, considering the metrics obtained by applying GMM and the VAE. The accuracy of the model is 0.824, and the recall, precision, and f1-score are 0.968, 0.672, and 0.775 respectively. In order to justify the grid of GMM components chosen in Section <ref>, Table <ref> provides a comparison of the time consumed by the principal processes of our approach depending on the number of the GMM components and the accuracy achieved. As the number of components increases, the time required is higher, especially because of the silhouette analysis. Nevertheless, although the accuracy also grows, the increase of each step is getting lower, from 0.62 to 0.824. Hence, we set the maximum number of components of the grid at 300, since a wider range will consume an enormous amount of time for a reduced improvement in terms of accuracy. Finally, we also compare our approach with a scenario where the GMM and histograms are not used. In this case, the VAE is applied as a baseline employing the benign dataset, so potential misbehavior is detected using the reconstruction error. This method reaches 0.587 of accuracy, which supports the need to use the combination of GMM and histograms with the VAE. §.§ General case: clusters with 5 or more clients This section shows the results for all clusters (any number of components) of Fig. <ref> that contain 5 or more clients. Fig. <ref> and Fig. <ref> provide the accuracy of the general model and the autoencoder part respectively, comparing when AE or VAE is used. For each case, we select the best lr. 
As shown in these figures, the results using the VAE are higher than those of the AE for almost every cluster. For the sake of completeness, we show the comparison of our method using FedAvg as the aggregation function in phase 2, mentioned in Section <ref>. Fig. <ref> and <ref> show the total accuracy of the VAE and GMM, and the accuracy of the VAE, using either Fed+ or FedAvg. The lr values used in these figures are the same as those used in Fig. <ref> and <ref>. In these figures, we can clearly see that in all clusters FedAvg achieves worse performance than Fed+. Furthermore, we compare our evaluation results with recent works on the same dataset. In particular, <cit.> uses several supervised ML algorithms, including Support Vector Machines (SVM), K-Nearest Neighbour (KNN), Naïve Bayes, and Random Forest. In general, we achieve similar results but with clearly higher values for certain metrics, such as recall and f1-score, for which they achieve 0.5 and 0.6 respectively, whilst we get 0.96 and 0.77, respectively. Furthermore, it should be noted that the mentioned work achieves those results using supervised learning approaches and an artificial division of the VeReMi dataset, which is split into five equally balanced clients, an iid division that does not reflect real-case scenarios. In our approach, clients' data are based on the vehicles in the VeReMi dataset, so that each vehicle only trains with its local data. In this sense, our division corresponds to a more realistic scenario, which is characterized by non-iid data distributions. We also compare our unsupervised FL approach with the case of using a supervised MLP, which provides an accuracy value of 0.88. It should be noted that even in this case with non-iid data, the accuracy value is higher compared with the work previously mentioned. The main reason is that we mitigate this issue by using SMOTE-Tomek to obtain a more balanced dataset, and Fed+ as the aggregation function. § CONCLUSIONS AND FUTURE WORK This work proposed an unsupervised FL approach for misbehaviour detection in vehicular environments based on the use of GMM and VAE. The resulting system was exhaustively evaluated considering a realistic scenario where each FL client is intended to train with its local data. Unlike most existing approaches, we deal with non-iid data distributions and convergence issues by considering dataset balancing techniques, as well as alternative aggregation functions. Our evaluation also measures the impact of the learning rate on the federated training process, and compares the approach with recent works using the VeReMi dataset. According to the results, our proposed system provides a performance close to supervised approaches, which require labelled datasets for training. As part of our future work, we plan to build VAEs to identify different classes of misbehaviour, that is, beyond binary classification. Furthermore, as the choice of the learning rate could have a significant impact on the performance, we will design an approach to dynamically set the learning rate throughout the training rounds. Additionally, we also plan to analyze the possibility of dynamically selecting specific clients to reduce the bandwidth required during the federated training process. § ACKNOWLEDGEMENTS This work has been sponsored by the EC through H2020 ERATOSTHENES (g.a. 101020416) and the European Union's Horizon Europe research and innovation funding programme under the Marie Skłodowska-Curie grant agreement No 101065524. 
It also forms part of the ThinkInAzul programme and was supported by MCIN with funding from European Union NextGenerationEU (PRTR-C17.I1) by Comunidad Autónoma de la Región de Murcia - Fundación Séneca as well as by the HORIZON-MSCA-2021-SE-01-01 project Cloudstars (g.a. 101086248) and ONOFRE Project PID2020-112675RB-C44 funded by MCIN/AEI/10.13039/501100011033.
http://arxiv.org/abs/2405.10085v1
20240516132918
Impact of medium temperature heat treatment on flux trapping sensitivity in SRF cavities
[ "Pashupati Dhakal", "Bashu Dev Khanal", "Eric Lechner", "Gianluigi Ciovati" ]
physics.acc-ph
[ "physics.acc-ph", "cond-mat.supr-con" ]
Impact of medium temperature heat treatment on flux trapping sensitivity in SRF cavities [The work is partially supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award No. DE-SC 0009960. The manuscript has been authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.] P. Dhakal1,2 dhakal@jlab.org, B. D. Khanal2, E. Lechner1, and G. Ciovati1,2 1Thomas Jefferson National Accelerator Facility, Newport News, VA 23606, USA 2Center for Accelerator Science, Department of Physics, Old Dominion University, Norfolk, VA 23529, USA The effect of mid-T heat treatment on flux trapping sensitivity was measured on several 1.3 GHz single-cell cavities subjected to vacuum annealing at temperatures of 150 - 400 ^∘C for a duration of 3 hours. The cavities were cooled down with residual magnetic fields of ∼0 and ∼20 mG in the Dewar under the condition of full flux trapping. The quality factor as a function of accelerating gradient was measured. The results show the correlation between the treatment temperature, quality factor, and sensitivity to flux trapping. Sensitivity increases with increasing heat treatment temperatures within the range of (200 - 325 ^∘C/3h). Moreover, variations in the effective penetration depth of the magnetic field and the density of quasi-particles can occur, influencing alterations in the cavity's electromagnetic response and resonance frequency. § INTRODUCTION The current leading technique in processing superconducting niobium surfaces for SRF cavities involves surface engineering through impurity alloying via thermal treatment. This method, considered state-of-the-art, takes advantage of the introduction of impurities from external sources such as titanium or nitrogen, or the presence of native niobium oxides within the rf penetration depth <cit.>. However, impurities play a crucial role in increasing the density of pinning centers, enhancing susceptibility to flux trapping during cavity cooldown due to an incomplete Meissner effect. To optimize the process, it is essential to minimize flux trapping sensitivity, resulting in a higher quality factor and an increased accelerating gradient. A recent advancement in cavity heat treatment, known as "mid-T" baking, capitalizes on the naturally formed oxide on the surface of the Nb cavity <cit.>. This oxide is then dissolved into the bulk material. The diffusion of oxygen is temperature- and time-dependent, influencing the concentration of interstitial oxygen. Developing processes with lower flux trapping sensitivity is paramount for achieving higher quality factors and accelerating gradients. In this manuscript, we present the results of several SRF cavity tests subjected to mid-T heat treatment in the furnace. The quality factor as a function of accelerating gradient, as well as the flux trapping sensitivity, was measured. Additionally, the change in resonant frequency near the critical temperature T_C was measured to confirm the change in electronic properties as a result of heat treatment. 
§ EXPERIMENTAL SETUP Two 1.3 GHz TESLA shaped single cell cavities, namely TE1-05 and TE1-06 were used for this current study. The cavities were previously heat treated at higher temperature (1000 ^∘C) followed by 25 μm electropolishing to ensure that the cavity provides good flux expulsion <cit.>. Before mid-T heat treatment, the cavities were high pressure rinsed with de-ionized water, dried in a class 10 clean room, installed clean Nb foils over the beamtube flanges. The furnace temperature was increased at a rate of ∼ 2-5 ^∘ C / min until it reached the target temperature. The furnace was held at the desired temperature for 3 hours and cooled to room temperature. The cavities were subjected to standard cavity processing techniques of high pressure rinse and assembly. Three flux gate magnetometers were taped at the equator 120^∘ apart each and parallel to the cavity axis. The cooldown of cavity was done by maintaining residual magnetic flux density of ∼0 and ∼20 mG in the Dewar with cooldown condition of full flux trapping. The RF measurements consist of Q_0(T) at low peak rf field B_p∼ 15  mT and Q_0(B_p) at 2.0 K. Furthermore, the resonant frequency and quality factor were measured during the cavity warm up through T_c. After each rf test, the cavity's rf surface is reset by ∼25 μm electropolishing, before next heat treatment. § EXPERIMENTAL RESULTS §.§ rf results The R_s(T) data were fitted using the model presented in Ref. <cit.> to extract the temperature-dependent R_BCS and temperature-independent residual resistance R_i. The flux trapping sensitivity is calculated using, S = R_i,B_2-R_i,B_1/B_2-B_1. where B_1 ≈ 0 mG and B_2 ≈ 20 mG. Figure <ref> shows the flux trapping sensitivity as a function of heat treatment temperature. Sensitivity increases with the increase in the temperature of heat treatment with the maximum at ∼ 250 ^∘ / 3h. On further increase in temperature the flux trapping sensitivity decreases gradually. The flux trapping sensitivity follows the oxygen concentration within 100 nm calculated using oxygen diffusion model <cit.>. Figure <ref> shows the Q_0 vs. E_acc at 2.0 K for both cavities. The baseline cavity was limited to ∼ 30 MV/m with a high field Q-slope. In the same cavity, when subjected to 150 ^∘C/3 h of heat treatment, the cavity gradient reached to ∼ 44 MV/m before it quenched. Eliminating the high-field Q-slope with low-temperature baking has been a proven technique to improve the accelerating gradient <cit.>. With a further increase in the temperature of the heat treatment, the quality factor increases with increasing accelerating gradient with Q_0 reaching ∼ 4 × 10^10 at E_acc∼ 20 MV/m for 275 - 325 ^∘ C/3h with a lower quench gradient (∼ 25 MV / m) compared to the baseline test. On further increase in heat treatment temperature, the overall Q_0 gets lower while the high field Q-slope starting at ∼ 25 MV/m. For all tests, the measurements were repeated with ∼ 20 mG of residual magnetic flux density trapped during the cavity cooldown. The accelerating gradient dependence of flux trapping sensitivity was calculated using S = R_s,B_2-R_s,B_1/B_2-B_1. where R_s = G/Q_0, B_1 ≈ 0 mG and B_2 ≈ 20 mG. Figure <ref> shows the flux trapping sensitivity as a function of accelerating gradient. As shown in Fig. <ref>, the flux trapping sensitivity increase with the increase in heat treatment temperature with highest sensitivity for 250 ^∘C/3h heat treatment. On further increase in temperature, the gradient dependence of sensitivity decreases. 
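The sensitivity definition above translates directly into code. The following Python sketch is purely illustrative; the function name and the residual-resistance values are ours and do not correspond to the measured data.

```python
# Sensitivity to trapped flux, S = (R_i(B2) - R_i(B1)) / (B2 - B1),
# with residual resistances in nOhm and magnetic flux densities in mG.
def flux_trapping_sensitivity(R_i_B1, R_i_B2, B1=0.0, B2=20.0):
    return (R_i_B2 - R_i_B1) / (B2 - B1)

# Hypothetical example: 4 nOhm at ~0 mG and 20 nOhm at ~20 mG trapped flux
S = flux_trapping_sensitivity(4.0, 20.0)   # 0.8 nOhm/mG
```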
§.§ f and Q_0 during warm up After the completion of the rf test, the cavity was connected to a network analyzer to measure the resonant frequency and quality factor as a function of temperature. To correct the Dewar pressure fluctuation, the pressure sensitivity was corrected by tracking the frequency with respect to the Dewar pressure at a given temperature. All data were corrected for at 1 atm (760 Torr) pressure. Figure <ref> shows the change in frequency Δ f = f - f_6 K and R_s = G/Q_0 as a function of temperature. The change in Δ f and R_s is clearly seen as a result of heat treatments. The baseline EP cavity (TE1-05) does not show any dip near T_c, whereas both the Δ f and depth of the dip increase as the heat treatment temperature increases up to ∼ 275 ^∘C/3h. On further increase in temperature, both Δ f and depth of the dip decrease, as shown in Fig.<ref>. The effect of the current heat treatment process not only changes the frequency behavior, but also has an effect on R_s and T_c, as shown in Fig.<ref>. § DISCUSSION AND SUMMARY The enhancement of the quality factor through Q-rise is evident following heat treatments within the temperature range of 200 to 400 ^∘C over a duration of 3 hours. In particular, a higher Q_0 manifests itself when the electronic mean free path is comparable to the superconducting coherence length. The introduction of oxygen reduces the electron mean free path thereby increasing the penetration depth, but decreasing the dissipative conductivity. The results shown here are in qualitative agreement with the a recent characterization of the oxygen diffusion process <cit.> based on Ciovati's model<cit.>. It is interesting to note that the baking around 150 ^∘C/3hr ameliorated the high field Q slope. In the context of shallow impurity diffusion profiles, where the peak supercurrent density is deformed and pushed away from the surface, this result is in qualitative agreement with the expected optimal baking time and temperature <cit.>. It should be emphasized that cavities treated within the temperature range of 250 to 325 ^∘C/3h exhibit a tendency to quench at lower gradients (20 - 25 MV/m) compared to cavities treated outside of this temperature range. Furthermore, several instances of strong multipacting were observed at this gradient range in contrast to the baseline EP'ed cavities, suggesting a possible surface modification due to heat treatment, which causes a change in secondary electron emission yield. As anticipated, heat treatment causes the change in local mean free path as well as spatial distributions of impurities, oxides, dislocation networks <cit.>. The sensitivity to flux trapping increases with rising heat treatment temperatures within the range of (200 - 250 ^∘C/3h). It is possible that within this temperature range, surface oxide fails to fully dissolve, resulting in an accumulation of higher oxide nanoprecipates near the surface, thus promoting a higher density but weak pinning centers. However, when the temperature exceeds 250 ^∘C for the same duration, the breakdown of surface oxides and subsequent diffusion into the bulk decrease the pinning centers on the surface. Consequently, the sensitivity to flux trapping gradually diminishes with increasing heat treatment temperatures, as shown in Figs. <ref> and <ref>. The observation is further corroborated by the oxygen concentration profile as shown in Fig. <ref> with peak oxygen concentration at ∼ 250 ^∘C/3h heat treatment. The change in transition temperature as shown in Fig. 
<ref> is qualitatively in agreement with the literature values of reduction in T_c for solute solution of oxygen in niobium <cit.>. The redistribution of the oxide on the Nb surface due to heat treatment introduce an additional scattering mechanisms and modify the electronic characteristics of the superconducting material in proximity to the transition temperature. Consequently, variations in the effective penetration depth of the magnetic field and the density of quasi-particles can occur, influencing alterations in the cavity's electromagnetic response and resonance frequency <cit.>. § ACKNOWLEDGMENTS We would like to acknowledge Jefferson Lab SRF production technical staff members for the cavity fabrication, chemical processing, heat treatment, clean room assembly, and cryogenic support. booljacowbiblatex 9 dhakal13 P. Dhakal et al., "Effect of high-temperature heat treatments on the quality factor of a large-grain superconducting radio-frequency niobium cavity", Phys. Rev. ST Accel. Beams 16, 042001 (2013). <doi:10.1103/PhysRevSTAB.16.042001> anna A. Grassellino et al., “Unprecedented quality factors at accelerating gradients up to 45 MVm^-1 in niobium superconducting resonators via low temperature nitrogen infusion,” Supercond. Sci. Technol., 30, 094004 (2017). <doi:10.1088/1361-6668/aa7afe> dhakalreview P. Dhakal, "Nitrogen doping and infusion in SRF cavities: A review", Physics Open, 5, 100034 (2020). <doi:10.1016/j.physo.2020.100034> posen S. Posen et al., "Ultralow surface resistance via vacuum heat treatment of superconducting radio-frequency cavities", Phys. Rev. Applied 13, 014024 (2020).<doi:10.1103/PhysRevApplied.13.014024> ito H. Ito, H. Araki, K.Takahashi, and K. Umemori, "Influence of furnace baking on Q–E behavior of superconducting accelerating cavities", Prog. Theor. Exp. Phys., 071G01 (2021). <doi:10.1093/ptep/ptab056> eric E. M. Lechner et al., "RF surface resistance tuning of superconducting niobium via thermal diffusion of native oxide", Appl. Phys. Lett. 119, 082601 (2021). <doi:10.1063/5.0059464> feisi F. He et al., "Medium-temperature furnace baking of 1.3 GHz 9-cell superconducting cavities at IHEP", Supercond. Sci. Technol. 34, 095005 (2021). <doi: 10.1088/1361-6668/ac1657> zhitao Z. Yang et al., "Surface resistance effects of medium temperature baking of buffered chemical polished 1.3 GHz nine-cell large-grain cavities", Supercond. Sci. Technol., 36, 015001 (2023).<doi:10.1088/1361-6668/aca12a> bashuSRF23 B. D. Khanal and P. Dhakal, “Evaluation of Flux Expulsion and Flux Trapping Sensitivity of SRF Cavities Fabricated from Cold Work Nb Sheet with Successive Heat Treatment”, in Proc. 21th Int. Conf. RF Supercond. (SRF'23), Grand Rapids, MI, USA, Jun. 2023, pp. 197-201. <doi:10.18429/JACoW-SRF2023-MOPMB042> gigi14 G. Ciovati, P. Dhakal, and A. Gurevich, "Decrease of the surface resistance in superconducting niobium resonator cavities by the microwave field", Appl. Phys. Lett. 104 (9), 092601 (2014). <doi:10.1063/1.4867339> GigiAPL G. Ciovati, "Improved oxygen diffusion model to explain the effect of low-temperature baking on high field losses in niobium superconducting cavities", Appl. Phys. Lett. 89, 022507 (2006). <doi:10.1063/1.2220059> ericjap E. M. Lechner et al., "Oxide dissolution and oxygen diffusion scenarios in niobium and implications on the Bean–Livingston barrier in superconducting cavities", J. Appl. Phys. 135, 133902 (2024). <doi:10.1063/5.0191234> gigijap G. 
Ciovati, "Effect of low-temperature baking on the radio frequency properties of niobium superconducting cavities for particle accelerators", J. Appl. Phys., 96, 1591 (2004). <doi:10.1063/1.1767295> bashuieee B. D. Khanal and P. Dhakal, "Insight to the duration of 120 ^∘C baking on the performance of SRF niobium cavities," in IEEE Transactions on Applied Superconductivity, 33(3), 1-6 (2023). <doi:10.1109/TASC.2023.3235311> dhakal20 P. Dhakal, G. Ciovati, and A. Gurevich. “Flux expulsion in niobium superconducting radio-frequency cavities of different purity and essential contributions to the flux sensitivity”, Phys. Rev. Accel. Beams, 23, 023102 (2020). <doi:10.1103/PhysRevAccelBeams.23.023102> desorbo W. Desorbo, "Effect of dissolved gases on some superconducting properties of niobium", Phys. Rev., 132, 107 (1963. <doi:10.1103/PhysRev.132.107> zarea M. Zarea, H. Ueki, and J. A. Sauls, "Electromagnetic response of disordered superconducting cavities" , Front. Electron. Mater. 3,1259401 (2023). <doi:10.3389/femat.2023.1259401>.
http://arxiv.org/abs/2405.09166v1
20240515080230
Resonance-Induced Anomalies in Temperature-Dependent Raman Scattering of PdSe$_{2}$
[ "Omar Abdul-Aziz", "Daniel Wolverson", "Charles Sayers", "Ettore Carpene", "Fulvio Parmigiani", "Hamoon Hedayat", "Paul H. M. van Loosdrecht" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
Universität zu Köln, II. Physikalisches Institut, Zülpicher Straße 77, Köln D-50937, Germany Department of Physics and Centre for Photonics and Photonic Materials, University of Bath, BA2 7AY Bath, UK Dipartimento di Fisica, Politecnico di Milano, 20133 Milan, Italy CNR-IFN, Dipartimento di Fisica, Politecnico di Milano, 20133 Milan, Italy Dipartimento di Fisica, Università di Trieste, via A. Valerio 2, 34127, Trieste, Italy hedayat@ph2.uni-koeln.de Universität zu Köln, II. Physikalisches Institut, Zülpicher Straße 77, Köln D-50937, Germany pvl@ph2.uni-koeln.de Universität zu Köln, II. Physikalisches Institut, Zülpicher Straße 77, Köln D-50937, Germany We report a comprehensive Raman study of the phonon behaviour in PdSe_2 in the temperature range of 5 K to 300 K. A remarkable change in the Raman spectrum is observed at 120 K: a significant enhancement of the out-of-plane phonon A^1_g mode, accompanied by a suppression of the in-plane A^2_g and B^2_1g modes. This intriguing behavior is attributed to a temperature-dependent resonant excitation effect. The results are supported by density functional theory (DFT) calculations, which demonstrate that the electron-phonon coupling for the phonon modes varies and is strongly associated with the relevant electronic states. Furthermore, nonlinear frequency shifts are identified in all modes, indicating the decay of an optical phonon into multiple acoustic phonons. The study of Raman emission reported here, complemented by linear optical spectroscopy, reveals an unexpected scenario for the vibrational properties of PdSe_2 that holds substantial promise for future applications in PdSe_2-based optoelectronics. § INTRODUCTION Transition metal dichalcogenides (TMDs) are a prominent category of van der Waals layered materials with outstanding optical and electronic properties and diverse correlated phases <cit.>. Among them, PdSe_2 has emerged as a material of particular interest due to its unique low-symmetry puckered atomic structure, which in the literature is sometimes referred to as pentagonal <cit.>, and is characterized by an orthorhombic lattice of Pbca space-group symmetry and D_2h point-group symmetry <cit.>. The distinctive bond arrangement within PdSe_2 leads to substantial interlayer interactions, resulting in significant layer-dependent electronic properties and anisotropic features <cit.>. Compared to conventional TMDs, PdSe_2 exhibits excellent stability in air, a unique linear dichroism conversion phenomenon, a high carrier mobility of ∼ 150 cm^2·V^-1·s^-1, and an exceptional long-wavelength infrared photoresponsivity <cit.>. PdSe_2 is a semiconductor with a bandgap in the infrared (IR) spectral region that is tunable from 0 to 1.3 eV when transitioning from its bulk to monolayer form <cit.>. Additionally, a sign change occurs in magnetoresistance at approximately 100 K, attributed to electronic anisotropy, despite PdSe_2 being a non-magnetic material <cit.>. These unique features make PdSe_2 a promising candidate for advanced polarization-sensitive photodetectors and other optoelectronic applications <cit.>. 
Raman spectroscopy has been recently used to characterize the basic vibrational properties of PdSe_2 <cit.>, however, a deeper understanding of the intricate electron-phonon interactions remains an area that deserves further exploration. For example, limited attention has been paid to the temperature-dependent behavior of Raman spectra in PdSe_2, with previous studies focusing primarily on temperatures above 300 K <cit.>. Despite demonstrating the anisotropy of lattice vibrations in PdSe_2 <cit.>, its connection to electronic states requires further clarification. Electron-phonon interactions within TMDs similar to other anisotropic-layered materials, such as black phosphorus <cit.>, give a strong Raman response. Therefore, Raman scattering is used as a powerful probe to gain insight into phonons and the way they interact with interband electronic transitions, uncovering the symmetry-dependent electron-phonon coupling of materials with anisotropic properties <cit.>. In these cases, the intensities of the phonon modes exhibit energy-dependent variations. A prototypical example is MoS_2, where the A_g mode shows an enhancement when the excitation energy coincides with the exciton states A and B <cit.>. This enhancement has been attributed to resonance-induced symmetry breaking, as strongly localized exciton wave functions effectively break the symmetry of the system, thereby activating normally Raman-inactive modes. This selective increase in the intensity of phonon modes has been linked to symmetry-dependent electron-phonon coupling, as demonstrated in resonance Raman studies of MoS_2 using different excitation energies <cit.>. Similar to other TMDs, PdSe_2 also exhibits a selective response to different excitation energies as recently shown by Luo et al. <cit.>. In this study, we present the Raman spectra of PdSe_2 using different polarization configurations. Furthermore, we offer a comprehensive analysis of the evolution of the Raman spectrum in the temperature range of 5 K to 300 K. We find that the Raman intensity of the A^1_g out-of-plane phonon modes experiences a significant enhancement at 120 K when excited with 2.33 eV, while the in-plane A^2_g and B^2_1g phonon modes are suppressed under the same conditions. This intriguing phenomenon can be understood as a result of resonance effects combined with diverse electron-phonon coupling of different modes, which were further confirmed by optical spectroscopy experiments. To gain a deeper understanding of the electron-phonon interactions responsible for these observations, we employed density functional theory (DFT) calculations. The calculations reveal details of how each phonon mode is coupled to the specific electronic orbitals. The results elucidate the intricate interplay between electronic states and phonons in PdSe_2, unveiling distinct electron-photon interactions associated with specific electronic states and phonon modes. Additionally, the study highlights phonon-phonon interactions spanning a wide temperature range that can be explained by a physical model that includes anharmonic contributions and offers an in-depth analysis of the nonlinear temperature-dependent Raman shifts of prominent optical phonon modes in PdSe_2. Such insights lay the groundwork for future developments in PdSe_2-based thermo-optoelectronic applications. § RESULTS AND DISCUSSIONS Figure <ref>(a), shows the comparative room temperature Raman spectra of bulk PdSe_2 using 532 nm (2.33 eV) laser excitation. 
The spectra were acquired in backscattering geometry with parallel (xx) and cross (xy) polarization configurations. We observe six characteristic Raman peaks corresponding to A_g-symmetry modes at 146.7, 209.4, and 260.6 cm^-1, and B_1g-symmetry modes found at at 148.7, 225.9, and 271.5 cm^-1. To validate the understanding of phonons in bulk PdSe_2, we performed calculations using density functional perturbation theory (DFPT) <cit.>. The phonon dispersion curves of bulk PdSe_2 along the high-symmetry crystallographic axes are presented in Figure <ref>(b), while Figure <ref>(c) presents the calculated atomic displacements corresponding to the observed Raman modes. Figure <ref>(b) also demonstrates that the phonon dispersion used in calculating the electron-phonon coupling via EPW <cit.> (discussed in the Supplementary Information) is consistent with the DFPT results. We find that the modes A_g^1, A_g^3, B_1g^1, and B_1g^3 arise primarily from nearly out-of-plane vibrations of Se atoms along the c-axis, while the modes A_g^2 and B_1g^2 predominantly stem from in-plane vibrations along the b- and a-axes, respectively. Note that the A_g and B_1g modes exhibit distinctive responses to laser polarization as a result of their different symmetry. B_1g modes are detected mainly in the cross-configuration, whereas the A_g modes are predominantly visible in the parallel configuration, allowing for their clear differentiation. Furthermore, our experimental resolution enables us to discern between the A^1_g and B^1_1g modes, thus clarifying what was reported in previous Raman studies in which a mixed mode was detected <cit.>. To understand the anisotropic optical properties, we investigated the dependence of the Raman scattering versus the polarization angle and crystal orientation. The results are reported in the inset of Figure <ref>(a). Under parallel conditions, the A^1_g mode intensity has maxima at 90^∘ and 270^∘, with a periodicity of 180^∘. The Raman intensity at a given angle θ can be fitted using curves of the form δ_(θ) = δ_a cos^2(θ + ϕ) + δ_b sin^2(θ + ϕ), where δ_a or δ_b represents the Raman intensity along the a and b crystal axes, and ϕ is the fitting parameter <cit.>. This reveals that the A_g modes exhibit two-fold symmetry, aligning with the inherent in-plane anisotropy of the structure of PdSe_2, where the a- and b-axes correspond to high and low Raman intensity of the A_g modes, respectively. The symmetry of the Raman modes is analyzed and the results are provided in the Supplementary Information. Figure <ref>(a) presents the Raman spectra of bulk PdSe_2 measured in parallel configuration, where the polarization of the incident light is taken along the a-axis of the crystal, as a function of temperature. Temperature variations induce noticeable alterations in the frequency and intensity of the three Raman-active phonon modes. All observed Raman peaks experience a blue shift, attributed to lattice thermal anharmonicity, which contains contributions from phonon-phonon scattering and volume thermal expansion effects. Interestingly, as the temperature decreases from 300 to 5 K, a prominent feature emerges, the clear visibility of the B^1_1g mode on the high-frequency side of the A^1_g which is highlighted in Figure <ref>(b). At 120 K, further changes become evident, with the peak A^1_g reaching its maximum intensity, while modes A^2_g and B^2_1g are suppressed. In this temperature range, the intensity of the Raman mode with a higher frequency A^3_g remains relatively constant. 
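The angular dependence δ_(θ) = δ_a cos^2(θ + ϕ) + δ_b sin^2(θ + ϕ) quoted above can be fitted with a standard least-squares routine. The sketch below uses SciPy; the intensity values are synthetic placeholders, not the measured data.

```python
# Fit of the two-fold angular dependence of the A_g Raman intensity.
import numpy as np
from scipy.optimize import curve_fit

def angular_model(theta_deg, delta_a, delta_b, phi_deg):
    t = np.deg2rad(theta_deg + phi_deg)
    return delta_a * np.cos(t) ** 2 + delta_b * np.sin(t) ** 2

theta = np.arange(0.0, 360.0, 15.0)                 # polariser angles in degrees
rng = np.random.default_rng(0)
intensity = angular_model(theta, 1.0, 0.3, 5.0) + 0.02 * rng.normal(size=theta.size)

popt, _ = curve_fit(angular_model, theta, intensity, p0=[1.0, 0.5, 0.0])
delta_a, delta_b, phi = popt     # delta_a/delta_b quantifies the in-plane anisotropy
```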
Additionally, a marked transformation in the Raman spectrum is observed at higher frequencies when the temperature decreases from 250 to 5 K. In particular, the Raman mode at 273 cm^-1, associated with the B^3_1g mode, appears in the parallel configuration similar to the B^1_1g mode, which is not normally allowed. This suggests either a subtle change in the symmetry or a resonant Raman scattering process as observed in the breakdown of polarization Raman selection rules in few-layer TMDs by resonant Raman spectroscopy <cit.> . To gain deeper insight into the anomalous phonon response of the A^1_g, A^2_g, A^3_g, B^1_1g and B^3_1g modes, we performed a Lorentzian lineshape fitting of the Raman spectra. In the following, we present the temperature dependence of Raman frequency, and full width at half maximum (FWHM) of each mode, while later we discuss the scattering strength obtained by the fittings. As shown in Figure <ref>(c), for T > 100 K, a decrease of temperature results in a linear blueshift of all Raman frequencies. Conversely, in the low temperature range, all Raman mode frequencies tend to reach constant values. The pronounced nonlinearity of the mode frequencies originates from the anharmonic phonon interactions <cit.>. Taking three- and four-phonon scattering processes into account, corresponding to cubic and quartic anharmonicities, the temperature dependence of the Raman mode frequencies can be described by the following relation: ω(T) = ω_0 + A (1+2/e^x-1) + B (1+2/e^y-1+ 3/(e^y-1)^2) Here, ω is the peak frequency which depend on temperature T, ω_0 is the bare phonon frequency at T=0 K, x = ħω_0/2k_bT, y = ħω_0/3k_bT, while A and B are the anharmonic constants for the three- and four-phonon processes, respectively<cit.>. At high temperatures, when considering only the first terms of a Taylor expansion, Eq.<ref> tends toward a linear dependence. The fit of Eq.<ref> to the Raman frequencies is presented in Figure <ref>(c) (solid lines). The behavior of the B_1g phonon modes is taken from the close-to-cross configuration where all phonon modes are observed simultaneously and is presented in the Supplementary Information. We also performed temperature-dependent Raman measurements on a trilayer PdSe_2 for comparison (see Supplementary Information). The fitted values of the anharmonic constants are presented in Table I. These coefficients are lower than those observed in bulk PdSe_2. The ratio A/B is high for phonons A_g and B_1g modes, due to the higher probability of decay process of optical phonons into two acoustic phonons than the three acoustic phonons. Additionally, the value of the A coefficient for A_g^1 is the smallest among all the modes. A similar non-linear temperature dependence of the Raman mode frequencies was also observed for other materials such as black phosphorus, MoS_2, and SnSe_2 nanosheets <cit.>. The behavior of Raman modes widths mirrors that of their frequencies as the temperature decreases: they decrease initially and then tend to saturate at temperatures below 100 K (see the Supplementary Information). Concerning the scattering strength of the phonon modes, Figure <ref> shows the temperature-dependent spectral intensity of the A_g^1, A_g^2, and A^3_g modes. Specifically, A_g^1 reaches its maximum intensity at 120 K, while A^2_g exhibits a zero intensity at the same temperature, followed by partial recovery. We note that this behavior was also observed for the B_1g modes in the close-to-cross-polarization configuration (see the Supplementary Information). 
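To make the extraction of the anharmonic constants concrete, the sketch below fits the ω(T) expression exactly as written above (three- and four-phonon terms with x = ħω_0/2k_bT and y = ħω_0/3k_bT) to placeholder data; the temperature and frequency values are illustrative, not the measured ones.

```python
# Anharmonic fit of the Raman frequency versus temperature; omega in cm^-1, T in K.
import numpy as np
from scipy.optimize import curve_fit

HC_OVER_KB = 1.4388  # cm*K, converts a wavenumber to a temperature scale (h c / k_B)

def omega_T(T, omega0, A, B):
    x = HC_OVER_KB * omega0 / (2.0 * T)
    y = HC_OVER_KB * omega0 / (3.0 * T)
    three_ph = A * (1.0 + 2.0 / (np.exp(x) - 1.0))
    # four-phonon term written exactly as in the expression above
    four_ph = B * (1.0 + 2.0 / (np.exp(y) - 1.0) + 3.0 / (np.exp(y) - 1.0) ** 2)
    return omega0 + three_ph + four_ph

T_data = np.array([5.0, 50.0, 100.0, 150.0, 200.0, 250.0, 300.0])        # placeholder
w_data = np.array([147.9, 147.8, 147.5, 147.1, 146.7, 146.2, 145.7])     # placeholder
popt, _ = curve_fit(omega_T, T_data, w_data, p0=[148.0, -0.5, -0.01])
omega0_fit, A_fit, B_fit = popt
```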
To further explore the anomalous behavior of the A_g^1 mode, we performed temperature-dependent anti-Stokes Raman measurements, as reported in Figure <ref>(a). We find that the A_g^1 mode exhibits a gradual increase in intensity, reaching its maximum at about 90 K, mirroring the behavior observed in the Stokes measurements. In contrast, the A_g^3 modes showed a consistent reduction in intensity with decreasing temperature, following the typical temperature-dependent behavior and ultimately becoming undetectable at 5 K, as shown in Figure <ref>(b). The anti-Stokes to Stokes scattering ratio is a useful quantity for determining the effective phonon temperature, as discussed in Ref. <cit.>. In the absence of resonant conditions, the function ϕ can give an indication of the phonon temperature ϕ(I_S/I_AS) = -ħΩ/k_b ln(I_S/I_AS), where Ω is the phonon frequency. Figure <ref>(c) presents the values of ϕ(I_S/I_AS) determined for the A_g^1 and A_g^3 phonon modes at different temperatures. ϕ(I_S/I_AS) can be used to extract the temperature of the A_g^3 phonon, as it demonstrates a linear behavior when the sample temperature is reduced. However, for A_g^1, this relationship does not exhibit the expected linear behavior, suggesting a possible resonant excitation effect linked to A_g^1. To clarify the effect of resonance enhancement of the A^1_g mode at 120 K, we measured the temperature-dependent Raman spectra of bulk PdSe_2 employing alternative excitation energy. If the anomalous increase of A^1_g mode is due to resonance with 532 nm excitation, we anticipate that it will not be visible when using other laser lines. Figure <ref>(a) shows the results of Raman measurements detected using 491 nm (2.52 eV) excitation in parallel configuration. As the temperature decreases from 300 K to 5 K, all phonon modes exhibit a blue shift, consistent with the observations from the 2.33 eV excitation laser measurements. However, the Raman spectrum is relatively unchanged, and the subtle change in symmetry observed using 2.33 eV excitation is entirely absent. The absence of phonon-selective resonant enhancement is discernible through the negligible change observed in the intensity ratios between A^1_g to A^3_g and A^2_g to A^3_g as shown in Figure <ref>(b). By altering the excitation energy, we observed that the peculiar temperature-dependent amplitude behaviour was absent. We then investigated whether changing the energy of the electronic states had a similar effect on the anomaly. The thickness of PdSe_2 has a major influence on its electronic band structure and optical resonances. To further explore this effect and understand the dependence of this behaviour on the flake thickness, we studied the temperature-dependent Raman spectra of a mechanically exfoliated thin sample (trilayer). The electronic band gap of PdSe_2 has been reported to increase significantly with the transition from the bulk to the thin-layer regime, causing a substantial modification of the entire electronic landscape <cit.>. We observe significant frequency shifts of several Raman modes, ranging from 5 to 9 cm^-1, as we move from the bulk to the trilayer system, which is in excellent agreement with the other reports <cit.> (see the Supplementary Information). This large blueshift from bulk to trilayer is attributed to the strong interlayer coupling and the change of the in-plane lattice constants. 
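The φ(I_S/I_AS) diagnostic above rests on the Boltzmann relation between anti-Stokes and Stokes intensities. A minimal sketch of the corresponding effective-temperature estimate, neglecting the frequency and resonance prefactors, is given below; the intensity values are placeholders and the sign convention assumes the usual I_S > I_AS.

```python
# Effective phonon temperature from the Stokes / anti-Stokes intensity ratio,
# using I_AS/I_S ~ exp(-hbar*Omega/(k_B*T)) and a phonon frequency in cm^-1.
import numpy as np

HC_OVER_KB = 1.4388  # cm*K

def effective_phonon_temperature(I_stokes, I_antistokes, omega_cm):
    return HC_OVER_KB * omega_cm / np.log(I_stokes / I_antistokes)

# Illustrative values for the A_g^3 mode at 260.6 cm^-1:
T_eff = effective_phonon_temperature(3.5, 1.0, 260.6)   # ~ 300 K
```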
Temperature-dependent measurements revealed that the resonance effect is not as strong in the trilayer system, and the significant increase of A^1_g that was seen in the bulk is completely absent, which is in agreement with our interpretation. Using 2.33 eV excitation, we investigated the power dependent behavior of the A^1_g and A^3_g modes at selected temperatures of 120 K and 80 K aiming to assess the influence of optical excitation on these modes under resonant (120 K) and non-resonant (80 K) conditions. As shown in Figure <ref>(c), we clearly see that the A^3_g mode exhibits a similar power-dependent behavior at both temperatures, i.e. there is no significant resonance effect, which is almost identical to the behavior of A^1_g at 80 K. In contrast, at 120 K, A^1_g exhibits a remarkable sensitivity to increasing laser power, indicating a resonant effect at this temperature. This effect can be attributed to an enhancement of the prefactor in the Raman response associated with resonant optical properties at 120 K. Consequently, all experimental observations consistently point towards a resonant Raman effect. The non-resonant components within the Raman spectrum may neutralize the resonant Raman component, resulting in the suppression of the Raman signal, as explained in <cit.>. This phenomenon accounts for the observed extinction of the in-plane A^2_g and B^2_g modes at 120 K. To gain experimental insights into the resonance phenomena in the Raman response of bulk PdSe_2 and to explore its dependence on temperature, we employed optical spectroscopy within the range of Raman excitation energy. We used broadband transient reflectivity experiments that involved photoexciting the sample with an optical pump pulse centered at approximately 2.30 eV and subsequently measuring the differential reflectivity, Δ R/R, after a 1 picosecond time delay. Figure <ref>(a) shows the 1 ps optical response at room temperature where a negative feature of Δ R/R approximately at 2.28 eV (indicated by arrows) is identified. Upon reducing the temperature to 77 K, we observed a discernible shift of the reflectivity peak towards higher energies (about 2.39 eV), as shown in Figure <ref>(b). Notably, this transition sweeps the 2.33 eV energy (dashed green line, the Raman excitation energy) as the temperature was lowered from 300 K to 77 K. Analogous behavior was also observed in other materials such as WSe_2 and MoSe_2 <cit.>. Figure <ref>(c) illustrates this transition and energy shift, resembling a resonance at 2.33 eV, possibly at 120 K. This observation supports the conjecture of a significant change in Raman modes on the basis of electronic transitions. A recent study has revealed that the transient absorption spectrum of an 8-layer PdSe_2 displays coherent oscillations at a frequency of 143 cm^-1, identified as the only optical phonon in the transient optical response, suggesting the strong interaction between the A^1_g mode and the electronic states <cit.>. Given that the excitonic characteristics of the 2.31 eV optical transition are discussed in <cit.>, it is reasonable to attribute the observed Raman resonance to strong exciton-phonon interactions, similar to those seen in resonant excitation in monolayers of WS_2 and WSe_2 <cit.>. To investigate the electron-phonon interactions, we conducted theoretical calculations based on DFT. We calculated the band structure of bulk PdSe_2 perturbed by small displacements (both positive and negative) of the atoms according to each of the Γ point phonon eigenvectors. 
This highlights in a simple way which optical transitions should give rise to strong phonon scattering via the deformation potential. The results are presented in Figures <ref>(a to g), revealing not only that the electron-phonon coupling is different for the A^1_g and A^3_g modes, but that they also couple to different electronic states. Specifically, the bands located approximately around 2.5 eV above the Fermi level display strong coupling with A^1_g, although the overall electron-phonon coupling is more pronounced for A^3_g (we show the calculated strengths of the electron-phonon coupling for all the Raman modes active in backscattering in the Supplementary Information). Hence, through resonant optical excitation, the phonon behavior of PdSe_2 can be manipulated due to the strong and diverse electron-phonon interactions specific to each phonon mode and electronic state. These results illustrate the significant potential of resonance Raman scattering in examining layered TMDs, particularly in highly anisotropic PdSe_2. § CONCLUSION We examined phonon behavior in bulk PdSe_2 using Raman spectroscopy over a temperature range of 5 to 300 K. We found that there is a temperature-dependent enhancement of specific Raman modes linked to resonance effects with particular optical transitions. For 2.33 eV excitation, we observed an opposite response: a significant increase in the intensity of the out-of-plane A^1_g mode, which peaks at 120 K, while there was a corresponding decrease in the in-plane A^2_g and B^2_g modes, while the out-of-plane A^3_g mode remains almost unchanged. The interpretation of the experimental results is supported by density functional theory (DFT) calculations, which confirm an enhanced electron-phonon coupling strength related to relevant optical transition energies. The anomalous behavior at 120 K can be explained by electron-phonon coupling and resonance effect in PdSe_2. Additionally, we observed nonlinear frequency shifts in all phonon modes, originating from multiple phonon scattering decay channels. This work provides essential insights into the low-temperature electronic and vibrational properties of this anisotropic material, offering valuable information for further applications of the thermal characteristics of PdSe_2 in optoelectronic devices. Our study serves to inspire further experimental and theoretical advancements in understanding the physics of two-dimensional materials. § METHODS §.§ Sample preparation and characterization: The bulk PdSe_2 crystals were purchased from HQ Graphene <cit.>. PdSe_2 thin flakes were mechanically exfoliated from the bulk crystals using Nitto SPV 224R tape and then transferred to a substrate of 280 nm SiO_2/Si substrate. An optical microscope was used to determine and identify the thinner flakes based on their optical contrast. Using a Veeco Dimension 3000 atomic force microscope (AFM) the thickness of the trilayer was determined. §.§ Experimental procedures and measurement technique: The Raman spectra of bulk and trilayer PdSe_2 samples were measured, using a micro-Raman system in backscattering geometry. To prevent any significant local temperature increase in the sample, an appropriate laser power was employed during the experiment. Two lasers with excitation energies of 2.33 eV (532 nm) and 2.52 eV (491 nm) were used. All measurements were performed in a vacuum using an optical coldfinger cryostat (Janis ST500) with a working distance (about 2 mm). 
The laser beam was focused to a spot size of 1 μm onto the sample by a 50× microscope objective lens and a long working distance of 12 mm. The laser power was set to a maximum of 100 μ W. The backscattered light, collected by the same objective lens, was transmitted and directed onto the spectrometer's entrance slit. Following this path, the scattered signal underwent dispersion within the spectrometer and was subsequently detected using a liquid nitrogen-cooled back-illuminated charge-coupled-device (CCD) detector, specifically a low-etaloning PyLoN:1000BReXcelon, featuring a 1340 × 100 pixels CCD. Polarization-resolved Raman measurements incorporated an analyzer into the light path detection to determine the scattered light's polarization direction. Throughout the measurements, the PdSe_2 samples remained fixed on the sample stage, while the polarization directions of the incident and scattered light were manipulated using a halfwave plate and analyser in steps of 15 degrees. The Raman spectra were obtained under both parallel and cross configurations. In the parallel configuration (xx), the scattered light's polarization aligned parallel to the incident light, while in the cross configuration (xy), the polarization was perpendicular. This distinction was achieved using the same analyzer. §.§ Phonon displacements and DFT calculations: Phonon displacements and eigenmodes were computed using VASP <cit.> by evaluating force constants for a 2×2×2 supercell through the frozen phonon approach implemented in phonopy. The structures were relaxed in VASP to achieve forces below 10^-6 eV/Å, with a kinetic energy cutoff of 300 eV and a Monkhorst-Pack k-point density of 8×8×6 for the primitive unit cell <cit.>. In the DFT calculations, we employed the regularized SCAN meta-GGA exchange-correlation functional <cit.>, along with the rVV10 kernel for the inclusion of van der Waals forces between layers <cit.> (more details can be found in the Supplementary Information). § DATA AVAILABILITY The experimental data are accessible through: https://doi.org/10.5281/zenodo.10424154https://doi.org/10.5281/zenodo.10424154 Inputs to the computational codes are available free of charge at https://doi.org/10.15125/BATH-01356https://doi.org/10.15125/BATH-01356 §   § ACKNOWLEDGEMENTS The authors acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG) through project German Research Foundation via project No. 277146847 - CRC 1238: Control and Dynamics of quantum Materials. Computational work was performed on the University of Bath’s High Performance Computing Facility and was supported by the EU Horizon 2020 OCRE/GEANT project “Cloud funding for research” . § AUTHOR CONTRIBUTIONS O.A. conducted the Raman measurements and analyzed the data. The study was supervised by H.H. and P.v.L. D.W. developed the theoretical support in discussions with E.C. and conducted the calculations. C.S. performed the optical spectroscopy experiments. O.A. wrote the manuscript with the support of H.H. The results were discussed and the paper was reviewed by O.A., D.W., C.S., E.C., F.P., H.H., and P.v.L. § COMPETING INTERESTS The authors declare no competing financial or non-financial interests. § ADDITIONAL INFORMATION Supplementary Information The online version contains supplementary material available at https://doi.org/XXX.
http://arxiv.org/abs/2405.09282v1
20240515120530
Three-Dimensional Path Planning: Navigating through Rough Mereology
[ "Aleksandra Szpakowska", "Piotr Artiemjew" ]
cs.RO
[ "cs.RO", "cs.SY", "eess.SY", "68T40, 70E60, 52B05, 37M05, 80M50", "I.6.3" ]
In this paper, we present an innovative technique for the path planning of flying robots in a 3D environment in Rough Mereology terms. The main goal was to construct an algorithm that generates mereological potential fields in 3-dimensional space. To avoid falling into local minima, we assist the search with a weighted Euclidean distance. Moreover, a path search from the start point to the target that avoids the obstacles was applied. The environment was created by connecting two cameras working in real-time. The Python library OpenCV <cit.>, which recognizes shapes and colors, was responsible for determining the gate and the elements of the world inside the map. The main purpose of this paper is to apply the given results to drones. § INTRODUCTION In the field of mobile robotics, tasks such as path planning necessitate a comprehensive representation of the robot's surroundings, including obstacles, landmarks and other robots, coupled with an efficient reasoning framework. Mobile robotics draws upon a wealth of concepts and techniques from Computer Science and Artificial Intelligence, utilizing graph-based methods and algorithms like A* for search or planning efforts, graph mappings for constructing maps, and the use of computer vision for map navigation, among others. In our research, we are continuing to develop the innovative concept of path planning by means of post-map expansion via a mereological potential field algorithm, which was originally introduced in the works of Polkowski and Osmialowski <cit.>. The theoretical foundations were introduced by Polkowski in the work <cit.>. This paper focuses on the application of the main principles of rough mereology <cit.> to the three-dimensional environment. The study extends the work of <cit.>, where approximate fields of mereological potential were employed to explore the three-dimensional map, respecting the distribution of obstacles in the environment. Additionally, an algorithm for searching the created potential fields was implemented, where the selection is based on a weighted distance calculation to the goal, followed by selecting the field with the minimum value of the computed distance. To improve the results, a path optimization function was applied, along with subsequent path smoothing. The environment from which the map was created was developed using <cit.>. The map (gate) was adjusted to real dimensions, actual distances between markers were measured and transferred to the virtual world, and the coordinates of map points were normalized based on real-world distances. The goal of the paper is to demonstrate a ready-to-use tool for application on devices such as drones. §.§ Use of Rough Mereology in the Control Environment of Intelligent Agents This section explores the application of rough mereology for generating potential fields. The application of rough mereology introduces the notion of rough inclusion, symbolized as μ (x,y,r). This concept asserts that x is partially included in y to a degree of at least r. Focusing on spatial entities, rough inclusion is defined as μ (X,Y,r) if and only if |X ∩ Y|/|X|≥ r, where X and Y denote n-dimensional solids, and |X| represents the n-volume of X. This study explores the case of an autonomous mobile robot moving within a three-dimensional space. Here, the spatial entities X,Y are considered as conceptual regions, with |X| indicating the area of X. 
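To make the rough inclusion concrete, the following sketch (our own illustration, not the authors' implementation) evaluates μ for regions discretised as boolean occupancy grids; the largest degree r for which μ(X,Y,r) holds is simply |X ∩ Y|/|X|.

```python
# Rough inclusion on voxelised regions: mu(X, Y, r) holds iff |X ∩ Y|/|X| >= r.
import numpy as np

def inclusion_degree(X: np.ndarray, Y: np.ndarray) -> float:
    """Largest r such that mu(X, Y, r) holds, i.e. |X ∩ Y| / |X|."""
    return np.logical_and(X, Y).sum() / X.sum()

def rough_inclusion(X: np.ndarray, Y: np.ndarray, r: float) -> bool:
    return inclusion_degree(X, Y) >= r

# Toy example: two overlapping 4x4x4 voxel blocks inside a 10x10x10 grid
X = np.zeros((10, 10, 10), dtype=bool); X[2:6, 2:6, 2:6] = True
Y = np.zeros((10, 10, 10), dtype=bool); Y[4:8, 4:8, 4:8] = True
print(inclusion_degree(X, Y))        # 8 / 64 = 0.125
print(rough_inclusion(X, Y, 0.1))    # True
```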
The rough inclusion μ (X,Y,r) plays a role in forming the rough mereological potential field. The constituents of this field are square in shape, and their relative proximity (distance in our rough mereological universe) is quantified as: K(X,Y)=min{max_rμ(X,Y,r),max_s μ(Y,X,s)}. An elaborate discussion on the construction of this field is found in Sect. <ref>. The immediate importance of using rough mereology in creating mereological potential fields is to provide a robust framework for representing, analyzing, and navigating spatial environments, especially in scenarios characterized by uncertainty and complexity. Due to the limited size of this paper, for a detailed study of the properties that provide the theoretical foundation for our work, we refer the reader to Section 2.3 of paper <cit.>. The subsequent document sections are organized as follows. Section <ref> introduces the methodology for route planning that employs a rough mereological potential field. In Section <ref>, we detail the experimental setup used. Conclusively, in Section <ref>, we offer a summary of our study. § METHODOLOGY In this section, we will focus on the different techniques used to build a target robot guidance system using rough mereological potential fields in a 3D environment. In the following subsections, we will explain the working principle of the algorithm responsible for generating the mereological potential field. Importantly, the initialization value is the target coordinate. Such a declaration contributes to a large accumulation of potential fields relative to the target. In the first iterations of the algorithm, the distance is the smallest (its value increases with subsequent iterations of the algorithm), which results in the accumulation of potential fields around the value that initializes the algorithm. The idea is that the generated potential fields attract our starting point to the goal. §.§ Adapted Square fill algorithm in 3D environment In this chapter, we are going to show our conception of a 3-dimensional potential field algorithm introduced by Ośmiałowski <cit.>. The mentioned Squared Fill algorithm method was already modified and later presented in Polkowski <cit.>, Żmudziński and Artiemjew <cit.> and Szpakowska and Artiemjew <cit.>. Below we present our approach to generating potential fields in the 3-dimensional environment- Fig. <ref>, Fig.<ref>. The idea of an algorithm is to generate the neighbors around the main point (goal). In the 2D concept, there were eight crucial neighbors, which determined the potential field position in each iteration. The 3D case became more complex. Adding the third dimension 'z' forced us to implement more neighbors to be considered in the potential field generating. We have to consider the constant value 'z', increasing 'z' and decreasing 'z'. The mentioned operation gave us 24 neighbors, instead of 8 (N, W, S, E, NW, NE, SW, SE) which were in the 2-dimensional space. The number of neighbors depends on directions. Below we can see the main assumptions of the structure of the algorithm and also the result that we obtained - Figs. <ref>, <ref> and <ref>. 
* Declare the basic values: - Set the current distance to the goal: d = 0, - Put the direction value on standby: clockwise = True * Prepare an empty queue Q: Q = ∅ * Add into the created queue Q the first potential field p(x,y,z,d), where x,y,z express the location coordinates of the already created field and d reflects the current distance to the goal: Q ← Q ∪{p(x,y,z,d)} * Iterate through Q, * Determine neighbors created with respect to the current direction: if clockwise is true: N = {[ p_0 = p(x-d,y,z,d),; p_1 = p(x-d,y+d,z,d),; p_2 = p(x,y+d,z,d),; p_3 = p(x+d,y+d,z,d),; p_4 = p(x+d,y,z,d),; p_5 = p(x+d,y-d,z,d),; p_6 = p(x,y-d,z,d),; p_7 = p(x-d,y-d,z,d); p_8 = p(x-d,y,z-d,d),; p_9 = p(x-d,y+d,z-d,d),; p_10 = p(x,y+d,z-d,d),; p_11 = p(x+d,y+d,z-d,d),; p_12 = p(x+d,y,z-d,d),; p_13 = p(x+d,y-d,z-d,d),; p_14 = p(x,y-d,z-d,d),; p_15 = p(x-d,y-d,z-d,d); p_16 = p(x-d,y,z+d,d),; p_17 = p(x-d,y+d,z+d,d),; p_18 = p(x,y+d,z+d,d),; p_19 = p(x+d,y+d,z+d,d),; p_20 = p(x+d,y,z+d,d),; p_21 = p(x+d,y-d,z+d,d),; p_22 = p(x,y-d,z+d,d),; p_23 = p(x-d,y-d,z+d,d); ]} if anticlockwise is true: N' = {[ p_0 = p(x-d,y-d,z,d),; p_1 = p(x,y-d,z,d),; p_2 = p(x+d,y-d,z,d),; p_3 = p(x+d,y,z,d),; p_4 = p(x+d,y+d,z,d),; p_5 = p(x,y+d,z,d),; p_6 = p(x-d,y+d,z,d),; p_7 = p(x-d,y,z,d); p_8 = p(x-d,y-d,z-d,d),; p_9 = p(x,y-d,z-d,d),; p_10 = p(x+d,y-d,z-d,d),; p_11 = p(x+d,y,z-d,d),; p_12 = p(x+d,y+d,z-d,d),; p_13 = p(x,y+d,z-d,d),; p_14 = p(x-d,y+d,z-d,d),; p_15 = p(x-d,y,z-d,d); p_16 = p(x-d,y-d,z+d,d),; p_17 = p(x,y-d,z+d,d),; p_18 = p(x+d,y-d,z+d,d),; p_19 = p(x+d,y,z+d,d),; p_20 = p(x+d,y+d,z+d,d),; p_21 = p(x,y+d,z+d,d),; p_22 = p(x-d,y+d,z+d,d),; p_23 = p(x-d,y,z+d,d); ]} * Count the Euclidean distance from the neighbors created in the previous iteration to the already generated potential fields, to avoid superfluous fields, * If the Euclidean distance of the current potential field p_k(x,y,z,d) is less than 15, or p_k(x,y,z,d) ∈ O, where O is the collection of obstacle coordinates, or p_k(x,y,z,d) ∈ F, where F comprises the potential fields created so far, then the current field is to be abandoned. In such a scenario, the system is required to backtrack to point 4, * Check whether an identical potential field already exists in Q; if it does, drop the current neighbor and go back to point 4, * After the end of filtration, add the accepted potential field p_k(x,y,z,d) to the end of the list Q, * Expand the distance value to the goal: d = d(p_k)+0.5 * Change the direction to the opposite one: if clockwise = True: change direction into anticlockwise = True, clockwise = False; if anticlockwise = True: change direction into clockwise = True, anticlockwise = False * Drop the current neighbour p_k(x,y,z,d) from the dynamic queue Q and attach it to the potential fields list F. As described above, the distance is initialized at 0. This value increases in subsequent iterations because our algorithm generates potential fields starting from the goal towards the starting point. Created potential fields are represented by p(x,y,z,d), where x,y,z are the corresponding coordinates and d describes the value of the distance, which during the operation is responsible for generating new neighbors. The bigger the distance is, the further from the goal the potential field has been created. Moreover, in each iteration, the direction of generating neighbors has to be changed so as not to get stuck and to explore the whole map. Below and in the next sections we are going to show the visualization of the results. 
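Before turning to the visualisation, the neighbour-generation step described above can be reconstructed compactly as follows; this is our own sketch, with invented function and variable names, and is not the authors' released code.

```python
# One expansion step of the 3D square-fill algorithm: a field p(x, y, z, d)
# spawns 24 candidate neighbours, i.e. the 8 planar offsets taken for z
# constant, z - d and z + d, ordered clockwise or anticlockwise.
def neighbours(x, y, z, d, clockwise=True):
    planar_cw = [(-d, 0), (-d, d), (0, d), (d, d), (d, 0), (d, -d), (0, -d), (-d, -d)]
    planar = planar_cw if clockwise else list(reversed(planar_cw))
    out = []
    for dz in (0, -d, d):            # same level, one level down, one level up
        for dx, dy in planar:
            out.append((x + dx, y + dy, z + dz, d))
    return out

# Illustrative call using the 0.5 increment by which d grows in the algorithm
candidates = neighbours(0.0, 0.0, 0.0, 0.5)
assert len(candidates) == 24
```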
The term 'bigger cube' refers to the cube that spans almost the entire plot, while the 'smaller cube' is the figure inside the 3D plot and hence inside the 'bigger cube'. §.§ Path finding To generate the path, we use a variation of the Path Search Algorithm proposed by Osmialowski <cit.> and later modified by Szpakowska and Artiemjew <cit.>. Within the given potential field the algorithm proceeds as follows (a simplified code sketch of this search is given below): * Start iterating over the list of fields produced by the Adapted Square fill algorithm <ref>. * Take the first potential field from the list (the one generated last); this is the actual field. * Compute the Weighted Euclidean Distance between the actual field, the next potential field in the list (the next field), and the goal. * Select the next field with the minimum computed distance; this field becomes the actual field in the next iteration. Before accepting the distance and the field, it is necessary to check that the connection between the actual field and the chosen field is feasible, i.e. that the segment between them does not cross any obstacle. * When the loop over candidates finishes for the current potential field, the actual field is replaced by the chosen field with the smallest Weighted Euclidean Distance. * The operation is repeated for the new actual field. The algorithm stops when the actual field is close or equal to the goal point. §.§.§ Distance counting - weighted Euclidean distance To determine the distance between two potential fields we use the classical Euclidean distance and a weighted variation of it: d(p,q) = √(∑_i=1^n (p_i - q_i)^2) Here p denotes the coordinates of a potential field in three-dimensional Euclidean space and q the coordinates of the goal. To improve path-finding accuracy we employ two distinct distances. The first, referred to as the classical Euclidean distance in this study, measures the distance between the current potential fields and is used for the optimization step. The second, the Weighted Euclidean distance, is used for finding the first, rough path. It accounts for both the distance between two potential fields and the distance from the candidate field to the goal, and both terms are multiplied by a floating-point weight. This weighting mitigates the risk of selecting a sub-optimal path, such as a direct leap towards the target without due consideration of obstacles. §.§ Field filtering - path optimizing The path obtained from the Path Search Algorithm did not meet our expectations regarding optimality and clarity. To reduce the noise within the path, we implemented a path optimization filter that considers the distances between the points of the generated path and the goal. The primary condition is the following: starting from the initial element of the path, points whose distance to the target is identical to or greater than that of the preceding point are excluded.
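The greedy search and the weighted distance described above can be condensed into the following sketch. The weight value, the `visible` obstacle test and the rule for picking the initial field are our own assumptions, not values taken from the paper.

```python
from math import dist

def weighted_distance(p, q, goal, w=1.5):
    """Weighted Euclidean distance: both the step p -> q and the remaining
    distance from q to the goal are scaled by the weight w."""
    return w * dist(p, q) + w * dist(q, goal)

def path_search(fields, start, goal, visible, tol=1.0, w=1.5):
    """Walk greedily over the generated potential fields towards the goal.
    `fields` are (x, y, z, d) tuples from the square-fill step and
    `visible(a, b)` must report whether the segment a-b avoids all obstacles."""
    points = [f[:3] for f in fields]
    current = min(points, key=lambda p: dist(p, start))   # field closest to the start
    path = [current]
    while dist(current, goal) > tol:
        candidates = [p for p in points
                      if p not in path and visible(current, p)]
        if not candidates:
            break                                          # no reachable field left
        current = min(candidates,
                      key=lambda p: weighted_distance(current, p, goal, w))
        path.append(current)
    return path
```

The filter described here and the smoothing step of the next subsection then operate on the returned `path`.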
Where multiple points share the same distance to the target, a secondary step is introduced: we compute the distances from the neighboring points to the target and compare them, retaining the points with the smallest distance values. This yields a refined and more optimal path. §.§ Path smoothing After visualizing the optimal path from the robot's starting location to the target, we apply the path smoothing algorithm introduced by Zmudzinski and Artiemjew in 2017 <cit.>. The algorithm is applied iteratively, n times, until the obtained result and the shape of the path are satisfactory. Writing (x_k, y_k, z_k) for the original path points and (x'_k, y'_k, z'_k) for the smoothed copy, initialized with the original path: * We first pull each point towards its neighbours using the variable α, which dictates how strongly a point deviates from its original position; the adjustment considers both the preceding point and the subsequent point: x'_k = x'_k + α(x'_k-1 + x'_k+1 - 2x'_k), y'_k = y'_k + α(y'_k-1 + y'_k+1 - 2y'_k), z'_k = z'_k + α(z'_k-1 + z'_k+1 - 2z'_k). * Subsequently, we introduce a balancing step governed by the variable β, which updates the smoothed point (the new position of the point) by pulling it back towards the original position, thereby mitigating the tendency of the path to collapse into a straight line: x'_k = x'_k + β(x_k - x'_k), y'_k = y'_k + β(y_k - y'_k), z'_k = z'_k + β(z_k - z'_k). The results are shown in figure <ref>. The comparison of all paths shows that unnecessary points were eliminated, and as a result the path became less ambiguous. Applying the path smoothing algorithm thus yields a better trajectory for the future robot movement. § EXPERIMENTAL SECTION This section concentrates on the underlying technical components. We first outline the programming language and the libraries responsible for the various facets of the project, and then describe the preparation of the environment and the conditions that need to be established. It is worth mentioning that the boundaries of the cage and the positions of obstacles, start, and goal are flexible. Whether the created virtual world returns correct information about the location of given points depends largely on correctly declaring the real distance between the frame boundaries (frame height and width). The declared points are localized in real time, so it is possible to follow the actual robot position and the location of obstacles. Consequently, different maps can be created, with other numbers and combinations of obstacles, goal, and start (actual) point. §.§ Technical aspects The entire project was developed using Python <cit.>. The key libraries for this paper are Matplotlib and OpenCV. Specifically, the Poly3DCollection package <cit.> was used for 3-dimensional visualization, and the OpenCV library <cit.> played a pivotal role in creating the project environment. §.§ Environment preparation The environment was created using a Python library <cit.>. Two cameras were hung in space, one on top of a gate and another to the side. Two types of color markers were placed to determine the borders of the gate. The distances between the markers were measured, so all coordinates of points inside the gate could be read with respect to the created world dimensions. The initial phase of the preparation involved coding an algorithm for color recognition.
Subsequently, color markers were strategically positioned within the laboratory space, and their sequence determined the order for color reading. This procedural step enabled the program to identify the recognized wall, with each camera assigned to recognize a specific color sequence. Following this, the read values were converted into real-world measurements by assessing the distance between designated markers, which were then transposed into the virtual world. This operation facilitated the extraction of real-world coordinates for various points within the environment, including the goal point, starting point, and obstacle points. The visualization of the prepared environment is depicted in figure <ref>. Within the virtual gateway, only the goal point was positioned—a single colored cube suspended in space. This gateway delineates the designated space for drone movement. The complete set of codes for this project is accessible on <cit.>. To replicate the project, access to two cameras functioning in the real world is essential, along with the installation of the required libraries. Additionally, recreating the gate necessitates arranging colored markers in the specified configuration. § CONCLUSIONS In this investigation, we effectively expanded and implemented a path planning algorithm within a three-dimensional environment, employing a rough mereological potential field. Furthermore, we conducted path optimization and subsequent smoothing, focusing on three dimensions. Real-time visualization reflecting the actual environmental conditions was achieved through the utilization of images from two cameras. The objective of the study was successfully realized, establishing the conditions and framework for the optimal navigation of a moving machine in three-dimensional real-time, particularly on a map with obstacles. This accomplishment represents a significant stride toward the practical application of the rough mereological potential field in drone-related scenarios. § ACKNOWLEDGEMENTS This work has been supported by a grant from the Ministry of Science and Higher Education of the Republic of Poland under project number 23.610.007-110 99 OpenCV OpenCV, <https://opencv.org/> polkowski2008 Polkowski, L., Osmialowski, P.: A Framework for Multiagent Mobile Robotics: Spatial Reasoning Based on Rough Mereology in Player/Stage System., pages: 142-149 (2008) Osmialowski2008 Osmialowski, P. (2013). On path planning for mobile robots: introducing the mereological potential field method in the framework of mereological spatial reasoning. Journal of Automation, Mobile Robotics and Intelligent Systems, 3(2), 24-33. Polkowski 1996 Polkowski, L.: Rough Mereology: A new paradigm for approximate reasoning, In: International Journal of Approximate Reasoning, (1996) SzpakowskaArtiemjew 2023 Szpakowska, A., Artiemjew, P., Cybowski, W. Navigational Strategies for Mobile Robots Using Rough Mereological Potential Fields and Weighted Distance to Goal. Rough Sets. IJCRS 2023., Springer, vol 14481, pages 549-564, (2023) Osmialowski 2011 Osmialowski, P.: (2022). Planning and Navigation for Mobile Autonomous Robots Spatial Reasoning in Player/Stage System (2011) Polkowski 2018 Polkowski, L., Zmudzinski, L., Artiemjew, P.: Robot navigation and path planning by means of rough mereology, Proceeding of the IEEE International Conference on Robotic Computing, pages 363-368 (2018) Zmudzinski Artiemjew 2017 Zmudzinski L., Artiemjew, P.: Path planning based on potential fields from rough mereology, Rough Sets. 
International Joint Conference, IJCRS 2017, Olsztyn, Poland, pages 158-168 (2017) Python <https://www.python.org/> Matplotlib <https://matplotlib.org/3.1.1/api/_as_gen/mpl_toolkits.mplot3d.art3d.Poly3DCollection.> github:project Github Szpakowska, <https://github.com/aleksandraszpakowska/3D_Rough_Mereology>
http://arxiv.org/abs/2405.09001v1
20240514235801
BEVRender: Vision-based Cross-view Vehicle Registration in Off-road GNSS-denied Environment
[ "Lihong Jin", "Wei Dong", "Michael Kaess" ]
cs.RO
[ "cs.RO", "I.2.9" ]
Cons-training tensor networks Jing Chen May 20, 2024 ============================= empty empty We introduce BEVRender, a novel learning-based approach for the localization of ground vehicles in Global Navigation Satellite System (GNSS)-denied off-road scenarios. These environments are typically challenging for conventional vision-based state estimation due to the lack of distinct visual landmarks and the instability of vehicle poses. To address this, BEVRender generates high-quality local bird's eye view (BEV) images of the local terrain. Subsequently, these images are aligned with a geo-referenced aerial map via template-matching to achieve accurate cross-view registration. Our approach overcomes the inherent limitations of visual inertial odometry systems and the substantial storage requirements of image-retrieval localization strategies, which are susceptible to drift and scalability issues, respectively. Extensive experimentation validates BEVRender's advancement over existing GNSS-denied visual localization methods, demonstrating notable enhancements in both localization accuracy and update frequency. The code for BEVRender will be made available soon. § INTRODUCTION Global localization is a crucial component that supports smooth navigation of autonomous vehicles. It is typical to equip on-board localization systems with the Global Navigation Satellite System (GNSS) modules for consistent and reliable global poses. However, in reality, GNSS signals can be blocked due to natural or artificial barriers, causing temporal system failures, where vision-based localization (VBL) serves as an alternative in GNSS-denied localization. A variety of methods have been proposed for VBL in urban scenarios <cit.>, yet off-road VBL for unmanned ground vehicle (UGV) is still challenging due to non-urban environments lacking stable and distinct visual features, such as roads and buildings. The varied and unpredictable terrain further complicates the task by inducing unstable vehicle poses, making it difficult to maintain consistent feature matching across frames. In response to these challenges, our paper presents a novel learning-based method that synthesizes a local bird's eye view (BEV) image of the surrounding area by aggregating visual features from camera images. This approach integrates a modified BEVFormer <cit.> framework with a novel rendering head, employing template matching for precise cross-view registration between ground vehicles and aerial maps in GNSS-denied off-road environments. We concentrate on 2D relocalization of unmanned ground vehicles (UGV) for non-urban settings bounded within defined areas. Equipped with trinocular RGB cameras and an Inertial Measurement Unit (IMU), the vehicle employs multi-view visual inertial odometry (VIO) for state estimation. Our aim is to achieve accurate 2D positioning relative to a geo-referenced aerial map, facilitating pose correction in the absence of GNSS signals, whether temporarily or persistently. A more detailed problem definition is in Sec. <ref>. Previous study <cit.> has explored the creation of orthographic view images by accumulating geometric features over consecutive frames, coupled with Normalized Cross-correlation (NCC) for relocalization in a GPS-denied situation. However, this approach is limited by the inherent drift of VIO systems, which can distort the accumulated geometric data, leading to inaccuracies in ground-to-air matching. 
Our paper introduces a learning-based strategy for generating BEV images, using a Vision Transformer (ViT) <cit.>-based network for feature encoding. This method shows improved performance in generating local BEV images and supporting vehicle localization with geo-referenced aerial maps. Other research efforts <cit.> treat vision-based localization as an image retrieval problem, requiring substantial storage for on-board localization systems. On the contrary, our approach generates local BEV images for direct template matching. This significantly reduces the need for extensive data storage, relying instead on a geo-referenced map for real-time 2D localization. In summary, our contributions are threefold: * We propose a novel learning-based framework for ground vehicle localization that combines BEV image generation with classical template matching, eliminating the extensive dataset storage requirements of image-retrieval-based localization. * We integrate the deformable attention module in <cit.> with the BEVFormer network, enhancing feature encoding by using offset networks <cit.>, followed by an efficient image rendering head as a feature decoder capable of producing detailed top-down views of the local terrain. * Through comprehensive experiments with real-world datasets, we demonstrate that our method exhibits superior localization accuracy and frequency compared to existing GNSS-denied visual localization techniques, and generalizes to unseen trajectories. § RELATED WORK §.§ GNSS-denied Vehicle Localization Vehicle localization in GNSS-denied environments can be broadly categorized into relative and absolute localization strategies. Relative localization aims to mitigate odometry drifts by fusing data from multiple onboard sensors with motion models, or by leveraging loop closures to correct drift relative to global frames <cit.>. Absolute localization, in contrast, involves constructing local maps from the vehicle's perspective and aligning them with a global georeferenced map to determine precise vehicle positions. Reference data for this process can vary, including High-definition (HD) maps <cit.>, aerial satellite imagery, Digital Elevation Models (DEM) <cit.>, and OpenStreetMap (OSM) data <cit.>. While HD maps offer high accuracy, they are costly and data intensive. DEMs, primarily used for UAVs <cit.>, cater to non-planar terrains and scale ambiguity, whereas OSM provides dense semantic and geometric details suitable for urban navigation. Aerial satellite maps present strong visual cues with detailed information for off-road localization. Significant advancements have been made in aligning ground-level images with aerial imagery for localization. Viswanathan et al. <cit.> demonstrate effective ground-to-air image matching using satellite images by warping UGV panoramic images to a bird's eye view, comparing feature descriptors, and employing a particle filter for accurate localization. Based on this, recent work <cit.> focuses on generating an orthographic occupancy map by accumulation of local features and estimation of pose through NCC, and optimizing the prediction of global pose through a registration graph <cit.>. In contrast, our approach adopts a Vision Transformer (ViT)-based <cit.> learning network to generate BEV images for ground-to-air matching, emphasizing frame-by-frame registration accuracy and reducing reliance on global trajectory optimization. 
§.§ Learning Vision-based Localization The evolution of vision-based localization has seen it conceptualized as an image retrieval task <cit.>, employing contrastive learning to enhance the matching of onboard camera and satellite images <cit.>. Efforts to improve image alignment include warping satellite imagery by polar transformation to match ground perspectives <cit.>, and constructing semantic neural maps from camera images <cit.>. Further innovations leverage CNNs for feature extraction and BEV representation, enabling precise localization through 3D structure inference and matching <cit.>. The advent of foundation models offers promising directions for Visual Place Recognition (VPR), demonstrating the adaptability of pre-trained models (e.g., DINO <cit.>, DINOv2 <cit.>) to diverse environments without fine-tuning <cit.>. Subsequent work <cit.> integrates dense visual feature extraction with advanced filtering and global-local pose estimation via Extended Kalman Filters (EKF) for refined localization accuracy. Our methodology aligns with these advancements, utilizing a streamlined ViT architecture for efficient and accurate BEV image rendering and localization, minimizing parameter overhead while maximizing performance. In the realm of self-driving applications, BEV representations <cit.> have been enriched by encoding temporal and spatial features, as demonstrated by BEVFormer <cit.>, which leverages attention mechanisms <cit.> for 3D object detection. Our work extends this concept by incorporating BEVFormer's feature propagation approach, ensuring our BEV representations integrate temporal information from successive frames. This strategy is complemented by recent explorations in temporal information encoding for BEV representation, highlighting the continuous evolution and application of these techniques in autonomous navigation <cit.>. § METHODOLOGY Our system contains three main components: a feature encoder mapping visual representation from camera to top-down view, a rendering head to decode learned features and render top-down BEV images, and a image registration component for localization. An overview of our system is shown in Fig. <ref>. We consider a scenario where a vehicle, equipped with trinocular cameras and an IMU, is traversing flat natural terrain. A pre-stored aerial map of the area aids in localization. The vehicle's pose is predicted by the VIO system in a local coordinate frame as follows: 𝒳_t = [x_t, y_t, θ_t] ∈𝐒𝐄(2). We assume the prediction for azimuth angle θ_t from VIO is accurate but the position estimates (x_t and y_t) may drift over time. Our system aggregates consecutive frames to construct a top-down representation of the environment for map registration. We define a 3D BEV space centered on the vehicle with a length of L, a width of W, and a height of H. The space is divided into l× w× h grid cells, so that each cell represents a cubic size of L/l×W/w×H/h in the real world. The BEV query is a 3D trainable embedding with a dimension of l× w× h representing the BEV space, and serving as the query for deformable attention modules in the encoder. All intermediate BEV features in the network also follow the same spatial dimension. The specific range and dimension chosen for our experiment are described in Sec. <ref>. 
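As a concrete illustration of this bookkeeping, the numbers reported later in the experiment section (0.229 m/pixel aerial resolution, a 112-pixel BEV window and a 28×28×5 grid) translate into metric cell sizes as follows; the variable names are ours.

```python
RES = 0.229                      # aerial map resolution [m / pixel]
L = W = 112 * RES                # BEV side length [m]  -> 25.648
H = 2.0                          # BEV height [m]
l, w, h = 28, 28, 5              # number of grid cells per axis
cell = (L / l, W / w, H / h)     # -> (0.916, 0.916, 0.4) m per voxel
print(cell)
```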
Our system seeks to find the optimal pose prediction that minimizes the difference between camera feature representation and local aerial image: 𝒳^* = _𝒳ψ(I^'_bev(𝒳), I_map(𝒳)), where ψ is a function to find 𝒳^* to achieve minimum distance between two representations, and provided by template matching in our system. I_map is the subset of aerial map with respect to vehicle pose, and I^'_feat is the rendered BEV image given learned feature: I^'_bev(𝒳) = φ_render(F_feat(𝒳)), where φ_render is a mapping from the encoded feature F_feat to top-down BEV image, given by the rendering head. §.§ Feature Encoding with BEVFormer Adopting BEVFormer's framework <cit.>, we propagate consecutive frame features to capture temporal information. Within a temporal window of T seconds, n frames (3× n images in a trinocular setup) are sampled. A detailed setting can be found in Sec. <ref>. Starting with the earliest frames, camera images I^cam_t are processed through patch projection, which is a convolutional layer in our implementation, to obtain camera feature F^cam_t and sent to the encoder together with the BEV query Q and previous BEV feature B_t-1 to obtain the encoded BEV feature for current timestamp B_t. The encoding process consists of two stages: a temporal attention stage that takes in query Q and previous timestamp BEV feature B_t-1 for deformable attention: B^temp_t = DeformableAttn(B_t-1, Q), followed by a spatial attention stage that takes in temporal output and camera feature F_t for deformable attention: B^spatial_t = DeformableAttn(F^cam_t,B^temp_t), B_t is then projected to the location of the subsequent frame as B^'_t according to the movement of the vehicle given by the GPS information, using affine transformations in 𝐒𝐄(2) and bilinear interpolation: Δ𝒳 = [Δ x, Δ y, Δθ] = 𝒳_t - 𝒳_t-1, [ x_t y_t 1 ]=[cosΔθ -sinΔθ Δ x sinΔθ cosΔθ Δ y 0 0 1 ][ x_t-1 y_t-1 1 ], B^'_t(x_t, y_t) = BilinearInterp (B_t(x_t,y_t)). Subsequently, B^'_t serves as a query to the encoder together with the next camera feature F_t+1 to obtain B_t+1. Propagation continues in the temporal window until we obtain the latest timestamp feature B^T. A diagram of temporal propagation is shown in Fig. <ref>. It should be noted that B_t-1 is the same as query Q for the first frame in the temporal window: B_t-1=Q if t=0. Unlike BEVFormer, our encoder simplifies to a single layer, totaling 1.44 million parameters, while supporting effective feature learning for downstream localization tasks. The architecture of the encoding layer is shown in Fig. <ref>, and the ablation study of the number of layers can be found in Table <ref>. §.§ Deformable Attention Vision Transformer In contrast to BEVFormer that employs Deformable DETR <cit.>, our approach utilizes the deformable attention  <cit.>, which uses offset networks to calculate adjustments to each reference point. The offsets are processed by an additional convolution layer θ_offset, as shown in Fig. <ref>, and its output modifies the original reference point to generate deformed reference points. For spatial attention, offsets θ^i_offset are added to the reference points unique to each camera view i, acting as adjustments to the pixel locations of reference points. Consequently, we employ three distinct convolution layers dedicated to learning offsets as an adaptation to the trinocular system setting. 
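To make the temporal alignment step of the previous subsection concrete, the following numpy sketch warps a BEV feature map from the previous vehicle pose into the current one by inverting the affine transform and sampling bilinearly. The grid-axis convention, the metric cell size and the zero padding outside the map are our assumptions; the actual model performs this projection on learned features inside the network.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_bev(B, dx, dy, dtheta, cell=0.916):
    """Warp a BEV feature map B of shape (C, l, w) from the previous vehicle
    pose into the current one: for every current grid cell we look up, with
    bilinear interpolation, where it was located in the previous frame."""
    C, l, w = B.shape
    ci, cj = (l - 1) / 2.0, (w - 1) / 2.0
    ii, jj = np.meshgrid(np.arange(l), np.arange(w), indexing="ij")
    X, Y = (ii - ci) * cell, (jj - cj) * cell      # metric cell centres, vehicle frame
    c, s = np.cos(dtheta), np.sin(dtheta)
    # invert the forward transform: previous-frame coordinates of each current cell centre
    Xs = c * (X - dx) + s * (Y - dy)
    Ys = -s * (X - dx) + c * (Y - dy)
    coords = np.stack([Xs / cell + ci, Ys / cell + cj])
    return np.stack([map_coordinates(B[k], coords, order=1, mode="constant", cval=0.0)
                     for k in range(C)])
```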
The final output of the spatial attention layer is a stacking of features from three camera views, undergoing another convolutional layer to maintain the same spatial dimension as the BEV query and BEV features. The output of deformable attention heads is formulated as z^(m) = σ( q^(m)k^(m)^⊤/√(d) +ϕ(B;R) ) v^(m), where q, k, v constructs the standard transformer attention <cit.> with softmax activation σ and scale normalization √(d), enhanced by relative positional bias <cit.> in ϕ(B;R). A more detailed description of deformable attention formulation can be found in <cit.>. §.§ BEV Image Rendering Head The BEV image rendering head is designed to translate encoded features into interpretable top-down views of the vehicle's surroundings. It is a straightforward convolutional neural network (CNN) architecture that takes as input the encoded BEV features with dimensions of d× l× w, where d is the model embedding dimension. Through a series of convolutional and upsampling layers, the BEV features are processed to generate a colored image of certain size, which serves as a top-down visual representation of the BEV space around the vehicle. The rendering head ensures that the resulting BEV image retains critical spatial information required for ground-to-aerial vehicle localization in GNSS-denied environments. The detailed structure of the rendering head is illustrated in Table <ref>. § EXPERIMENTS §.§ Experiment Setting Since the satellite image[The satellite image used in this paper is provided by https://www.nearmap.com/us/enNearmap.] has a resolution of 0.229 meters per pixel, we define the length and width of the BEV space as 25.648 meters centered on vehicle position, equivalent to a size of 112×112 pixels on the aerial map. We also define the height of the BEV space as 2 meters. The space is divided into 28×28×5 3D grid cells, so that each cell represents a voxel of 0.916×0.916×0.4 m^3 in the real world. We utilize a temporal window of 5 seconds and randomly sample 5 frames in the window to compose a training sample. We conduct two main experiments, one to compare against state-of-the-art VBL methods in GNSS-denied setting <cit.>, where we use 4 sequences and split them into 80% training, 20% testing data; and another to show our model's ability to generalize across different scenes given limited training data, where we use 2 sequences for training and 4 sequences for testing. The trajectory plots for sequences used in the cross-sequence testing experiment are shown in Fig. <ref>. Training is distributed on 8 NVIDIA A100 GPUs for a total of 2500 epochs and with a learning rate of 4e^-5. The configuration of the testing computer is described in Sec. <ref> in system runtime. During the testing phase, we crop and rotate the aerial map based on the GPS ground truth position as the center of the image with a size of 874×874 pixels, which corresponds to a real-world coverage of approximately 200×200 square meters. This search region is sufficient to accommodate VIO drift for more than 10 minutes without a registration. For cross-sequence testing, loosen the assumption of drifting range and use a search region of 100×100 square meters. Our camera system captures 3 frames per second and predicts registration consistent with camera frame; therefore, sufficient for preventing from failing with the 100×100 square meter search range. We use NCC for template matching. 
NCC identifies the best match within the search area, maximizing similarity between the generated BEV image and the aerial map, thus predicting the vehicle's position relative to the aerial map. We observe failure cases when rendered BEV images are of moderate visual quality, whereas NCC fails in prediction. An example of failure cases is shown in Fig. <ref>. §.§ Dataset Organization We collect our real-world data set in the Pittsburgh area, with a VIO system on board. Detailed information on the sequences can be found in Table <ref>. For each training sample, we use the information of timestamp, trinocular RGB images, and GPS ground truth including x, y, and azimuth angle in the UTM coordinate system for training. The pre-processing process for cropping the aerial map can be found in Fig. <ref>. §.§ Quantitative Comparison We compare our method with GPS denied registration via occupancy mapping proposed in <cit.>, and GeoDTR proposed in <cit.>. The comparison result is shown in Table <ref>. Since GeoDTR is an image-retrieval-based method and relies on cultivating the corresponding information between camera inputs and polar transformation of aerial map images, it is required to preserve a database of candidate polar transformed images for real-world vehicle localization. We randomly sample 5000 particles within the search region at each timestamp and apply polar transformation according to the particle location on the map together with the azimuth angle of GPS ground truth. After obtaining the candidate polar images, for each timestamp, we pass in the camera images and polar images to the model, and calculate the distance between camera descriptors and polar descriptors, we choose candidate with closest descriptor distance as the top 1 prediction, and its corresponding real-world location as top 1 location, and we average the top 5 predicted locations as top 5 prediction. Registration accuracy To evaluate the accuracy of vehicle registration, we calculate the mean and standard deviation (STD) of absolute position error (APE) between predicted position and the ground truth vehicle location given by on-board GPS. Registration frequency In the real-world localization scenario, the update frequency is another important factor that determines the stability of registration system. We report the matching frequency by counting the total successful matches when the APE is within a threshold of 10 meters (the range we deem tolerable for our VIO system) and calculate the match rate as total successful matches divided by total camera frames for a sequence: 𝐩_i = (x_i, y_i), 𝐝^i_Euclidean = ‖𝐩^i_gt - 𝐩^i_pred‖_2, p_match = 1/N∑_i=1^N 1·(𝐝_Euclidean^i< 𝐝_threshold), where N is the number of images for a sequence per camera module. It should be noted that the method proposed in <cit.> accumulates geometric features on a certain number of consecutive camera frames (50 by default), leading to a limited number of registration try-outs throughout a sequence. For comparison sake, we calculate the match rate as the total number of successful matches divided by the total number of occupancy maps synthesized in a sequence. It takes up to 21 hours to sample polar images for 5000 particles for 320 testing samples; hence, we cannot further increase the density of particles. 
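The two evaluation metrics defined above (APE statistics and match rate) can be computed directly from the predicted and ground-truth positions; a minimal sketch, with array shapes and the 10 m threshold as stated above:

```python
import numpy as np

def registration_metrics(pred, gt, threshold=10.0):
    """Mean/std of the absolute position error and the match rate.
    pred, gt: arrays of shape (N, 2) with predicted and GPS positions in metres."""
    err = np.linalg.norm(pred - gt, axis=1)       # Euclidean APE per camera frame
    return err.mean(), err.std(), float(np.mean(err < threshold))
```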
To apply image-retrieval-based method for on-board localization, it is required to have a pre-stored dataset, specifically in our case, of polar images sampled from all candidate positions on local aerial map enumerating all possible rotations, which is prohibitively expensive storage for on-board system in real-world localization. System runtime Testing is performed on a machine equipped with an AMD Ryzen 9 5900X 12-Core processor and a NVIDIA GeForce RTX 4090. The total time to localize 280 testing samples is 33.32 seconds, equivalent to 0.12 seconds to localize per camera frame. The camera frame rate for our system is 3 per second; therefore, our system is able to support online localization in real-world scenario. §.§ Qualitative Comparison Visualizations of the rendering and registration result can be found in Fig. <ref>. The image rendering head processes the encoded BEV feature with a spatial dimension of 64×28×28 through a set of convolutional layers and 4 upsample layers, as shown in Table <ref>. The final BEV image is an RGB image with a size of 224×224 pixels, representing an area of 51.296×51.296 m^2. The occupancy map reconstructed from <cit.> aggregates geometric features from 50 consecutive frames, of which the coverage may vary for each prediction. §.§ Model Generalization To test the generalizability of the proposed system, we perform cross-sequence tests. Specifically, training with sequences 3 and 8 while testing with sequences 4-7. We report search regions of 100× 100 m^2 in Table <ref>. §.§ Ablation Study In this section we explore the influence of choosing different hyperparameters and BEV space resolutions on the final registration result. Since the aerial map resolution is 0.229 meters, we experiment with the BEV grid resolutions of 0.458 meters and 0.916 meters, corresponding to 2 pixels and 4 pixels on the map, respectively. We also experiment with an increased number of layers and report the results in Table <ref>. Considering the result from ablation study, we choose the resolution of the BEV grid as 0.916 meters, and the number of encoder layers as 1 for Table <ref> and Table <ref>. § CONCLUSION AND FUTURE WORK We present a learning-based system to generate local BEV images combined with NCC for ground vehicle localization in GNSS-denied off-road environments. Our system incorporates the deformable attention module with BEVFormer for a multi-view camera sensor setting, followed by a novel rendering head to generate high-precision BEV images to enable downstream localization task. To enhance our ground vehicle localization system for operation across different seasons, future research will focus on improving the network's ability to learn and generalize features from varied seasonal landscapes. This is essential for deploying our system in real-world scenarios where environmental conditions fluctuate significantly over the year. Additionally, we aim to advance the fidelity of BEV image generation by incorporating techniques such as the diffusion module, inspired by the diffusion transformer <cit.>. This enhancement is expected to refine the detail and precision of the BEV images, thus enriching contextual data for more accurate vehicle localization. Further improvements will also explore the integration of temporal features to accumulate historical data more effectively, addressing current limitations caused by projection adjustments and vehicle pose changes. 
Moreover, explorations can be made on removing dependence on GPS information for training by leveraging local state estimates from VIO. Furthermore, a sophisticated approach to incorporate data from previous frames could significantly improve rendering quality and system performance. In addition, a transition from classic template matching to learnable template matching for vehicle positioning is anticipated to overcome the limitation of NCC's uniform pixel weighting, as shown in Fig. <ref>, and to enable the system to prioritize strategically significant areas, potentially elevating the accuracy of vehicle registration in challenging environments. § ACKNOWLEDGEMENT The authors thank Kaicheng Yu and Wenshan Wang for the fruitful discussions and valuable suggestions throughout the project, and staff members of the National Robotics Engineering Center (NREC) for helping with data collection.
http://arxiv.org/abs/2405.10050v1
20240516123532
Voronoi Graph -- Improved raycasting and integration schemes for high dimensional Voronoi diagrams
[ "Alexander Sikorski", "Martin Heida" ]
cs.CG
[ "cs.CG" ]
1]Alexander Sikorski 2]Martin Heida [1]Freie Universität Berlin, Zuse Institute Berlin [2]Weierstrass Institute for Applied Analysis and Stochastics The computation of Voronoi Diagrams, or their dual Delauney triangulations is difficult in high dimensions. In a recent publication Polianskii and Pokorny propose an iterative randomized algorithm facilitating the approximation of Voronoi tesselations in high dimensions. In this paper, we provide an improved vertex search method that is not only exact but even faster than the bisection method that was previously recommended. Building on this we also provide a depth-first graph-traversal algorithm which allows us to compute the entire Voronoi diagram. This enables us to compare the outcomes with those of classical algorithms like qHull, which we either match or marginally beat in terms of computation time. We furthermore show how the raycasting algorithm naturally lends to a Monte Carlo approximation for the volume and boundary integrals of the Voronoi cells, both of which are of importance for finite Volume methods. We compare the Monte-Carlo methods to the exact polygonal integration, as well as a hybrid approximation scheme. Voronoi Graph - Improved raycasting and integration schemes for high dimensional Voronoi diagrams [ May 20, 2024 ================================================================================================= empty § INTRODUCTION Since they have been discovered in the first half of the 20th century, Voronoi Diagrams <cit.> and Delaunay triangulations <cit.> have become fundamental cornerstones in computational geometry and computational sciences. They are often used for clustering or mesh generation and find applications in many fields such as physics, biology, astronomy as well as archeology, physiology and economics <cit.>. Several methods were suggested for their calculation such as the Bowyer-Watson algorithm <cit.> to directly compute the Voronoi Diagram or the projection in higher dimension and using the quickhull algorithm <cit.> in order to calculate the Delaunay triangulation. The presented work is one out of a series of two papers. It builds heavily upon the Voronoi Graph algorithm by Polianskii and Pokorny <cit.>. The core of the algorithm relies on a raycasting procedure computing the first intersection of a given ray with the Voronoi cells' boundaries based on a nearest neighbour search (NN). In the original version the authors use a continuous binary search to approximate the intersection point whose precision depends on the number of NN evaluations. We replace this subroutine by a deterministic version that terminates with only a few NN evaluations which is not only faster but also exact. We furthermore show how the random search can be modified to obtain an exhaustive search returning the exact Voronoi diagram whilst avoiding the recomputation of already discovered vertices and compare its runtime with the classical qHull algorithm. Note however, that unlike qHull this is an iterative algorithm, which in turn can be used to compute approximate or local Voronoi diagrams as well. The algorithm performs with Nln(N) for N iid distributed nodes and hence scales as good as the Bowyer-Watson algorithm. However, to our knowledge all previous works require the input nodes to be in general position, i.e. there is no point in with more than d+1 nearest neighbors at once. 
In a companion paper <cit.> it is shown how this algorithm can be generalized to any set of points, therefore also facilitating the computation of degenerate Voronoi diagrams. It furthermore generalizes the algorithm from this article to polytopal domains and periodic diagrams. Whereas that article is very mathematical, particularly due to the treatment of nodes in non-general positions, the current work focuses on the fundamental idea of the raycasting algorithm. Based on the raycasting approach we furthermore introduce several methods to calculate or approximate the volume of Voronoi cells and the area of interfaces between cells, as well as several methods to approximate integrals of functions over cells or interfaces. The improved raycasting procedure, the exhaustive search and the Monte-Carlo integration were initially implemented in the lightweight Julia package <cit.>, and in this article we will follow this more simplistic approach for didactic reasons. These ideas were picked up and extended in <cit.>, providing exhaustive documentation, support for points in non-generic position, boundary conditions, the polygonal and heuristic integration rules and more, which are described in more detail in the accompanying publication <cit.>. §.§ Outline In Section <ref> we outline the idea of calculating the Voronoi diagram using the raycasting approach, introduce our new incircle procedure and provide the exhaustive graph traversal that computes the entire Voronoi diagram. In Section <ref> we introduce two different methods to calculate cell volumes as well as interface areas and three different ways to numerically integrate functions over cells and interfaces. We introduce the algorithms and also briefly discuss their respective mathematical background. Finally, in Section <ref> we study the performance as well as the accuracy of our proposed methods. We compare the performance of the incircle raycast to its bisection predecessor and the compute time for the entire Voronoi diagram of our implementations to qHull. We conclude with a numerical comparison of the integration routines. § VORONOI DIAGRAMS VIA RAYCASTING §.§ Notation We start by collecting a few key terms and notations. For a more comprehensive introduction to and summary of the geometric aspects in the context of this algorithm, we recommend the work by <cit.>. As mentioned in the introduction, we assume that the nodes X=(x_1,…,x_N) are in general position. This implies that a vertex ν is defined by d+1 nodes, which we also call generators, all being nearest neighbors to ν at a common distance. We refer to the set of all Voronoi vertices simply as the vertex set and write C_i for the Voronoi cell generated by x_i ∈ X. The Voronoi diagram is dual to the Delauney complex in the sense that every k-face of a Voronoi simplex is dual to a d-k simplex of the Delauney complex, e.g. a (d-dimensional) cell C_i is dual to the 0-simplex X_i, a (0-dimensional) vertex is dual to the d-simplex spanned by its d+1 generators, and a (1-dimensional) edge is dual to the d-1 simplex spanned by the d generators surrounding the edge. In general we denote by σ ∈ 𝒫({1,...,N}) the indices of a set of generators and by X_σ := { X_i | i∈σ} the set of the corresponding generators. Due to the duality we will also sometimes refer to a vertex, an edge, etc. directly in terms of their generators σ. In the context of the algorithm we store each vertex ν as the tuple of its generators and its coordinate, (σ, r). An edge emerging at a vertex (σ, r) can hence be identified with a subset of σ containing d generators.
The edge is characterized by starting at r and pointing away from the single generator x_i, {i}=∖ and along the vector u orthogonal to the hyper plane spanned by the generators X_. We call E()⊂ the set of all d+1 edges emerging at the vertex and E(,i)⊂ E() the set of all d edges emerging at σ and sharing the common generator i∈. Whilst usually each egde has two vertices _^1 and _^2, i.e. =_^1∩_^2 some edges of the Voronoi diagram will become unbounded and are thus specified by a single vertex and a direction only. These edges belong to cells that are unbounded themselves and we denote the set of unbounded edges by ^∞. For convenience in the algorithm, we store all nodes X, vertices := {ν}, edges E and unbounded edges ^∞ in a single data structure, the mesh : =(X,,E,^∞) . §.§ The incircle procedure The fundamental algorithm for the computation of the Voronoi vertices is the procedure. Here, we present a novel algorithm that improves upon its predecessor by requiring fewer nearest-neighbor evaluations and returning the exact vertex rather than an approximation. It is based on the so-called incircle-criterion, which states that a point candidate is a vertex of the Voronoi diagram corresponding to X if and only if a sphere surrounding candidate contains at least d+1 generators but no additional generators within its interior. The algorithm starts from candidate position (usually a vertex) which is equidistant to a set of generators X_η := {X_i|i ∈η}⊂ X specified by η and a normalized search direction ∈, ||||=1 that is orthogonal to the lower dimensional affine space spanned by X_η. In the following we describe the situation of searching for a vertex, i.e. where |η| = d describes an edge and the candidate position consists of a known vertex' coordinates. However, this algorithm works for any |η| ≤ d to find the respective next lower dimensional facet (e.g. a face given the whole cell in the case |η|=1). Note that any vertex r' having η as subset of its generators must lay on the ray specified by r+tu for some t. Furthermore it must satisfy the incircle condition, i.e. there needs to be another generator X_j∈ X, j ∉η such that r' is equidistant to X_σ and X_j and there is no other generator. We restrict our search to t>0, i.e. only into into the direction of u, for the uniqueness of the solution. This is achieved in an iterative manner by looking for possible generators of the desired vertex via a nearest neighbour search (restricted to the proper halfspace by t>0) around the candidate . Once a possible generator candidate is found, the resulting vertex candidate ' = + t has to be equidistant to and a known generator _0∈ X_η, which allows to compute its hypothetical position along the ray[Whilst this projection was already used in <cit.>, they did not take advantage of it in the way we do by starting the nearest neighbour search at the computed position]: |'-_0|^2=|'-|^2 ⟺ |-_0|^2 + 2t < ,-_0> =|-|^2 + 2t < , -> which ultimately yields t = | - |^2 - | - _0|^2/2 < , (-_0)>. We then continue iterating this procedure of nearest neighbour search and projection from the respectively current vertex candidate r'. This continues until at some point no new generator is found which is closer to ' then the previously known ones. This means that the candidate indeed satisfies the incircle criterion and thus indeed is a vertex of the Voronoi diagram. 
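A compact sketch of this raycast, using a k-d tree for the nearest-neighbour queries, is given below. It assumes the initial step `t0` overshoots the sought vertex (in the actual implementation this is handled by the halfspace restriction and the initial heuristic described next); a returned generator index of `None` indicates that no closer generator was found, i.e. an unbounded edge.

```python
import numpy as np
from scipy.spatial import cKDTree

def raycast_incircle(r, u, x0, X, tree, t0=1e3, eps=1e-9):
    """Walk from the vertex r along the unit direction u until the incircle
    criterion is satisfied, i.e. until no generator is strictly closer to the
    candidate than the known generator x0.  X is the (n, d) array of all
    generators and tree = cKDTree(X)."""
    t, j = t0, None
    while True:
        candidate = r + t * u
        dist, i = tree.query(candidate)                  # nearest generator to the candidate
        if dist >= np.linalg.norm(candidate - x0) - eps:
            return t, j, candidate                       # incircle criterion holds
        x = X[i]                                         # strictly closer generator found
        # project onto the bisector of x0 and x, cf. the formula for t above
        t = (np.dot(r - x, r - x) - np.dot(r - x0, r - x0)) / (2.0 * np.dot(u, x - x0))
        j = i
```

Starting from a known vertex (σ, r), shooting one such ray per edge yields the neighbouring vertices, which is the building block of the graph traversal described below.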
That this routine indeed terminates follows from the fact that the search radius is decreasing over the iterations and only a finite number of candidates is available. For a mathematically rigorous formulation and proof of this argument, even for nodes in degenerate positions, we refer to <cit.>. The search performance can be improved by a good initial guess. If the start point for the search was the desired target vertex the algorithm would terminate immediately. For a heuristic, let us assume that the generators of the desired target vertex form a regular simplex, i.e. they are equidistant to each other. We can then use the relation between the circumsphere's radius r_d and the edge length l in for a regular d-simplex, r_d = l√(d/2(d+1)), to construct the vertex position. Using the Pythagoras' theorem (see Figure <ref>), h_d^2 + r_d-1^2 = r_d^2, and solving for the height h_d of the new vertex, i.e. the distance between the center of the d-1-simplex and the d-simplex we arrive at h_d=r_d-1/√((d+1)(d-1)). In Algorithm <ref>, InitialHeuristic we shift the candidate point onto the plane spanned by the d prescribed generators σ by an orthogonal projection, compute the radius r_d-1 as the distance of this point to a generator and walk length h_d in the provided direction u. §.§ The VoronoiGraph algorithm Using the algorithm we are in the position to travel along the edges of the Voronoi Diagram and compute new vertices from old ones, as well as discovering them ab initio by shooting rays onto successively lower-dimensional faces, just as in the original work <cit.>. In that work they also introduced a random walk algorithm that approximates the Voronoi Diagram. They rightfully argue that the complexity of Voronoi diagrams explodes in higher dimension (see Section <ref>) and hence only approximate Voronoi diagrams can be used in that setting. Of course our incircle can be used in that setting as well, inheriting any increase in performance. However, as we will see in Section <ref>, we can use the improved as well to compute the exact Voronoi diagrams with competitive performance. To this end we modify the original random walk to perform an exhaustive search which successively visits all vertices' neighbours and thus explores the whole diagram. We furthermore keep track of the edges in the sense that we count how many vertices of each edge we have visited so far. This does not only save walking any edge twice. As it may happen that two vertices belonging to a single edge can be explored from different position this makes the along that edge completely obsolete. Ignoring boundary effects, in d dimensions every vertex has d+1 edges and any edge belongs to 2 vertices, so that the number of edges equals d+1/2 times that of the vertices. By keeping track of the edges we discover every vertex only once, resulting in n calls to , as opposed to n(d+1)/2 when not keeping track of the edges. The complete procedure is given in Algorithm <ref>. Note that the dictionary keeping track of the edges may consume a lot of memory. In <cit.> the cells C_i are explored in sequential order. This allows to reconstruct the edge counts for each cell individually and thereby reduces the memory footprint of storing the edges. §.§ Remark on the computational complexity As noted before the computation of the full Voronoi diagrams in high dimensions is an inherently hard problem. 
In <cit.> a lower bound for the expected number of vertices per cell is provided in the limit of n→∞ uniformly distributed nodes X. This lower bound is proportional to the following constant for k=d, suggesting superexponential growth: C_k,d≥2π ^ k/2 d^k-1/k(k+1)β(dk/2, d-k+1/2)^-1(Γ(d/2)/Γ((d+1)/2))^k . Here β and Γ denote the beta resp. gamma functions. In the following table we computed the expected number of vertices for d=1, ..., 10: Using HighVoronoi.jl with periodic boundary conditions we arrive at close empirical numbers: As one can verify qualitatively, the number of vertices in the above data grows by a factor of d+2 from d⇝ d+1, while the number of neighbors increases by a factor between 2.2 and 2.6. In this regard the sheer amount of vertices to compute for an exact Voronoi diagram in high dimensions, somewhere around d>10, will prohibit a solution. In special cases, i.e. when staying away from the limit of n→∞ uniform nodes, we are still able to compute the exact solution. For example we can compute the diagram for n=100 nodes in dimension d=9, where the number of neighbours and hence also of vertices is limited by the data. Note, however, that in that case probably almost all cells neighbour each other at the boundary of the diagram. Whether such a computation makes any sense depends on the application. However, our improved raycasting procedure can still be used for the approximate Voronoi graph computation suggested in <cit.> as well as for the Monte-Carlo estimators below. As we will see below, the full exploration routine also provides a conceptually simple yet equally performant alternative to state-of-the-art methods such as qHull, and it proved useful for the development and testing of the (experimental) correctness of the proposed incircle procedure. § INTEGRATION In this section we introduce three different approaches for handling area and volume integrals. We start with the most general, a Monte-Carlo approximation using random rays, which allows estimating areas and volumes as well as integrals of functions f over them. We then show how to compute the areas and volumes exactly by recursively constructing adequate pyramids and computing their size with the Leibnitz rule, providing an improved algorithm for the calculation of the corresponding determinants. This in turn leads to a method providing the exact integral of a piecewise linear interpolant of a given function. Finally we combine both approaches to obtain the heuristic Monte-Carlo method, which uses Monte-Carlo to estimate the area and computes the linear interpolant's integrals based on that estimate. §.§ Monte-Carlo Integration The basic idea of the Monte-Carlo method is to sample directions uniformly from the sphere and determine the intersection of the corresponding rays with the cell's boundary. By weighting the function evaluations at these positions via the change of variables we obtain a classical Monte-Carlo estimator for the integrals. We start by sampling a random direction uniformly on the sphere S^d-1. This can be achieved by renormalizing a draw from a multivariate Gaussian ŷ∼𝒩(0, I(d)): y ∼𝒰(S^d-1) Using the raycasting algorithm at x_i in direction y we obtain a distance l_i(y)>0 such that x_i+y l_i(y)∈∂ C_i∩∂ C_j for some j, and we have the continuous bijection ϕ_i: S^d-1→∂ C_i, ϕ_i(y) = x_i + y l_i(y) . We can compute the size of the infinitesimal area element hit by the ray in direction y based on its distance to x_i and its angle to the unit vector (x_j-x_i)/|x_j-x_i|.
According to the changes of variables formula the ratio between the boundary and sphere area in direction y is given by A/ S (y) = l_i^d-1(y) |_ij· y|^-1 Where A resp. S represent the surface measure of the cell boundary resp. the sphere. This ratio allows us to estimate the surface integral of a function f by Monte Carlo estimation with N samples (y_j)_j=1,…,N∼𝒰(^d-1) via ∫_∂ C_i f(y) A(y) = ∫_^d-1 f(ϕ_i(y)) A/ S(y) S(y) ≈∑_j=1^NS_d-1/N A/ S(y_j) f(ϕ_i(y_j)) where S_d-1 = 2π ^d/2/Γ (d/2) is the surface area of the sphere ^d-1. Note that from the procedure we also obtain the neighbouring cell C_j such that this sum naturally splits into the contributions of the different faces C_i∪ C_j actually estimating the individual integrals ∫_∂ C_i f dA. This method naturally generalizes to volume integration by additionally sampling along each ray. Similar to before, let (y_j,t_j) ∼𝒰(^d-1×(0,1)). Then ψ_i: ^d-1× (0,1) → V_i ψ_i(y, t) = x_i + t y l_i(y) is a bijection between our sample space and C_i∖{x_i}. With the change of variables formula and Vol_d=S_d-1/d for the spheres volume we derive the infinitesimal volume m_i(y, t) = S_d-1/d t^d-1 l_i(y)^d so that ∫_V_i dV(x) = ∫_^d-1× [0,1] m_i(y, t) y t ≈∑_j=1^NS_d-1/dN l_i(y_j)^d ∫_V_i f(x) dV(x) = ∫_^d-1× [0,1] f(ψ_i(y, t)) m_i(y, t) y t ≈∑_j=1^NS_d-1/dN f(ψ_i(y_j, t_j)) t_j^d-1 l_i(y_j)^d We provide the a routine computing area, volume, surface and volume integrals alltogether in Algorithm <ref>. Note that depending on the cost of evaluation f relative to the cost of the calls the evaluation of the Monte-Carlo integrals can be adjusted. If is costly we can make use of a single ray for multiple volume integral evaluations (as done in the suggested algorithm). If on the other hand f is costly we can integrate f with few samples but reweight the resulting integral with more samples estimating only the area/volume. For example for the area we have ∫_∂ C_i dA ≈ F_δA^*/A where A^* is a more precise estimate of the area obtained from more samples without f evaluations. Note further that this method does not need prior knowledge of the Voronoi diagram since it relies on the method only and hence is applicable even to very high dimensions. In the case where the whole Voronoi diagram is already pre-computed one can speed up the calls by running its nearest neighbour search over the known neighbours only. §.§ Leibnitz Integration The basic idea is to recursively decompose the volume of a convex d-dimensional polytope into pyramids over a lower-dimensional base surfaces. To be more concrete, consider a covex polytope P with x in the interior of P. If X_P denotes the vertices of the polytope, we could take e.g. the centroid x=|X_P|^-1∑_y∈ X_Py. However, if P is the Voronoi cell of x_i∈ X we simply chose x=x_i. We then observe that the boundary of P can be decomposed into K different d-1 dimensional polytopes (P̃_i^1)_i=1,… K and the volume of P decomposes into K pyramids with base P̃^1_i and apex x and volume 1/d|P̃_i^1| dist(x,P̃_i^1). On the other hand, we can calculate each d-1 dimensional mass |P̃_i^1| in terms of its centroid x_(i)^1 and its d-2 dimensional ”boundaries” P̃_j^2. This can be iterated up to the d-(d-1) dimensional P̃^d-1_k that is simply given by the distance of two vertices that span one edge. That said, we can associate with the vertices r_1 and r_2 of the final edge as well as x_(i)^1,…,x_(j)^d-2 a d-dimensional tetrahedron and the volume of P is simply the sum of all the volumes of these tetrahedrons by the above iterative argument. 
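Before turning to the determinant computation, the Monte-Carlo estimators of the previous subsection can be summarised in the following sketch. For readability the raycast is done by a brute-force scan over all generators rather than by nearest-neighbour queries, and the function and variable names are ours.

```python
import numpy as np
from math import gamma, pi

def cell_raycast(i, y, X):
    """Distance l from X[i] along the unit direction y to the boundary of its
    Voronoi cell and the index j of the neighbouring cell (brute force)."""
    xi = X[i]
    t_best, j_best = np.inf, -1
    for j, xj in enumerate(X):
        denom = 2.0 * np.dot(y, xj - xi)
        if j == i or denom <= 1e-14:
            continue
        t = np.dot(xj - xi, xj - xi) / denom      # |x_i + t y - x_j| = |x_i + t y - x_i|
        if 0.0 < t < t_best:
            t_best, j_best = t, j
    return t_best, j_best

def mc_cell_integrals(i, X, f, n_rays=1000, rng=None):
    """Monte-Carlo estimates of |C_i|, the volume integral of f, and the
    per-neighbour interface areas and surface integrals of f."""
    rng = rng or np.random.default_rng()
    d = X.shape[1]
    S = 2 * pi ** (d / 2) / gamma(d / 2)          # surface area of the unit sphere
    vol = vol_int = 0.0
    area, area_int = {}, {}
    for _ in range(n_rays):
        y = rng.standard_normal(d)
        y /= np.linalg.norm(y)                    # y ~ uniform on the sphere
        l, j = cell_raycast(i, y, X)
        if not np.isfinite(l):
            continue                              # cell is unbounded in this direction
        nu = (X[j] - X[i]) / np.linalg.norm(X[j] - X[i])
        w = l ** (d - 1) / abs(np.dot(nu, y))     # dA/dS, change of variables
        area[j] = area.get(j, 0.0) + S / n_rays * w
        area_int[j] = area_int.get(j, 0.0) + S / n_rays * w * f(X[i] + l * y)
        vol += S / (d * n_rays) * l ** d
        t = rng.random()                          # one volume subsample per ray
        vol_int += S / (d * n_rays) * f(X[i] + t * l * y) * t ** (d - 1) * l ** d
    return vol, vol_int, area, area_int
```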
We calculate the volumes of these tetrahedrons using the relation between volume and a determinant. However, the cost of calculating determinants grows with d!· d. The following approach of iterating the Leibnitz rule will lower these costs to a bit more than 2d^2 (we do not provide explicit value here), i.e. much smaller than d!· d for d>2. §.§.§ The Leibnitz rule Suppose _1,…_d∈ and consider =(_1,…,_d) be the matrix with columns _i. If linearly independent, the vectors _k define a paralellotope with volume ||. Moreover the d-dimensional pyramid with apex 0 and the base defined by the vectors _d has volume 1/d!||. Given h, the distance of 0 to the plane defined by _1,…_d, we find that the d-1-dimensional area defined by _1,…_d is 1/h(d-1)!||. In particular, calculating the exact volume of such a pyramid or the area of the base boils down to a calculation of of . We denote _1,j the submatrix of where the first column and the j-th row have been deleted. Then we find according to the Leibnitz rule: =∑_j (-1)^1+j_1,j . Now, if we write T^d_k={{j_1,…,j_k}⊂{1,…,d}} for the set of ordered subsets of {1,…,d} with precisely k elements, we define for k∈{1,…,d} and τ∈ T^d_k the matrix _k,τ which emerges from erasing the first k columns and the rows τ. Then _k,τ=∑_j=1^d (-1)^1+j_k+1,τ∪{j} , _d, τ=(_d)_j∉τ . Based on the above observations we interpret as a data set storing the so called minors _k,τ. It is important to observe that _k,τ depends only on _k+1,…,_d. In particular we may formulate the following algorithm: Suppose (,_j,j) has been called from j=d downto j=2 and for _2,…_d we finally obtain =(,_1,1) §.§.§ Integration of a piecewise linearly interpolated function We now show how the volume integral of a linear function over a cone can be decomposed into a lower dimensional integral over a base surface and a pointwise evaluation at its apex. Let B̃⊂^d-1 – identifying it with B=B̃×{0}⊂ – and let x_0=(0,…,0,1)∈. Let f̃: B̃→ be continuous, let f_0∈ and write x=(x̃, x_d) for x∈, x̃∈^d-1, x_d∈. Let C=conv(B∪{x_0})⊂ be the cone with base B and apex x_0 and define f:C→ by f(x̃,x_d)= (1-x_d)f̃(x̃)+x_d f_0, the linear interpolation between the values of f̃ on B and the value f_0 in x_0. Then we find ∫_C f x = ∫_0^1∫_(1-x_d)B̃ (1-x_d)f̃(x̃) +x_df_0 x̃ x_d = ∫_0^1∫_B̃ (1-x_d)^df̃(x̃) x̃ x_d +∫_0^1|B̃|(1-x_d)^d-1 x_df_0 x̃ x_d = 1/d(d/d+1∫_B̃f̃(x̃) x̃+1/d+1|B̃|f_0) This formula is by its nature iterative over dimensions of subsets as is implemented accordingly in steps 1.(f) and 2.(f) of the algorithm below. In particular, we can apply (<ref>) when we perform the iteration outlined at the beginning of Section <ref> evaluating f_0 in the centroids. By the iterative nature of this leads to the precise integral of a function F that coincides with the original f in all vertices and on x_i as well as on all centroids r_m calculated in 2.(b) (corresponding to x_i^k at the start of the section), and is linearly interpolated in between. The absolute error of the integral on the cell i is bounded from above by |C_i| sup_x∈ C_i|f”(x)| diam(C_i)^2 due to Taylors formula and the relative error is bounded by inf_x∈ C_i|f(x)|^-1 sup_x∈ C_i|f”(x)| diam(C_i)^2 . §.§.§ The integration algorithm Using the data structure we can formulate our integration algorithm. The above observations all accumulate in the following result: Given a complete Voronoi diagram the algorithm (i,,f) yields for each cell C_i the exact volume, as well as the exact area of all interfaces with its neighbors. 
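The volume and area relations underlying this decomposition are easy to verify numerically; the following helpers use a plain `numpy.linalg.det` rather than the minor-caching scheme described above and compute the volume of one such tetrahedron and the measure of its base.

```python
import numpy as np
from math import factorial

def simplex_volume(apex, base_vertices):
    """Volume of the d-simplex spanned by `apex` and the d `base_vertices`:
    |det M| / d!, where the columns of M point from the apex to the base."""
    M = (np.asarray(base_vertices, dtype=float) - np.asarray(apex, dtype=float)).T
    d = M.shape[0]
    return abs(np.linalg.det(M)) / factorial(d)

def base_measure(apex, base_vertices, h):
    """(d-1)-dimensional measure of the base, given the distance h of the apex
    to the base plane: |det M| / (h * (d-1)!), i.e. d * volume / h."""
    d = np.asarray(apex).shape[0]
    return d * simplex_volume(apex, base_vertices) / h
```

Summing these tetrahedron volumes over the decomposition of a cell reproduces its total volume.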
§.§ Heuristic Monte-Carlo integration

We finally have a look at a modified Monte-Carlo integration that is able to save time significantly when it comes to the integration of functions. In this case, we only need to compute the d-1 dimensional interface areas between x_i and its neighboring cells x_j first and infer from these areas the volume via

1/d ∑_j 1/2 |x_i-x_j| |∂ C_i∩∂ C_j| .

We then take all K_ij vertices {ν_1,…,ν_K_ij} belonging to the interface of C_i and C_j to calculate its centroid y_ij=K_ij^-1∑_k ν_k, like in the Leibnitz approach above. Furthermore, using the computed area A_ij of the interface, we use the following modification of (<ref>) to define an integral of f(x) over A_ij:

∫_A_ij f(x) dA(x) ≈ d/d+1 K_ij^-1∑_k f(ν_k) A_ij + 1/d+1 f(y_ij) A_ij .

This mimics an integral over a linear interpolation of f between the vertices of the interface i,j and the value at the centroid of A_ij, in the same way as explained above. Applying once more (<ref>) we can define an integral over the whole cell. The resulting integral is neither a Monte-Carlo integral in the sense of Section <ref> nor the exact integral of a linear interpolant like in Section <ref>. However, it does not require exhaustive calculations of determinants in high dimensions (like in Leibnitz), nor does it require as many function evaluations per cell as Monte-Carlo. Instead, it combines the advantages of both approaches.
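The following Python sketch spells out this bookkeeping for one cell, assuming the generator positions, the neighbouring generators, the interface vertex lists and the interface areas A_ij (e.g. from the ray-based Monte-Carlo above) are already available. The cell-level formula, which combines each interface integral with the apex value f(x_i) through the pyramid decomposition, is our reading of "applying once more (<ref>)" and should be checked against the reference implementation.

import numpy as np

def hmc_interface_integral(f, verts_ij, area_ij, d):
    # heuristic interface rule: d/(d+1) * vertex average + 1/(d+1) * centroid value, times A_ij
    centroid = verts_ij.mean(axis=0)                 # y_ij
    vertex_mean = np.mean([f(v) for v in verts_ij])
    return (d / (d + 1) * vertex_mean + 1 / (d + 1) * f(centroid)) * area_ij

def hmc_cell_integral(f, x_i, neighbours, interfaces, areas):
    """Volume and integral estimates for the cell C_i.

    neighbours: list of neighbouring generators x_j; interfaces: list of
    (K_ij, d) vertex arrays; areas: list of interface areas A_ij."""
    d = len(x_i)
    volume, integral = 0.0, 0.0
    for x_j, verts_ij, area_ij in zip(neighbours, interfaces, areas):
        h_ij = 0.5 * np.linalg.norm(np.asarray(x_i) - np.asarray(x_j))  # apex-to-interface distance
        I_ij = hmc_interface_integral(f, verts_ij, area_ij, d)
        volume += h_ij * area_ij / d                                    # pyramid volume
        # pyramid with base A_ij and apex x_i, combined via the cone formula (<ref>)
        integral += h_ij / d * (d / (d + 1) * I_ij + 1 / (d + 1) * area_ij * f(x_i))
    return volume, integral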
§ COMPUTATIONS AND EVALUATION OF PERFORMANCE

In this section we carry out numerical experiments demonstrating our techniques. We demonstrate the effectiveness of our incircle approach by contrasting it with the original bisection method. We then show that our exhaustive search matches the performance of the state of the art qHull solver. Finally we compare the performance and accuracy of the proposed integration methods to each other and conclude with a discussion of their domains of application.

§.§ Performance of the incircle

We evaluate the performance of the previously suggested bisection search <cit.> against the newly proposed incircle search (Section <ref>). It is important to keep in mind that the former search method is approximate and depends on the accuracy parameter ε, which specifies the acceptable absolute error for the distance t in (<ref>). As a result, in order to obtain results that are feasible for our experiment, we must decrease ε for higher dimensions (because the distances increase). Therefore, we compare the incircle search with and without the proposed initial point heuristic against the bisection search for two different tolerances, ε = 10^-4 or ε = 10^-8. Since the nearest neighbour search consumes the majority of the computation time, we evaluate their effectiveness by counting the number of nearest neighbour calls per vertex across various dimensions for 1000 generators distributed uniformly in the unit cube. We observe that the incircle search with heuristic reduces the number of nearest neighbour calls by a factor of 3.4 to 2.5, slowly decreasing with the dimension. The heuristic itself contributes a performance increase of about 10%.

It should be noted that if ε is not chosen small enough, errors can accumulate to the point where "spurious vertices" are discovered, impeding the exhaustive search's ability to converge. The number of vertices found by the approximate and the deterministic computation consistently differed across the experiments, indicating that even stricter tolerances (and hence more nearest neighbour calls) would be necessary to obtain the qualitatively exact Voronoi topology. We furthermore compare the raw compute times of the state of the art qHull solver (using the Delaunay method) and both implementations of our algorithm. We do this by measuring the time to generate the Voronoi diagram (resp. Delaunay triangulation) for different numbers of random generators, n=100, ..., 300 000 in dimension 2 and n=100, ..., 3000 in dimension 6, respectively. In Figure <ref> we see that our implementation (VoronoiGraph.jl) matches qHull in performance. Whilst HighVoronoi is clearly slower, it guarantees correct handling of degenerate tessellations and provides more flexibility in terms of boundary conditions etc. The spikes in its runtime are incurred by the special handling of possibly degenerate vertices. To summarize, the incircle search not only provides an exact instead of an approximate solution, but does so at a fraction of the cost, and with this new method the proposed exhaustive search matches the performance of established Voronoi computation algorithms.

§.§ Performance of the integration routines

We will now evaluate the performance and quality of our suggested integration algorithms. For simplicity of presentation we write MC for Monte-Carlo integration, P for Leibnitz integration (P stands for Polygon or Polytope) and HMC for heuristic Monte-Carlo integration. We start by comparing the time needed to compute integrals using the respective methods and continue with a discussion of the approximation quality. For the calculations we used a package that supports the restriction of the volume calculation to the unit cube, hence all values for volumes, interfaces or integrals in our simulation are finite. When we write about a particular script we always refer to a file with the corresponding name in the repository.

§.§.§ Computation time

In the corresponding script we distribute 1000 nodes iid in the unit cube for each dimension d=2 to d=5 and calculate the corresponding Voronoi diagrams. As integrand we choose the function f(x)=sin(x_1^2), which is a rather smooth function yet bearing some computational cost. We then compute

* the Monte-Carlo (MC) and Polygonal (P) volumes and areas

* as well as the Monte-Carlo, Polygonal and heuristic Monte-Carlo (HMC) volume and area integrals for f(x).

Since the cost of MC strongly depends on the number of rays, n, and (for the integration of a function, eq. (<ref>)) the number of volume subsamples, m, we performed simulations for n=10^3 and n=10^4 rays as well as for m=2 and m=10. The number of rays was also applied to HMC for a meaningful comparison, but we note that the choice of m has no effect on HMC. We restricted the analysis to d≤5 since in d=6 the Leibnitz computation already takes more than 1 hour. As we can see in Figure <ref> the costs of P steeply increase from 4 to 5 dimensions. This is no surprise as the number of edges, which get subdivided to derive the simplicial complex, increases superexponentially with d. As is to be expected, the cost of the Monte-Carlo integration scales with its parameters n and m. The HMC function integration builds upon the Monte-Carlo area computation.
We see that the HMC integration takes about the same time as the MC area/volume computation, indicating that the additional cost for evaluating f at the vertices is negligible. However, when looking at the MC volume integration we observe a clear increase in cost, which is explained by the large number of additional function evaluations n× m.

§.§.§ Approximation quality of volumes and volume integrals

Note that the Polygonal method P computes the exact volumes of the cells. To the left of Figure <ref> we compare the relative error of the volumes approximated with MC in d=5 dimensions for N=10 000 nodes. Since we have no access to the true volume integrals, we can only compare the three approximative integration methods to each other. We expect P to be more accurate than HMC in general, since both integrate linear approximations but P uses more interpolation points and exact instead of approximate volumes. We therefore provide the histograms of the relative deviations of P vs. MC and P vs. HMC to the right of Figure <ref>. It appears that the MC integrals systematically overestimate while the HMC integrals underestimate relative to the P integration method by around 2%. It is not clear how this happens, but it might be due to the shape of the function sin(x_1^2). We also observe that the over- or underestimation goes hand in hand with a tail of the distribution on the respective side.

§.§.§ Approximation quality of the surface area

The error for individual interface integrals, i.e. over the surfaces between two cells, is expected to be much higher than for the volume integrals themselves, given that the rays used for a full cell estimate distribute their contributions over the individual interfaces. For example, when computing a full surface integral with 10 000 rays (samples), the integral over an interface that covers only 1% of the total surface relies on merely 100 rays. Correspondingly, in this particular case the variance can be expected to be 100 times larger (a factor 10 in the standard deviation) according to the law of large numbers. As mentioned above, the P method is exact concerning volume and area (up to machine precision). When it comes to integral computation, it is as exact as a piecewise affine approximation of the original function, i.e. it can be expected to be as precise as 1/3 f''(x) diam(C_i)^3 on cell C_i. In the following we compare the variance of the relative errors for the area estimation. The data is computed over 4 different realizations of Voronoi grids for 1 000 random nodes in dimensions d=2,…,5 and differing numbers of Monte-Carlo rays, 1 000, 2 000, …, 10 000. We furthermore grouped the faces into 30 bins according to their relative contribution to the total cell surface, I:=|σ_ij|/|∂ C_i|, for 1%≤ I ≤ 30% [See the file test-integration-area-accuracy.jl]. The results are shown in Figure <ref>. We can observe the expected 1/√(n) dependence of the noise on the number n of Monte-Carlo rays. Regardless of the dimension or number of rays in MC, the deviation of the MC area from the P area is large for small area fractions, decreases to a minimum with increasing area fraction and from dimension d≥3 rises again for large area fractions. The minimum of this curve seems to be close to 1/(2d), which indicates that the Monte-Carlo estimator works best for areas close to that of a face of a cube. The explanation we have is that small area fractions are hard to measure precisely by a random algorithm, as the probability of obtaining a representative sample is low.
On the other hand, a high area fraction means that the area expands into the "outskirts" of the respective piece of surface, where the measurement becomes imprecise due to the flat angle at which a ray intersects the area. Note however that, depending on the application, the uncertainty of the individual face areas or even of their function integrals may average out over patches of neighbouring cells.

§.§.§ Approximation quality of surface integrals

In order to compare MC, HMC and P integrals over interfaces we follow a similar strategy as for the areas. Using the corresponding code we collect data for 1 000 nodes in dimensions 1–5 for pairs of methods on the same Voronoi grid. We then compare the integral values I_1 and I_2 of both methods on each interface and determine the deviation as 1+2(I_1-I_2)/(I_1+I_2). The quality of the interface integrals of the MC and HMC methods decreases with the dimension due to the increase of interfaces per cell according to Table <ref> above, resulting in fewer rays per interface and decreasing resolution. Hence in our presentation and analysis we focus on d=5. In Figure <ref> we provide the graph for a surface integral fraction of 7% in d=5 for MC vs. HMC, MC vs. P and HMC vs. P. We observe that the profiles do not differ significantly. Hence we can conclude that they more or less provide the same quality of approximation of the true integral value. For smaller interface integral fractions, however, we observe a difference. In Figure <ref> we provide the same comparison for a percentage of 1% or less and find that HMC and MC compare to P in the same way, but the difference between HMC and MC has a significantly worse mean deviation σ. We note here that the package makes it possible to take an MC integration and calculate HMC data for exactly the same volume and interface data. When it comes to higher percentages of the integral value per interface, we point out that Figure <ref> indicates that the low number of samples in this case makes the data unreliable.

§.§ Comparison of the suggested methods

Of the three suggested integration methods, Monte-Carlo (MC), Polygonal (P) and heuristic Monte-Carlo (HMC), only P is able to compute the areas and volumes exactly. It furthermore allows exact integration of linear interpolations of a function f. The interpolation nodes are the vertices of the simplicial complex generated from the Voronoi diagram, i.e. even finer than the Voronoi diagram itself, and the error scales with the cell size via 1/3 f''(x) diam(C_i)^3. This allows for good accuracy but scales extremely badly with the dimension. On the other hand we have MC, which itself scales independently of the dimension but does not use any additional structure of the function f and is therefore expected to provide less accurate estimates for the same number of f evaluations. However, it is known that the error scales with the number of samples N via 1/√(N). Finally there is HMC, which estimates the areas/volumes via Monte-Carlo but averages over the function evaluations at the vertices, mimicking the averaging of P over the simplices. We expect this to be more efficient than MC and P for medium dimensions, where P is infeasible but the number of vertices is still low enough to benefit from the interpolation-like aspects. However, we have to admit that the averaging is merely heuristically motivated and we do not know of any precise error estimates. We summarize this in the following table.

§ CONCLUSION

The fundamental contribution of this paper is the new incircle algorithm (Algorithm <ref>).
It improves on the original bisection <cit.> not only by providing the exact positions of the vertices, but also by reducing the required number of nearest neighbour calls, which make up the major cost of the computation, by a factor of 3. Building on this we furthermore introduce a new exhaustive search that allows the computation of the full and exact Voronoi diagram in medium dimensions, matching or even exceeding the performance of the state of the art algorithm qHull. We furthermore introduced three different integration methods. The most general is a pure Monte-Carlo method using ray sampling to approximate surface and volume integrals even in very high dimensions. Then we have a polygonal Leibnitz rule based method that subdivides the cells into simplices and gives exact results for areas, volumes and linear interpolants but is limited to low dimensions. We combine these approaches into a heuristic Monte-Carlo method which integrates via function evaluations at the vertices combined with Monte-Carlo area/volume approximation and is suitable for medium dimensions.

§ ACKNOWLEDGEMENTS

This research has been funded by Deutsche Forschungsgemeinschaft (DFG) through grant CRC 1114 "Scaling Cascades in Complex Systems", Project Number 235221301, Projects B05 "Origin of scaling cascades in protein dynamics" and C05 "Effective models for interfaces with many scales".

§ APPENDIX: ALGORITHMS
http://arxiv.org/abs/2405.08879v1
20240514180039
A Diffused Background from Axion-like Particles in the Microwave Sky
[ "Harsh Mehta", "Suvodip Mukherjee" ]
astro-ph.CO
[ "astro-ph.CO", "hep-ph", "hep-th" ]
The nature of dark matter is an unsolved cosmological problem and axions are one of the weakly interacting cold dark matter candidates. Axions or ALPs (Axion-like particles) are pseudo-scalar bosons predicted by beyond-standard model theories. The weak coupling of ALPs with photons leads to the conversion of CMB photons to ALPs in the presence of a transverse magnetic field. If they have the same mass as the effective mass of a photon in a plasma, the resonant conversion would cause a polarized spectral distortion leading to temperature fluctuations with the distortion spectrum. The probability of resonant conversion depends on the properties of the cluster such as the magnetic field, electron density, and its redshift. We show that this kind of conversion can happen in numerous unresolved galaxy clusters up to high redshifts, which will lead to a diffused polarised anisotropy signal in the microwave sky. The spectrum of the signal and its shape in the angular scale will be different from the lensed CMB polarization signal. This new polarised distortion spectrum will be correlated with the distribution of clusters in the universe and hence, with the large-scale structure. The spectrum can then be probed using its spectral and spatial variation with respect to the CMB and various foregrounds. SNRs of ∼ 4.36 and ∼ 93.87 are possible in the CMB-S4 145 GHz band and the CMB-HD 150 GHz band respectively for a photon-ALP coupling strength of g_a γ = 10^-12 GeV^-1 using galaxy clusters beyond redshift z = 1. The same signal would lead to additional RMS fluctuations of ∼7.5 × 10^-2 μ K at 145 GHz. In the absence of any signal, future CMB experiments such as Simons Observatory (SO), CMB-S4, and CMB-HD can put constraints on the coupling strength better than the current bounds from the particle physics experiment CERN Axion Solar Telescope (CAST).

§ INTRODUCTION

The Cosmic Microwave Background (CMB) is the primordial radiation that surrounds us and is a remnant of the hot Big Bang in the early universe. The initially tightly coupled radiation and baryons were decoupled due to the expansion of the universe. When the optical depth at the time of decoupling was significantly lowered, the photons were able to travel large distances and left the photon-baryon fluid at the last scattering surface at a redshift z ≈ 1089 <cit.>. These photons are now observed as the CMB and are highly redshifted with a monopole temperature of 2.7255 K <cit.>. Many processes, like Doppler shifts, lensing and scattering, affect the CMB at higher multipoles. The power spectrum of the CMB is very well known from the correlation between temperature and polarization power spectra <cit.>. Cosmic Background Explorer (COBE), Wilkinson Microwave Anisotropy Probe (WMAP) and Planck have provided invaluable information about the CMB and our universe. Several upcoming experiments such as Simons Observatory (SO) <cit.>, CMB-S4 <cit.>, and CMB-HD <cit.> will be making even higher resolution CMB observations in the coming years, with an emphasis on probing the polarized CMB and spectral distortions, which refer to departures from the Planck black-body spectrum <cit.>.
The CMB photons as they pass through galaxy clusters, can undergo conversions to axion-like particles (ALPs) if ALPs exist in the universe, irrespective of whether they constitute a fraction of dark matter. These conversions can be resonant or non-resonant and are well studied <cit.>. The resonant conversions are dominant and require the effective masses of photon and ALP to be equal. This sets the stage for the probability of conversion that depends on the magnetic field and electron density profile, as well as the redshift of the cluster. The conversion leads to a new type of polarized spectral distortion in the CMB. With the ongoing and upcoming experiments, we will be able to gain further insight into the anisotropies and distortions in the CMB. In this paper, we study the capabilities of these experiments in being able to probe this ALP distortion signal from unresolved galaxy clusters. A multi-band approach can be used to constrain the ALP coupling constant from clusters that are resolved in multiple frequency observations. Radio, X-rays and optical surveys can provide information about the cluster magnetic field, electron density profiles and their redshifts respectively <cit.>. The ALP distortion signal can then be probed either using the power spectrum of the region around the cluster or via a pixel signal-based approach. These will be considered in a follow-up analysis <cit.>. The bounds on coupling constants may be independently revisited using the analysis for resolved clusters <cit.> or from unresolved ones, which we deal with in this work. The unresolved clusters refer to the clusters that are not resolvable in the required electromagnetic (EM) bands (such as radio, microwave, optical, and X-ray) from which information about the electron density, magnetic field, and redshifts can be inferred. These clusters need to be resolvable also in the microwave region of the EM spectrum though, so that polarization information is measured using CMB experiments. Most of the contribution to the diffused background comes from the high redshift z>1 clusters for which there is a CMB measurement of the polarization signal, but no information in radio, optical, and X-ray to know the astrophysical properties and source redshift of the galaxy cluster. These signals originating from these high redshift clusters will contribute to an ALP background signal in CMB across the sky. In this work, we show that a new kind of CMB polarised fluctuation can appear from unresolved galaxy clusters. We have studied a halo-based ALP power spectrum model is this analysis applicable to different masses of ALPs. In a halo model, the dark matter halos are biased tracers of the matter field in the universe. The galaxy clusters are embedded in these dark matter halos. These clusters are sites of photon-ALP resonant conversions. Thus, the large-scale structure of the universe provides a way of probing this ALP signal from unresolved clusters, using correlations between the signals at different locations in the sky. This correlation attributes its origin to the matter distribution in the universe. This correlation can be modeled to be within the cluster (one-halo term) or in two different clusters (two-halo term). The one-halo term or the Poissonian component dominates at smaller angular scales, while the two-halo term or the clustering component contributes at large angular scales and may or may not dominate the Poissonian component<cit.>. 
The ALP background signal will depend on the ALP coupling constant, frequency of observation, ALP masses, cluster distribution in the universe, etc. The ALP power spectrum will also depend on the electron density and magnetic field profiles of clusters. These will be related to the masses of the clusters and their evolution at different redshifts. Thus, an understanding of the astrophysical evolution of various mass galaxy clusters at different redshifts will provide better constraints on this background spectrum, but that hasn't been considered in this analysis. The CMB temperature fluctuations are contaminated by foregrounds, especially from the galactic plane (like dust and synchrotron emissions). All these (including the CMB) are contaminants to the ALP-distortion signal. Based on the spatial and frequency information of the power spectrum of each component, the ALP background power spectrum can be estimated. Cleaning can improve the SNR by removing the effect of these contaminants. Not only does the signal-to-noise ratio (SNR) depend on the contamination from CMB and foregrounds, but also on the instrument beam and noise. The impact of different cleaning techniques like template matching and Interior Linear Combination (ILC) <cit.> in reducing the effect of foregrounds and CMB on this background signal and improving its detectability has been studied. The motivation behind searching for the diffused ALP signal is highlighted in Sec. <ref>, followed by the CMB photon to ALP resonant conversion in galaxy clusters in Sec. <ref>. The ALP background, along with achievable SNR in various experiments after cleaning are analyzed in <ref>. The estimator for the ALP power spectrum and the related covariance is mentioned in Sec. <ref>, followed by a spectral and comparison of ALP diffused spectrum with CMB and foregrounds in Sec. <ref> . The constraints obtained on ALP coupling constant using ILC are mentioned in Sec. <ref>. Sec. <ref> summarizes the need and the techniques that can be used to increase the detectability of this faint background signal. The power spectrum of the ALP signal produced in a single cluster is studied in Appendix <ref>. The variation of ALP spectrum due to contribution for very high redshifts is explained in Appendix <ref>. The derivation for the map-based power spectrum estimator and the bounds on coupling constant using the template matching of foregrounds is provided in Appendix <ref> and <ref> respectively. We have used natural units (ħ = 1, c = 1, k_B = 1) in most places, until explicitly mentioned. We have used the cosmological parameters from Planck 2015 results <cit.>. § MOTIVATION The ALP signal which originates from photon-ALP resonant conversion in galaxy clusters will lead to polarized distortions in the CMB at low angular scales. If these clusters are at low redshifts and resolvable in multiple EM bands, the polarization information along the line of sight can be modelled to obtain bounds on the weak coupling ALPs may be having with photons. There will also be clusters at high redshifts which will lead to polarized ALP distortions in the CMB. These clusters may not be well resolved in multiple EM bands and will lead to a kind of diffused signal across the sky with a unique spectrum in both spectral (ν) and angular frequencies or multipoles (ℓ). 
The power spectrum of this ALP background signal would dominate over the CMB power spectrum at low angular scales or high multipoles, and also increase with frequency of observation in the radio and microwave regions of the EM spectrum. This distinctive property of the polarized spectrum is a signature to detect an ALP diffused background. To obtain bounds on the background spectrum, a comparison between the observed power spectrum of the microwave sky and that of the fiducial (no ALPs case) power spectrum is required. The background spectrum will contribute to the residual of the two spectrums. Using the covariance for the observed sky, one can obtain bounds on the diffused ALP spectrum from unresolved clusters. The CMB photons that had scattered off the last scattering surface pass through the matter cosmic web (see Fig.<ref>). The matter power spectrum contains information about the matter density field at various scales. The matter field consists of dark matter halos, which are hosts to galaxy clusters. The clusters which are not resolved in one or some of the frequency bands or are at high redshifts, are unresolved and signals from them cannot be individually separated. The ALP distortion signal is obtained from all these clusters as the photon-ALP conversion takes place in the presence of cluster magnetic fields. These signals from unresolved clusters at various redshifts are integrated along the line of sight and produce a faint polarized ALP distortion background signal. Using halo modeling of these clusters, one can use the halo distribution to obtain constraints on the ALP background power spectrum. With the upcoming high-resolution CMB experiments such as the SO, CMB-S4, and the proposed CMB-HD, we will be able to estimate the ALP power spectrum, based on its frequency and spatial dependence, which is very distinct as compared to the spectra from other phenomena, to obtain constraints on the photon-ALP coupling constant g_aγ. § CMB PHOTON - ALP RESONANT CONVERSION IN GALAXY CLUSTERS The CMB black-body radiation peaks at 160.2 GHz, with very tiny fluctuations. Also, it is anisotropic with spatial fluctuations in temperature of the order of 10^-5. Only about ∼ 5% of the CMB photons are linearly polarized. Also, there are spectral distortions in the CMB, which represent its deviations from a black-body spectrum, due to the absorption or emission of photons at different frequencies. These arise due to phenomena like the μ and y distortions. Earlier the photon-baryon fluid was efficiently thermalized via processes like the Compton, double Compton and bremsstrahlung <cit.>. Energy release via particle-decays and primordial black hole evaporation gets redistributed. When these processes that redistribute the energy and photons start becoming inefficient due to the Hubble expansion, distortions start setting in, which can be probed to study the early universe physics <cit.>. The CMB polarization spectrum is mainly affected by Thomson scattering and gravitational lensing, and also exhibits an independence from the frequency of observation <cit.>. There are also numerous secondary anisotropies that generate temperature and polarization fluctuation in the CMB. The anisotropies introduced due to galaxy clusters are secondary as they are generated after the epoch of recombination. These anisotropies can be probed either using the temperature intensity fluctuations in the CMB, or using fluctuations in its polarization. 
These include lensing <cit.>, thermal Sunyaev Zeldovich (SZ) effect <cit.>, kinetic SZ <cit.>, etc. In this work, we look at another secondary anisotropy, that owes its origin to galaxy clusters, and will be generated in the CMB if ALPs exist in nature. These are caused by the conversion of CMB photons to ALPs in the presence of the magnetic fields of galaxy clusters. These conversions are frequency-dependent and lead to spectral distortions in the CMB. This conversion takes place in the presence of magnetic fields in astrophysical systems such as galaxy clusters and voids. We mainly focus on distortion from galaxy clusters in this analysis. The interaction between ALPs and photons is given by the following Lagrangian <cit.>: ℒ_int = -g_a γ F_μνF̃^μν a/4 = g_a γ E · B_ext a. This interaction introduces a polarized distortion in the CMB as an ALP is formed. After a conversion, the photon gets polarized perpendicular to the magnetic field direction at the resonant location. These conversions will be dominated by resonant conversions which satisfy the condition <cit.>: m_a = m_γ = ħω_p/c^2≈ħ/c^2√(n_e e^2 / m_e ϵ_0), here ω_p is the plasma frequency and n_e is the electron density at the location. So, the electron density at a location in the cluster determines the ALP mass that can be formed at that location. For a spherically symmetric electron density profile, ALPs of a particular mass will be formed in a spherical shell in the cluster. This shell will be projected as a disk in 2D. Higher mass ALPs shall be formed near the center of the cluster with a disk of lower angular size as compared to the lower mass ALPs. The ALPs being probed are in the mass range 10^-15 - 10^-11 eV, which depends on the electron density of the cluster. The dispersion relation for photon-ALP conversion is given as (here B_t refers to the transverse magnetic field): 2ω (ω - k) = - ω(Δ_e + Δ_a) ±ωΔ_osc = m_a^2 + m_γ^2/2±[ ( m_a^2 - m_γ^2/2)^2 + ω^2 g_aγ^2 B_t^2 ]^1/2, where the parameters are defined as: Δ_e ≈ω_p^2 / 2ω , Δ_a = - m_a^2 / 2ω, Δ_aγ =g_aγB_t / 2 , Δ_osc^2 = (Δ_a - Δ_e)^2 + 4Δ_aγ^2 . This dispersion relation comprises two dispersion branches. The probability of conversion is related to the probability of a shift from one dispersion branch to the other. It is quantified using the adiabaticity parameter, which compares the scale over which electron density varies to the oscillation scale over which conversion can occur: γ_ad = Δ_osc^2/|∇Δ_e| = | 2g_aγ^2 B_t^2 ν (1 + z)/∇ω_p^2|. If the resonant condition is satisfied (m_a = m_γ), in the adiabatic case (γ_ad >> 1), the photon-ALP conversion will definitely occur. In the non-adiabatic case (γ_ad << 1), the conversion will be probabilistic with the probability given as 1 - p, where p refers to the probability of branch shift when passing from a high density to low density region or vice-versa. This is given as <cit.>: p = e^-πγ_ad/2. There will be a probability related to the photon-ALP conversion at any location of the cluster with the mass depending on the electron density at that location. There will always be two resonances for a CMB photon while propagating in and out of the cluster due to change in electron density (low → high and then high → low ) <cit.>. For the ultimate product of the two resonances to be an ALP, the conversion should take place at only one of the resonances. This probability is found to be <cit.> P(γ→ a) = 2p(1 - p) = 2e^-πγ_ad / 2(1 - e^-πγ_ad / 2), where 1-p is the probability of conversion at the resonant location. 
In the non-adiabatic limit (γ_ad << 1), this is approximated as: P(γ→ a) ≈πγ_ad . The change in intensity due to this conversion in the CMB is given as: Δ I_ν^α = P(γ→ a) I_cmb(ν) ≈πγ_ad( 2h ν^3/c^2) 1/e^hν /kT_cmb - 1. § DIFFUSED AXION POWER SPECTRUM: A NEW SIGNAL IN THE POLARIZATION SKY OF CMB The matter field in our universe comprises dark matter halos, which are hosts to galaxy clusters, which themselves are hosts to galaxies. Any diffused signal that owes its origin to these matter overdensities can be studied using modeling of the distribution of these overdensity regions. In the halo model, halos are descriptors of the nonlinear matter density of the universe <cit.>. These halos represent regions of overdensity in the matter density field, while the voids point to under-density. As we are interested in the diffused background from ALPs, we consider the halo approach to calculate the signal from high-density regions of the large-scale matter distribution in the Universe. §.§ Halo modeling of matter in the universe The matter power spectrum contains information about the 3D distribution of matter. It defines the matter density field at different scales. It is well described by a linear theory at large scales, while higher-order non-linear statistics are required to describe the gravitational collapsed systems at small scales, which calls for a halo modeling of the matter field in the universe. We have used the Λ CDM cosmological model to calculate the matter power spectrum using CAMB <cit.>. The perturbations in the matter density field can be written as ρ_m(x) = [1 + δ_m(x)] ρ̅_̅m̅, with ρ̅_̅m̅ being the mean matter density. The dark matter is the dominant constituent of the matter density in the universe. The distribution of these halos can be assumed to follow the matter distribution in the universe. This gives the matter density field from a superposition of halos 'i' of masses M_i( <cit.>): ρ_m(x) = ∑_iρ_h(|x - x_i|,M_i), with ρ_h being the density profile of the halo. These halos are assumed to only be interacting gravitationally and hence, their properties only depend on their masses. These are assumed to have undergone spherical collapse and subsequent virialization. Their masses are typically defined as the mass within the radius at which the density of the halo becomes about 200 times the critical density of the universe ρ_c. The profiles start steepening beyond this radius. The perturbations in the matter density field can be expressed in the halo model as: δ_m (x) = ∫ d ln M M/ρ_mdn/d ln M∫ d^3 x' δ_h(x',M)y(|x - x'|,M), where δ_h accounts for the variations in the mass function, i.e., dn(x)/dM = [1 + δ_h(x,M)]dn/dM. The distribution of dark matter halos with halo mass and redshift is referred to as the halo mass function. A general mass function is of the form dn/dM = f(σ)ρ_c Ω_m/Md ln σ^-1/dM. Here Ω_m is the matter density parameter. The σ represents the RMS deviation in the initial density fluctuation field smoothed with a tophat filter, and f(σ) is the halo multiplicity function. The Tinker mass function <cit.> uses the multiplicity function with four free parameters (d = 1.97,e=1.00, f =0.51,g = 1.228) and a normalization (B = 0.482) and the values are set to those for M_200m. The multiplicity function reads f(σ) = B[ (σ/e)^-d + σ^-f] exp(-g/σ^2) . The halo mass function decreases with increasing redshifts and increasing masses, which leads to more heavier clusters at lower redshifts. 
This is reflected in the probability of clusters of higher masses being low as compared to those of lower mass in the redshift range of 0.5 to 7. Also, the slope becomes steeper with increasing redshift at higher masses. This indicates that the proportion of high to low-mass clusters is greater at low redshifts as compared to higher ones. In the halo model, the non-linear matter power spectrum can be written as the sum of the contributions from one and two-halo terms, i.e., P(k) = P_1h(k) + P_2h(k). The mass elements in a single halo are accounted for by the one-halo term, while the clustering information is contained in the two-halo term. The two-halo term dominates at low angular scales, while the one-halo term may dominate at high angular scales. The correlation function quantifies the excess probability of finding two halos separated by some distance with respect to the Poissonian probability in the case of random uniform distribution. It is given as <cit.>: ξ_mm(| x_1 - x_2|) = ⟨δ_m( x_1)δ_m(x_2) ⟩. The spatial matter-matter correlation function is given as the Fourier transform of the power spectrum, i.e., ξ_mm(r) = 1/2π^2∫ dk k^2 P(k) sin(kr)/kr. The galaxy clusters are embedded in these dark matter halos. The distribution of these halos traces that of matter. This distribution is related using the linear bias (adopted from <cit.>). connecting the halo-matter and matter-matter correlation functions: ξ_hm(z,r) = b(z) ξ_mm(z,r). §.§ Halo Modelling of photon-ALP Resonant conversion Power Spectrum The CMB photon-ALP resonant conversion for unresolved clusters can be modelled with the distribution of galaxy clusters at high redshifts. The clusters can be considered to be halos of low masses (10^13 - 7 × 10^15 M_⊙), modulus a cluster bias factor taking into account the astrophysics of these clusters. We use the matter power spectrum from CAMB and the Tinker mass function for our analysis. The ALP signal at a location within the cluster will be correlated with the signal for locations within the cluster (due to the finite probability of resonant conversion). Similar to the case of SZ power spectrum (see <cit.>), the one-halo term represents the Poissonian component of the power spectrum and is given as: C_ℓ,1h^ax = ∫_z_min^z_max dz dV_c/dz∫_M_min^M_max dM dn(M,z)/dM | α_ℓ (M,z)| ^2 . Here dV_c/dz is the differential comoving volume and α_ℓ is the angular harmonic transform of fluctuations (refer to Appendix <ref>) due to the photon-ALP resonant conversion in a single cluster <cit.>. α_ℓ's depend on the electron density and magnetic field profiles of the clusters. The inference of α_ℓ's and its dependency on frequency, coupling constant, and cluster profiles is explained in Appendix <ref> when ALPs of all masses in the range 10^-15 - 10^-11 eV are assumed to be generated in galaxy clusters. The integrals take into account the total number of clusters of various masses at different redshifts which lie in the cluster mass range (M_min to M_max) and the redshift range being considered (z_min to z_max). This contribution to the ALP power spectrum will be present even if there is no clustering effect due to gravitational attraction. The ALP signal will also be correlated with the signal at locations outside its cluster. This is taken into account by the two-halo term (where we use the limber approximation applicable at l > 20) given as: C_ℓ,2h^ax = ∫_z_min^z_max dz dV_c/dz P_m ( k = ℓ +1/2/r(z),z ) ×[ ∫_M_min^M_max dM dn(M,z)/dM b(M,z) α_ℓ (M,z) ]^2. 
Here r(z) is the comoving distance at redshift z. The two-point halo correlation function has been expressed in terms of the matter power spectrum as P_h(k,M_1,M_2,z) = b(M_1,z)b(M_2,z)P_m(k,z), which arises due to the clustering between the halos. There will be several clusters in a certain mass interval within a given redshift interval. We have considered random electron density and magnetic field profiles for the galaxy clusters and the median of their |α_ℓ(M,z)|^2's as the contribution of these clusters to the ALP background spectrum. In principle, these single cluster power spectrums (|α_ℓ (M,z)|^2's) depend on the evolution of clusters based on their masses and redshifts. This in turn would allow one to relate the cluster electron density and magnetic field profiles with their masses at different redshifts and model the single cluster contribution to the power spectrum well. The single cluster power spectrums (|α_ℓ(M,z)|^2's) will also depend on the range of ALP masses that are being considered to be produced in the cluster and the variation of coupling constants with the ALP masses. We consider the case where ALPs of all masses in the range 10^-15 - 10^-11 eV are produced in the galaxy clusters with the corresponding probabilities if the resonant condition is satisfied (m_a = m_γ). The coupling constant has been assumed to be uniform for all ALP masses in this range (g_aγ = 10^-12 GeV^-1). We have considered the ALP background signal to have its origin in clusters of masses 10^13 - 10^15 M_⊙. We create various mass bins in this range. We simulate various mass binned clusters in redshift bins from z = 0.5 to z = 7, beyond which we believe the background signal won't be affected much by higher redshift clusters. We simulate the quantity |B|^2 / ∇ n_e in clusters and select the median values at various distances from the cluster center to obtain an ALP signal that would serve as a representative for the particular bin. The power spectrum for such a cluster acts as the ALP distortion spectrum |α_ℓ|^2 for that bin and is used in the evaluation of the one and two-halo ALP power spectrums in Equs.<ref> and <ref> respectively. The one-halo term dominates at high multipoles as it considers the cluster interior where the signal is generated, which corresponds to smaller angular scales. So, there are power contributions at high redshifts from various clusters. This prevents the lowering of one-halo power spectrum as it scales as ℓ^0 at high multipoles. The two-halo term increases at low multipoles (ℓ = 20 to 100) as the correlation is high, accompanied by a large number of clusters at low redshifts. With increasing multipoles, it then decreases as the number of clusters starts decreasing significantly with redshift and the halo correlation also decreases. The two-halo term may contribute more than one-halo term for the low-multipole range (20 < ℓ < 200) when M_min is low, as the low masses at low redshifts contribute significantly to the halo-halo correlation. Hence, the ALP spectrum shape will depend on the strength of the signal at low multipoles, which will be determined by the number of clusters contributing and their individual contributions, which themselves will depend on the cluster masses and profiles. There will be clusters at very high redshifts and with the polarized photon travelling through high and low redshift clusters, it may get depolarized due to turbulence or stochastic effects. 
Hence, it is mostly the low redshift clusters (z < 3.5), (accompanied by the fact that they are in large numbers) that mainly contribute to the polarized ALP diffused signal. The polarization information will be lost as well if the beam size of the instrument is higher than the angular scale of the cluster on the sky. The CMB is very smooth at very high multipoles (ℓ > 4000), with low spatial fluctuations. If there is photon-ALP conversion in unresolved clusters, it will create a diffused background of the ALP signal which will cause additional fluctuations in the CMB at low angular scales or high multipoles (see Fig.<ref>). For a coupling constant of g_aγ = 10^-12 GeV^-1, the additional RMS fluctuations would be of the order of 7.5 × 10^-2 μ K at 145 GHz. The effect on the observed power spectrum is seen at high multipoles (ℓ > 4000) where the CMB-only map shows a dampening of power with increase in multipoles, while the power increases in the case of the CMB+ALPs map. The one-halo and two-halo power spectrums at 145 GHz are shown in Fig.<ref>. These spectrums correspond to a minimum redshift z_min = 1. The grey lines represent the noise power spectrum N_ℓ corresponding to the 140 - 150 GHz band for various detectors (SO, CMB-S4 and CMB-HD) as these are the bands with higher sensitivities compared to the ones at higher and lower frequencies. The increase in one-halo at low multipoles sets the maximum angular scale up to which ALPs are formed. At high multipoles, the shape of the ALP power spectrum will be independent of the strength and scale almost as ℓ^0 (D_ℓ varies as ℓ^2) following the one-halo contribution. Here we have considered a randomly uniform orientation of the magnetic field at various locations in a cluster. This gives the ℓ^2 dependence to D_ℓ^αα at high multipoles. In principle, the variation of individual |α_ℓ|'s at high multipoles will depend on the magnetic field orientation within the cluster. The individual variations due to these |α_ℓ|'s will be suppressed as many clusters are integrated along the line of sight. The polarization information for a photon travelling from high redshift clusters may get lost due to depolarization from turbulence and stochastic effects in galaxy clusters. Thus, the low redshift clusters will impact the shape of the background spectrum the most as these are the clusters for which the polarization signal can be resolved. The ALP background spectrum will be correlated with the synchrotron background spectrum from galaxy clusters. §.§ The differences of the ALP power spectrum from other polarized cosmological signals. The lensed CMB polarization power spectrum D_ℓ^EE is independent of the frequency of observation. Also, its dependence on multipoles is well-known from the correlation between temperature and polarization spectrums. Other effects that induce polarization in the CMB include reionization and the polarized SZ effect. The reionization power spectrum decreases with an increase in frequency and also weakens at high multipoles. The polarized SZ power spectrum increases with multipoles but its strength is weak with fluctuations of the order of 10^-8 K <cit.>. The ALP power spectrum strength on the other hand increases with frequency in the radio and microwave regimes of the electromagnetic spectrum. The two-halo term may dominate at low multipoles (20 < ℓ < 200), while the one-halo term dominates at high multipoles. The high multipoles can be used to probe the ALP signal from the damped CMB power spectrum. 
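As an illustration of how the one- and two-halo terms above can be assembled numerically, the following Python sketch performs the double quadrature over redshift and halo mass. All ingredients — the comoving volume element dV_c/dz, the mass function dn/dM (e.g. Tinker), the bias b(M,z), the matter power spectrum P_m(k,z) (e.g. from CAMB), the comoving distance r(z) and the single-cluster transform α_ℓ(M,z) — are assumed to be user-supplied callables and are placeholders here; the default integration ranges follow the fiducial choices quoted in the text.

import numpy as np

def cl_axion(ells, dV_dz, dn_dM, bias, alpha_l, P_m, r_comoving,
             z_range=(1.0, 7.0), logM_range=(13.0, np.log10(7e15)),
             n_z=60, n_M=40):
    """Quadrature for C_l,1h and C_l,2h of the diffused ALP spectrum.

    dV_dz(z), dn_dM(M, z), bias(M, z), alpha_l(ell, M, z), P_m(k, z) and
    r_comoving(z) are assumed user-supplied callables (placeholders)."""
    zs = np.linspace(z_range[0], z_range[1], n_z)
    Ms = np.logspace(logM_range[0], logM_range[1], n_M)
    cl_1h = np.zeros(len(ells))
    cl_2h = np.zeros(len(ells))
    for i, ell in enumerate(ells):
        one_h = np.zeros(n_z)
        two_h = np.zeros(n_z)
        for j, z in enumerate(zs):
            a = np.array([alpha_l(ell, M, z) for M in Ms])
            n = np.array([dn_dM(M, z) for M in Ms])
            b = np.array([bias(M, z) for M in Ms])
            one_h[j] = dV_dz(z) * np.trapz(n * np.abs(a) ** 2, Ms)
            k = (ell + 0.5) / r_comoving(z)              # Limber approximation
            two_h[j] = dV_dz(z) * P_m(k, z) * np.trapz(n * b * a, Ms) ** 2
        cl_1h[i] = np.trapz(one_h, zs)
        cl_2h[i] = np.trapz(two_h, zs)
    return cl_1h, cl_2h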
The spectrum takes into account the ALP signals generated in various mass clusters, integrated over different redshifts. The individual features of the signal are thus suppressed and the power spectrum does not show spikes or high oscillations. This is in contrast to the CMB at low multipoles. For a low z_min, this characteristic may not hold as resolved and well-luminous clusters may be contributing to the background. For the mass range 10^13 to 7 × 10^15 M_⊙, redshift range 1 to 7, and a coupling constant g_aγ = 10^-12 GeV^-1, the one and two-halo contributions are shown in Fig. <ref>. Here g_12 is dimensionless and is defined as g_12≡ g_aγ× 10^12 GeV. The ALP power spectrum crosses the CMB power spectrum at a lower multipole value for higher frequencies and vice versa. These spatial and spectral variations with respect to other polarized signals can be used to probe the ALP background spectrum. §.§ Sources of variation of ALP diffused spectrum due to cosmological factors The background spectrum owes its origin to the unresolved clusters of various masses at different redshifts. Also, the coupling and masses of ALPs will determine the amplitude of this diffused spectrum. Variation with minimum redshift: With a lower minimum redshift (z_min), the background power spectrum increases (Fig.<ref>), while it decreases with a higher minimum redshift. It is mainly the low redshift clusters that contribute to the ALP background signal as these are large in number and have their polarization signals intact. For the high redshift clusters, depolarization of the photons may occur as a result of multiple scatterings. With the upcoming improved detectors, we will be able to resolve clusters at lower redshifts (z < 1). The bright clusters at low redshifts could also significantly affect the shape and strength of the power spectrum, especially at low multipoles. Their contribution to the background spectrum need not be considered and constraints from them can be obtained using the analysis for resolved clusters (explained in a separate work <cit.>). Variation with cluster masses: The strength of the power spectrum also depends on the range of masses being considered. For different halo mass ranges, the spectrum decreases with decreasing mass range (Fig.<ref>). The higher cluster masses contribute to low redshifts. The low cluster masses are less resolved and may contribute even at higher redshifts. The decrease in background spectrum will depend on the contribution of various mass clusters to the ALP background spectrum, which can be analyzed by studying the relation between cluster masses and their electron densities. Since the one-halo power spectrum is about two orders in magnitude higher than the two-halo contribution at the relevant scales corresponding to high multipoles (ℓ > 4000), we will not be plotting the two-halo contributions in any of the upcoming plots. But its contribution in the calculations will be considered. Variation with photon-ALP coupling constant and ALPs masses: The amplitude of the ALP signal is proportional to the square of the coupling constant g_aγ (Eq.<ref>), so the power spectrum scales as g^4_aγ (see Fig. <ref>). This is expected as the higher the coupling, the higher will be the distortion signal. We have considered ALPs of all masses in the range 10^-15 - 10^-11 eV are being produced. The ALP background spectrum will also depend on the ALP masses that may exist in nature. 
If ALPs of masses only in a certain subrange of the mass range considered, exist in nature, the power spectrum shall decrease. This decrease will be greater if only high mass ALPs exist (10^-13 - 10^-11 eV), while lower if low mass ALPs exist (10^-15 - 10^-13 eV). This is because the single cluster distortion spectrums α_ℓ's are generally higher for low mass ALPs which are formed in the outer regions of clusters with low electron density and high conversion probabilities (see Eq.<ref>). For a variation in coupling constant with ALP mass, signals would be generated in spherical shells in a spherically symmetric cluster. These shells would be visible as different-sized disks, with a larger disk corresponding to low mass ALPs. The signals from these different disks will then need to be summed up to obtain the net ALP distortion signal. §.§ Sources of variation in the ALP Power Spectrum due to cluster astrophysics The ALP background spectrum will depend on the single cluster power spectrums (α_ℓ's), which depend on the astrophysics of the galaxy clusters via their electron density and magnetic field profiles. For all clusters considered, the profile parameters have been assigned random values within the allowed range. In principle, these would depend on the masses of clusters and their evolution at different redshifts. The power spectrum depends on the electron density profiles, magnetic field profiles, the masses of galaxy clusters, and the dependence of electron density on the mass density of galaxy clusters. The effect of change in profiles will be similar to the dependence followed by |α_ℓ|^2, as expected. If ALPs of masses in only a sub-range of what we have considered (10^-15 - 10^-11 eV) are formed, the dependence of the ALP background spectrum on these profile parameters can change, especially in the case of high mass ALPs, following the variation of |α_ℓ|^2. The variation of |α_ℓ|^2 with the mass of ALPs has been considered in a separate work <cit.>. Here we consider the variation of ALP diffused spectrum with a change in electron density and magnetic field strengths. With the mass range of 10^-15 - 10^-11 eV, ALPs can be formed at every location for most of the galaxy clusters. Since the probability of conversion goes as P(γ→ a) ∝1/|∇ n_e|, the signal strength at every location within the cluster will vary as ∝ n_e^-1. Thus the background power spectrum shall vary as D_ℓ∝n_e^-2 (see Fig. <ref>). Since the conversion probability P(γ→ a) ∝ |B|^2, with ALPs being formed at every location, the ALP background power spectrum will vary as D_ℓ∝|B|^4 as shown in Fig.<ref>. A higher magnetic field leads to a higher probability of conversion at all locations in the galaxy cluster. § POWER SPECTRUM ESTIMATION FROM SKY MAPS The presence of one realization of the various signals requires us to define an estimator that takes into account the power spectrum for different components. The ALP power spectrum can be estimated from the observed CMB power spectrum at the map level. So, we find the following estimator for any signal power spectrum using the maximum likelihood method <cit.> (explained in Appendix <ref>): C̃_̃ℓ̃^̃ĩ = B_ℓ^-2[ 1/2ℓ + 1∑_m = -ℓ^ℓ |a_ℓ m^obs|^2 - N_ℓ] - ∑_j ≠ i C_ℓ^j. The covariance of an estimator takes into account the limited information from an estimator at different angular scales due to the limited number of multipole modes (2ℓ + 1) from a sky map. Also, it increases when a partial sky region is considered, as in our case of considering cluster regions. 
It is given as: Cov(C̃_̃ℓ̃) = ⟨ C_ℓ^2 ⟩ - ⟨ C_ℓ⟩^2 = 2/(2ℓ +1)f_sky [∑_i C_ℓ^i + B_ℓ^-2N_ℓ]^2 . The deviations in the CMB power spectrum due to the ALP distortion signal can be probed against the covariance for the null map spectrum to obtain bounds on the ALP background spectrum. The derivation of the estimator and its covariance are provided in Appendix <ref>. For a weighted combination of different frequency maps smoothed to common beam resolution, (see <cit.>) applied in various cleaning techniques such as ILC, the estimator is given as: C̃_̃ℓ̃^̃ĩ = B_ℓ^-2[ 1/2ℓ + 1 (∑_m = -l^l∑_ν∑_ν' w_νw_ν'a_ℓ m,ν^obs * a_ℓ m,ν'^obs ) - ∑_ν w_ν^2 N_ℓ^ν] - ∑_j ≠ i∑_ν∑_ν' w_νw_ν'⟨ a_ℓ m,ν^j * a_ℓ m,ν'^j⟩. Here we have considered the independence of various components. § IMPACT OF FOREGROUNDS ON THE ALP SIGNAL §.§ ALP background as compared to galactic foregrounds The presence of foregrounds such as the thermal dust and synchrotron emission from the galactic plane impact the CMB polarization power spectrum substantially at low multipoles. Their effect can be mitigated by masking the galactic plane and performing a partial sky observation. Masking is performed along the galactic plane to reduce the effect of foregrounds. This leads to a partial sky observation at high latitudes. The weak signal from the polarized SZ effect will be negligible of the order of tens of nano-Kelvins, hence we can neglect it <cit.>. The thermal dust emission peaks at infrared wavelengths, hence it affects the power spectrum at high frequencies (200 GHz and above), while the synchrotron emission peaks at radio frequencies and its impact is maximum at low frequencies (below 70 GHz) (see Fig.<ref>). At frequencies 90 - 170 GHz, the effect of the foregrounds is minimal. Synchrotron emission weakens with an increase in frequency, while dust increases with frequencies in the microwave and radio regions of the EM spectrum (Fig.<ref>). ALP diffused spectrum increases with frequencies, following a D_ℓ∝ν ^2 I_cmb(ν)^2 dependence, as compared to dust which follows a modified black-body spectrum. Both the galactic thermal dust and synchrotron emissions influence the power spectrum significantly at low multipoles, but at high multipoles, they weaken out, while the ALP diffused spectrum increases with multipoles following a D_ℓ∝ℓ^2 dependence. These spatial and spectral variations of the ALP signal as compared to CMB and foregrounds can be used to detect the diffused ALP background spectrum. §.§ SNR for different CMB surveys The galaxy clusters are generated on the masked sky map in the unmasked regions. The sky fraction being observed depends on the detector, with sky fraction f_sky = 0.4 for SO, while 0.5 for CMB-S4 and CMB-HD. The ALP distortion signal is simulated in these clusters. Beam smearing (denoted by B_ℓ) occurs due to the resolution of the instrument and depends on the point source function. The combined map is then smeared with a Gaussian beam and instrument noise is added. The instrument noise for upcoming CMB surveys are taken assuming a Gaussian distribution. We check the detectability of the diffused spectrum using current and future detectors. The CAMB <cit.> is used to generate the CMB power spectrum and map. The SNR is calculated taking into account the contributions from the optimum multipole range, using both the polarized maps for various frequencies of observation. 
Thus, the squared signal-to-noise ratio (SNR, denoted by ρ) for the distortion signal power spectrum is found by summing over the contributions from Q and U polarized maps for the multipole range ℓ_min to ℓ_max. We use the values of ℓ_min = 1000 and ℓ_max corresponding to the beam resolution (multipole value at which B_ℓ^2 = 1 / e) to obtain the SNR: ρ_ν^2 = ∑_p = Q,U∑_ℓ_min^ℓ_max(C_ℓ,1h^p,ν + C_ℓ,2h^p,ν)^2/2/(2ℓ + 1)f_sky (C_ℓ,cmb^p + B_ℓ,ν^-2 N_ℓ^p,ν)^2, where C_ℓ and N_ℓ are the true signal and noise power spectrums respectively. Here signal corresponds to the sum of one-halo and two-halo contributions, while the denominator is the covariance on the power spectrum of the observed sky. This is explained in Sec. <ref>. The 2ℓ + 1 factor accounts for the number of modes for every multipole ℓ. Upcoming experiments such as the CMB-S4 and CMB-HD could be particularly useful in estimating this power spectrum with a lower beam and instrument noise. With regards to the current detectors, a coupling strength of 10^-12 GeV ^-1 with SO <cit.> configuration generates a maximum SNR of ∼ 0.52 (Fig. <ref>). The SNR achieves the highest value of 4.36 at the 145 GHz band in the presence of foregrounds for CMB-S4 <cit.>, due to low noise and high resolution. A SNR of ∼ 93.87 will be achievable with CMB-HD <cit.> in the 150 GHz band. The presence of galactic foregrounds reduce the SNR, especially for low (impacted by synchrotron) and high frequency channels (impacted by dust). The bands from 90 to 170 GHz face the minimum impact as both dust and synchrotron weaken in this range of frequencies. The SNR varies as g_aγ^4 with the coupling constant. § ILC TO EXTRACT AXION SIGNAL FROM MULTI-FREQUENCY MAPS The ALP signal, contaminated with foregrounds and noise, accompanied by beam smearing, is difficult to extract. The spectral shape of the ALP background signal can be used to clean contaminants like CMB and foregrounds. The standard Internal Linear Combination (ILC) may be used to obtain a clean map as it minimizes the variance of the map and extracts a given spectrum. The higher the number of frequencies, accompanied with a lower beam size, the better the results obtained after ILC. The ILC <cit.> is applied to extract the ALP-distortion signal. The method combines maps from different frequencies and assigns weights to them depending on the spectrum to be obtained (Fig. <ref>). The weighted sum is then the ILC map with the required spectrum. The weights are obtained using the covariance matrix which combines data at different frequencies. The weight matrix for standard ILC is given as: w_ilc = f_a γ^T C_s^-1 (f_a γ^T C_s^-1 f_a γ)^-1 . Here f_a γ is the ALP distortion spectrum dependence on the frequency with f_aγ∝ν I_cmb(ν). The ILC cleaned map is obtained as a weighted linear combination of the frequency maps (S^ν): S_ilc = ∑_ν w_ilc^ν S_ν. The distortion signal is simulated in galaxy clusters at different frequencies with a constant coupling constant (g_aγ= 1 × 10^-12 GeV^-1 here) for all clusters. The signal is then contaminated with CMB and galactic foregrounds. We have used PySM <cit.> to generate foreground maps. We use the synchrotron model "s-3" model which includes a curved index that flattens or steepens with frequency and dust model "d-3", which takes into account the spatial variation of spectral index on degree scales. The index is drawn from a Gaussian distribution. The various frequency maps are smoothed to a common beam resolution, given by the highest beam size among various frequency bands. 
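A minimal sketch of the ILC weight computation of the equation above is given below. The names are ours; the empirical frequency-frequency covariance C_s and the ALP spectral response f_aγ (proportional to ν I_cmb(ν)) are assumed to be available.

```python
import numpy as np

def ilc_weights(freq_cov, f_spec):
    """Constrained ILC weights w = f^T C^-1 / (f^T C^-1 f), as in the equation above.

    freq_cov : (n_freq, n_freq) empirical covariance of the frequency maps, C_s
    f_spec   : (n_freq,) spectral response of the ALP distortion, f_agamma
    """
    cinv_f = np.linalg.solve(freq_cov, f_spec)
    return cinv_f / (f_spec @ cinv_f)

def ilc_map(freq_maps, weights):
    """Weighted combination S_ilc = sum_nu w_nu S_nu, for maps of shape (n_freq, n_pix)."""
    return np.tensordot(weights, freq_maps, axes=1)
```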
The bands above 200 GHz can efficiently clean thermal dust contribution in the cleaned map. The CMB is frequency-independent, while the synchrotron emission is very weak above a frequency of 70 GHz. This lets us use just four bands with a higher beam resolution at frequencies greater than 90 GHz. Masking is done in galactic regions. The ALP signal dominates at high multipoles, owing to the one-halo term, while the foregrounds and CMB dampen out. It is the multipole range around ℓ = 3000 that can provide the best constraints on the ALP background spectrum, because at very high multipoles the noise takes over. Using ILC, we get the following SNRs for the various detectors for the ALP diffused spectrum with z_min = 1: SO - 0.24; CMB-S4 - 1.20; CMB-HD - 79.27. This SNR is the best we can achieve using ILC when the foregrounds are not well known. If they are well modelled, we can use template matching to achieve a higher SNR at the matched frequency bands corresponding to 140-150 GHz. § CONSTRAINING THE ALP COUPLING CONSTANT USING ILC We perform the Bayesian estimation of the photon-ALP coupling constant from diffused ALP spectrum. The background spectrum scales as g_aγ^4 (Fig.<ref>). Since the electron density and magnetic field information for the unresolved clusters is lacking, and from Sec.<ref>, we know that the background spectrum scales as n_e^-2 and |B|^4 (Fig.<ref>), we thus obtain bounds on the quantity which we will call the modified coupling constant: g_aγ^2|B|^2 / n_e. The fiducial values we have used for the electron density and magnetic field strength parameters are: n_0 = 10^-3 cm^-3 and B_0 = 0.1 μ G. We scale the modified coupling constant with respect to this choice of values to obtain bounds on the scaled quantity given as: A = (g_aγ/10^-12 GeV^-1)^2 ( |B|/0.1 μ G)^2 ( n_e/10^-3 cm^-3) ^-1. Since the ALP background spectrum depends significantly on the minimum redshift z_min and minimum cluster mass M_min, we obtain constraints on the modified constant for z_min = 0.5 and z_min = 1. For a given z_min, we show the bounds for different choices of M_min. We vary A with the upper bound set by the values: g_a γ = 10^-11 GeV^-1, n_0 = 0.5 × 10^-3 cm^-3 and B_0 = 0.5 μ G. We combine the posteriors from different mass ranges to obtain the bounds for a given choice of z_min. We compare the bounds on modified coupling constant obtained using ILC for different CMB surveys: SO <cit.>, CMB-S4 <cit.> and CMB-HD <cit.>. We simulate the fiducial maps at various frequencies without ALP signal (g_aγ = 0). We obtain the weights for these maps using Eq.<ref> and linearly combine them with their respective weights to obtain the ILC map. This method extracts the ALP signal, while minimizing the variance of the ILC map. We obtain the power spectrum of this map which gives us the term ∑_m = -ℓ^ℓ|a_lm^obs|^2 in Eq.<ref> at different multipoles. Since noise for different frequency maps is not correlated, we obtain the term N_ℓ in Eq.<ref> by combining weighted noise maps at various frequencies. Also, we obtain the mean power spectrum of different realizations of the combined fiducial ILC map (without ALP signal) by combining the fiducial maps at different frequencies (based on their ILC weights) to obtain the term C_ℓ^cmb + C_ℓ^fg. These give us an estimation of the ALP background spectrum C_ℓ^αα. The beam B_ℓ corresponds to the maximum beam size among the various frequency bands, as all maps are smoothed with the highest beam size before ILC weights are obtained. 
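As a small worked example of the scaled quantity A defined above, the sketch below evaluates A from (g_aγ, |B|, n_e) and inverts an upper limit on A back into a limit on g_aγ at the fiducial n_0 and B_0; the numerical check in the comment uses the CMB-S4, z_min = 1 value quoted in the following paragraphs.

```python
import numpy as np

def modified_coupling(g_agamma, B_muG, n_e):
    """Scaled quantity A defined above (g_agamma in GeV^-1, B in microgauss,
    n_e in cm^-3)."""
    return (g_agamma / 1e-12) ** 2 * (B_muG / 0.1) ** 2 * (n_e / 1e-3) ** -1

def g_limit_from_A(A_upper, B_muG=0.1, n_e=1e-3):
    """Invert an upper limit on A into an upper limit on g_agamma (GeV^-1)
    at the fiducial B_0 = 0.1 microgauss and n_0 = 1e-3 cm^-3."""
    return 1e-12 * np.sqrt(A_upper * (n_e / 1e-3)) / (B_muG / 0.1)

# e.g. A < 1.254 at the fiducial profile values corresponds to
# g_agamma < 1e-12 * sqrt(1.254) ~ 1.12e-12 GeV^-1
```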
We use 50% partial sky to obtain bounds using CMB-S4 configuration, with frequency bands 95, 145, 220 and 270 GHz with the beam 2.2 arcmin corresponding to 95 GHz band. The bounds are stronger in case of a lower M_min as can be seen in Fig.<ref>. In principle the improvement when using a lower M_min will depend on the relation between mass and electron density of low mass galaxy clusters, which will determine their contribution to the ALP background. The constraints are also tighter in the case of z_min = 0.5 as compared to z_min = 1 (Fig.<ref>), as expected with low redshift clusters contributing to the background. Combining the posteriors for different mass ranges, we obtain the bounds on modified coupling constant using ILC with CMB-S4 configuration up to 95% confidence interval at A < 1.116 for z_min = 0.5, while A < 1.254 for z_min = 1. For SO bounds, we use 40% sky fraction. The frequency bands 93, 145, 225, 280 GHz are used with the common beam 2.2 arcmin corresponding to the 93 GHz band. The constraints are shown in Fig.<ref>. We obtain the following bounds using ILC, up to 95% confidence interval: A < 3.178 for z_min = 0.5; A < 3.724 for z_min = 1. CMB-HD will have a partial sky fraction of 50%. We use the 90, 150, 220 and 280 GHz frequency bands to obtain bounds on the modified coupling constant with the common beam resolution of 0.42 arcmin, corresponding to the band 90 GHz. The results are shown in Fig.<ref>. Using ILC, the bounds on modified coupling constant upto 95% confidence interval (C.I.) are: A < 0.115 for z_min = 0.5; A < 0.131 for z_min = 1. CMB-HD will provide bounds significantly better than CMB-S4 and SO on the modified coupling constant due to a higher beam resolution and improved sensitivity. The constraints obtained using the template matching of foregrounds are discussed in Sec.<ref>. The constraints on photon-ALP coupling constant g_aγ can be gauged using the constraints on the modified coupling constant A. We convert the constraints on A to the constraints on g_aγ using the fiducial values of the magnetic field and electron densities used to obtain A, i.e. n_0 = 10^-3 cm^-3 and B_0 = 0.1 μ G. These are plotted in Fig.<ref> with the horizontal lines representing the 95% C.I. bounds for various detectors, with the allowed coupling strengths shaded. We expect an upper bound on ALP coupling strength of g_aγ < 1.929 × 10^-12 GeV^-1 using SO and of about g_aγ < 1.119 × 10^-12 GeV^-1 using CMB-S4 configurations for z_min = 1. ILC using CMB-HD can strongly constrain the coupling to g_aγ < 4.254 × 10^-13 GeV^-1. The constraints will also depend on the ALP masses that are being considered. For higher mass ALPs (10^-13 - 10^-11 eV), the constraints will be weaker as compared to low mass ALPs (10^-15 - 10^-13 eV). Since we have considered ALPs of masses in the range 10^-15 - 10^-11 eV, we have obtained the strongest constraints for the configuration using the ALP diffused spectrum. § CONCLUSION The galaxy clusters are the largest visible gravitationally bound structures. If ALPs exist in the universe and weakly interact with photons, the CMB will bear the polarized distortion spectrum from these clusters. The ALP signal from resolved clusters can be independently studied<cit.>, while the unresolved ones would create a polarized ALP background signal. This background signal will depend on the cluster mass distribution at different redshifts, which are biased tracers of the dark matter halos. 
This is taken into account using the distribution of these halos and the correlation between different halos. The background will be an integrated effect of the signals from individual clusters of various masses at different redshifts. This diffused spectrum can be modelled using the distribution of halos of various masses at different redshifts. The two-halo or clustering component is low at high multipoles, but may dominate over the one-halo term or the Poissonian component for low ones (20 to 200), where it is itself dominated by the CMB and is difficult to probe. The one-halo or Poissonian component of the ALP power spectrum dominates at high multipoles and can be probed using the upcoming high resolution experiments (CMB-S4, CMB-HD) with low noise. The ALP background spectrum will depend on the astrophysical aspects like the electron density and magnetic field profiles in clusters (see Figs. <ref>). Also, it will increase with ALP coupling constant (∝ g_aγ^4) (see Fig. <ref>) and the frequency of observation (∝ν ^2 I_cmb(ν)^2)(see Fig. <ref>). The background spectrum will increase as we lower z_min, as low redshift clusters contribute significantly to the background spectrum (Fig. <ref>). With the upcoming experiments, clusters up to redshift z = 1 will be well resolvable, thus z_min = 1 is a conservative choice for the estimation of the ALP background spectrum. The power spectrum is almost independent of the choice of maximum redshift z_max after a certain redshift (∼ 3.5) as the clusters at very high redshifts contribute negligibly to the ALP background spectrum (Appendix <ref>). The cluster mass range that will contribute to the background will also determine the strength of the diffused power spectrum (Fig.<ref>). The ALP masses that may exist will also affect the background spectrum, with a weaker power spectrum for high mass ALPs. For a coupling constant of 10^-12 GeV and z_min = 1, with randomly generated cluster profiles for cluster masses 10^13 - 7 × 10^15 M_⊙, the SNR is 4.36 in the 145 GHz band of CMB-S4, while it is around 93.87 in the 150 GHz band of CMB-HD (Fig. <ref>). Also, such a diffused signal would lead to RMS fluctuations of the order of 7.5 × 10^-2 μ K for an ALP coupling constant of g_aγ = 10^-12 GeV^-1 at 145 GHz. The frequency channels 90 - 160 GHz provide the best SNR for the ALP signal, owing to a decrease in foregrounds contamination and improved beam resolution. Techniques such as ILC (see Sec.<ref>) can be used to mitigate the effect of foregrounds and CMB by using the spectral variation of the ALP signal (Fig.<ref>). Using ILC for different detectors, we obtain the following bounds on the modified coupling constant (see Sec.<ref>)- SO: A < 3.724; CMB-S4: A < 1.254; CMB-HD: A < 0.181 for the case of z_min = 1. The converted constraints on the coupling constant g_aγ can be gauged using the fiducial electron density and magnetic field strength values. The conversion provides the bounds of g_aγ < 1.783 × 10^-12 GeV^-1 with SO and g_aγ < 1.056 × 10^-12 GeV^-1 with CMB-S4 configuration for z_min = 0.5. CMB-HD can provide much tighter bounds of g_aγ < 3.912 × 10^-13 GeV^-1. For z_min = 1, the bounds go as: SO - g_aγ < 1.929 × 10^-12 GeV^-1 ; CMB-S4 - g_aγ < 1.119 × 10^-12 GeV^-1; and CMB-HD - g_aγ < 4.254 × 10^-13 GeV^-1. The template matching of foregrounds can tighten the constraints on coupling constant as shown in Appendix <ref>. 
The shape of the foregrounds power spectrum and their violation of statistical isotropy can be used to reduce further contamination of foregrounds. This can help in achieving a higher SNR and better constraints. The galaxy clusters, with their intracluster medium (ICM) and galaxies make sites for numerous processes such as the SZ effects, lensing, CIB, synchrotron, etc. The ALP power spectrum can also be cross-correlated with diffused spectrums from these phenomena and the large scale structure to obtain better constraints on the ALP coupling constant and masses. In this analysis, we have neglected the effect of evolution of galaxy clusters with redshift. Our results depend on the consideration of random electron density and magnetic field profiles for the clusters. Due to the low redshift clusters being way more than the high redshift clusters, most of the contribution to the calculated background comes from low redshift clusters which have higher halo mass function values and for which the polarization signal remains intact. Thus, a study of the evolution of the cluster profiles with masses at different redshifts can provide bounds on this diffused spectrum. Not only this, being able to connect the masses of galaxy clusters and their electron density profiles would further help in constraining the photon-ALP coupling constant. This can be done (as shown in the case of SZ effect by <cit.>) using hydrodynamical simulations like Romulus <cit.>, SIMBA <cit.>, etc. Probing the high multipoles with improved detectors will help in obtaining bounds on this diffused signal in the future. The background ALP sky can thus, in addition to probing the signal from resolved clusters, act as an independent way of obtaining constraints on the ALP coupling constant, by studying its spectral and spatial variation over a wide range of frequencies and multipoles. § THE ALP DISTORTION POWER SPECTRUM FOR A SINGLE CLUSTER | Α _ℓ|^2 . The photon-ALP conversion is confined to galaxy clusters which occupy small angular scales on the sky. For a particular ALP mass range, the signal is formed in a spherical shell around the cluster center for a spherically symmetric galaxy cluster. These shells are projected as a disk in 2d. This signal disk increases the ALP power spectrum at low angular scales. It is the higher multipoles that contain ALP signatures in the CMB spectrum as multipoles vary inversely with the angular scales Δθ∝180^o/ℓ. The α_ℓ's are the coefficients obtained from the spherical harmonics expansion of the polarization fluctuations caused due to the photon-ALP conversion. The |α_ℓ|^2 is the power spectrum of these polarization fluctuations due to the ALP distortion signal. The estimation of this power spectrum due to temperature or polarization fluctuations in a map is explained in Appendix <ref>. The astrophysics of galaxy clusters affect their electron density and magnetic field profiles, and hence the ALP distortion spectrum |α_ℓ|^2. For clusters with stochastic and turbulent electron density and magnetic field profiles, the polarization information of the ALP distortion will be lost as depolarization will be caused due to multiple conversions. For such cases, hydrodynamical simulations or observations can be used that fit the data well. We consider the case where this turbulence is negligible and model the electron density and magnetic profiles using smooth profiles, which fit the profiles for resolved clusters well. 
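Since |α_ℓ|² is simply the angular power spectrum of the simulated distortion map, it can be obtained directly with healpy before the profiles are specified; the sketch below uses a zero placeholder map and assumes the standard D_ℓ = ℓ(ℓ+1)C_ℓ/2π convention, with nside chosen arbitrarily.

```python
import numpy as np
import healpy as hp

# distortion_map: HEALPix map of the polarized ALP-distortion fluctuations of a
# simulated cluster (a zero placeholder here; in practice it comes from the
# cluster conversion model)
nside = 2048
distortion_map = np.zeros(hp.nside2npix(nside))

alpha_lm = hp.map2alm(distortion_map)   # spherical-harmonic coefficients
alpha_l2 = hp.alm2cl(alpha_lm)          # power spectrum |alpha_l|^2
ell = np.arange(alpha_l2.size)
D_ell = ell * (ell + 1) * alpha_l2 / (2.0 * np.pi)
```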
The electron densities can be obtained via X-ray emission and inverse-compton (Sunyaev-Zeldovich) effect in galaxy clusters. The electron density profile used is a modified beta model that varies radially and takes into account the slope at large radii, a cusp core that follows a power law, and the higher electron density in the inner regions <cit.>: n_e^2 = Z[ n_0^2 (r/r_c1)^-α/(1 + r^2/r_c1^2)^3β - α /21/(1 + r^γ/r_s^γ)^ϵ / γ + n_02^2/(1 + r^2/r_c2^2)^3β_2]. The photon-ALP resonant conversion depends on the transverse magnetic field along the line of sight. The transverse magnetic field profile can be modelled using synchrotron emission by studying the galaxy cluster at radio frequencies. We consider a magnetic field profile (that models well for clusters at low redshifts) whose strength simply scales with distance from the cluster center <cit.>: B(r) = B_0 r^-s. The contribution to the Q and U polarization stokes modes depends on the magnetic field direction at the conversion location. The magnetic field direction has been assumed to be uniformly randomly oriented. The magnetic field coherence scale is assumed to be greater than the beam of the instrument, otherwise the signal will be depolarized by the turbulent fields. The power spectrum |α_ℓ|^2 is that of the combined power in the two maps for which the polarized intensity I_pol is given as: I_pol = √(I_Q^2 + I_U^2). The power spectrum |α_ℓ|^2 will also vary for the mass range of ALPs being considered. ALPs of masses in a particular sub-range of the mass range 10^-15 - 10^-11 eV may be assumed to be forming in the galaxy clusters if the resonant condition is satisfied (m_a = m_γ). For our analysis, we assume that if ALPs exist, all ALPs of masses in the range 10^-15 - 10^-11 eV will be produced during conversion. Also, we assign a uniform coupling constant of 10^-12 GeV^-1 for all ALP masses in this range. The features of the ALP power spectrum for a single cluster |α_ℓ|^2 depend on the host cluster and its properties that characterize the ALP distortion signal. The variation of the ALP power spectrum from a single cluster |α_ℓ|^2 with the photon-ALP coupling constant and the frequency of observation is shown in Fig. <ref>. The ALP distortion signal scales as g^2_a,γ and ν I_cmb(ν) (Eq. <ref>) for frequencies of observation in the microwave and radio bands. Thus, the power spectrum |α_ℓ|^2 varies as g^4_a,γ and ν ^2 I_cmb(ν)^2, where I_cmb(ν) is the Planck black-body function (see Eq.<ref>). Also shown in Fig.<ref> is the CMB power spectrum at small scales around the cluster region. The probability of conversion at a location is inversely proportional to the gradient of electron density. An increase in the electron density (n_e ∝ n_0) (Fig. <ref>) reduces the ALP power spectrum strength. The electron density steepness parameter(β_1) and scaling radius (r_s) control the rate at which the electron density varies with distance from cluster center. The spectrum increases at high multipoles with decreasing scaling radius and increasing steepness parameter as the electron density reduces in the outer regions of the cluster (Fig. <ref> and <ref>). Also, the spectrum is directly proportional to the square of the magnetic field. So, a steep decrease in magnetic field (parameterized by "s") leads to a weaker spectrum as the magnetic field reduces in the outer regions of the cluster where the conversion probability is high (Fig. <ref>). 
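A sketch of the two profiles used above is given below. Parameter names follow the equations, the exponent 3β − α/2 is read from the modified beta model, the normalisation factor Z is kept as an explicit argument, and the fiducial field normalisation B_0 = 0.1 μG from the text is only quoted in a comment.

```python
import numpy as np

def electron_density(r, n0, rc1, alpha, beta, rs, gamma, eps,
                     n02, rc2, beta2, Z=1.0):
    """Modified beta-model n_e(r); parameter names follow the equation above,
    with r in the same units as the core and scaling radii."""
    term1 = (n0**2 * (r / rc1) ** (-alpha)
             / (1.0 + r**2 / rc1**2) ** (3.0 * beta - alpha / 2.0)
             / (1.0 + (r / rs) ** gamma) ** (eps / gamma))
    term2 = n02**2 / (1.0 + r**2 / rc2**2) ** (3.0 * beta2)
    return np.sqrt(Z * (term1 + term2))

def magnetic_field(r, B0, s):
    """Radially declining field strength B(r) = B0 * r^(-s); the fiducial
    normalisation used in the text is B0 = 0.1 microgauss."""
    return B0 * r ** (-s)
```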
The effect of change in profiles on |α_ℓ|^2 is dominated by the the effect in the production of low mass ALPs. This is because the strength of the ALP distortion signal is generally high for low mass ALPs, owing to higher conversion probabilities in the outer regions of the cluster with low electron densities (∝1/|∇ n_e(r)|). This, in turn shows up in the effect on |α|^2. If ALPs of masses in only a sub-range of what we have considered (10^-15 - 10^-11 eV) are formed, the dependence of the ALP background spectrum on these profile parameters can change, especially in the case of formation of only high mass ALPs. This is considered in a separate work <cit.>. The power spectrum |α_ℓ|^2 also depends on the redshift of the host cluster, scaling proportionally to 1 +z, where z is the redshift of the cluster. This happens as the conversion depends on the photon frequency at the conversion location (which is within the cluster) and this photon then gets cosmologically redshifted on its travel from the cluster to us due to Hubble expansion <cit.>. The shape of |α_ℓ|^2's at high multipoles depends on the magnetic field orientation that characterizes the polarization of the ALP signal. If the magnetic field orientation is uniformly random, as in our analysis, |α_ℓ|^2 scales as ℓ^0 (D_ℓ as ℓ^2) at high multipoles. § ALP BACKGROUND SPECTRUM VARIATION WITH MAXIMUM REDSHIFT In principle, clusters from very high redshifts will contribute to the diffused spectrum. But this contribution will be very low and the power spectrum is nearly independent of the maximum redshift z_max up to which we consider the distribution of clusters as can be seen in Fig. <ref> as the spectrums at various redshifts overlap. The halo mass function decreases significantly with increasing redshift and hence, the number of clusters contributing at very high redshifts (z_max > 3.5) to the one and two-halo terms decreases significantly. Also, as a polarized photon travels from high redshifts, it may get depolarized due to multiple scatterings in galaxy clusters. Hence, it is mainly the polarized distortion signals from low redshift clusters that remain intact and contribute to the background ALP spectrum. This explains the choice of z_max of 7 for our analysis as it reduces computation time. § POWER SPECTRUM ESTIMATION FROM A CMB MAP We follow the derivation given in <cit.>. The fluctuations in the CMB or in any signal (foregrounds, ALP distortion, SZ effect, etc.) can be expanded in terms of spherical harmonics as: Δ^net = ∑_ℓ = 0^∞∑_m=-ℓ^ℓ a_ℓ mY_ℓ m(θ,ϕ). The power spectrum is given as: ⟨ a_ℓ m^net* a_ℓ' m'^net⟩= C_ℓ^netδ_ℓℓ'δ_m m'. Here we consider the CMB primordial fluctuations and fluctuations from ALP conversion and foregrounds: Δ^net = Δ^cmb + Δ^ax + Δ^fg. The net power spectrum is given as the ensemble average (with i and j running over the components): C_ℓ^net = ⟨ a_ℓ m^ax* a_ℓ m^ax⟩ = ∑_j∑_i⟨ a_ℓ m^i* a_ℓ m^j⟩ = ⟨ a_ℓ m^cmb* a_ℓ m^cmb⟩ + ⟨ a_ℓ m^ax* a_ℓ m^ax⟩ + ⟨ a_ℓ m^fg* a_ℓ m^fg⟩. Here we have considered the signals to be independent of each other and neglected the cross terms. The finite beam resolution and instrumental noise of the experiment change the spherical harmonic coefficient as: a_ℓ m^obs = B_ℓ(a_ℓ m^cmb + a_ℓ m^ax + a_ℓ m^fg) + η_ℓ m, where B_ℓ = exp(-ℓ(ℓ+1)θ_beam^2 / 2) and η_ℓ m are the fourier coefficients introduced due to instrumental noise. 
The coefficients a_ℓ m^is are assumed to follow a Gaussian distribution with mean zero and variance given by the corresponding C_ℓ^is, i.e., P(a_ℓ m^i|C_ℓ ^i ) = 1/√(2π C_ℓ^i) exp ( - |a_ℓ m^i|^2/2 C_ℓ^i). The noise power spectrum is obtained as: ⟨η_ℓ m* η_ℓ 'm'⟩ = N_ℓ^iδ_ℓℓ 'δ_mm'. The a_ℓ m^obss are assumed to follow a Gaussian distribution with mean B_ℓ∑_ia_ℓ m^i and variance N_ℓ given as: P(a_ℓm^obs|{a_ℓ m^i } ) = 1/√(2π N_ℓ) exp ( - |a_ℓ m^obs - ∑_i B_ℓ a_ℓ m^i|^2/2 N_ℓ). Using Baye's theorem, we have: P(a_ℓ m^obs|{C_ℓ^i } ) = ∏_m=-ℓ^ℓ∫∫∫∏_i d a_ℓ m^i P(a_ℓ m^obs | a_ℓ m^i) P(a_ℓ m^i|C_ℓ^i ), = [2π (B_ℓ^2 ∑_iC_ℓ^i + N_ℓ)]^-(2ℓ + 1)/2 exp [ ∑_m = -ℓ^ℓ -|a_ℓ m^obs|^2/2(∑_iC_ℓ^i B_ℓ^2 + N_ℓ)] . We find the maximum likelihood estimator for the power spectrum of the component i by differentiating with respect to C_ℓ^ax and setting equal to zero. It can be written as: C̃_̃ℓ̃^̃ĩ = B_ℓ^-2[ 1/2ℓ + 1∑_m = -ℓ^ℓ |a_ℓ m^obs|^2 - N_ℓ] - ∑_j ≠ i C_ℓ^j. § CONSTRAINTS ON ALP COUPLING CONSTANT USING TEMPLATE MATCHING OF FOREGROUNDS Template matching assumes the scaling of foregrounds power spectrums with frequencies and can be used to obtain stronger bounds on the modified coupling constant using a lower beam size. We perform template matching to look for the improvement in constraints. Even after masking, the effect of these foregrounds remains even at high latitudes. Thus, a modelling of these foregrounds is required to account for their impact on the power spectrum. These are modelled using high frequencies (ν > 200 GHz) for modelling of dust, while low frequencies (ν < 70 GHz) for synchrotron emission. We perform a template matching of foregrounds for CMB-S4 configuration to look for any improvement on the bounds on modified coupling constant. We assume we know the scaling of galactic synchrotron and dust emissions with frequencies in the microwave and radio regions of the EM spectra. To model the "s-3" synchrotron model used in our mock data, we use the following equation to account for its curved index <cit.>: C_ℓ^syn(ν) = A_ℓ( ν/ν_0)^2β_s + 2C ln (ν / ν_c). The curved spectral index shows a steepening or flattening with frequency above the frequency ν_c. The fiducial values used are C = -0.052, ν_c = 23 GHz and β_s = -3. The modified black-body function with β_d = 1.58 is used to fit the dust model "d-3": (C_ℓ)_dust = A_ℓν ^2β_d B^2_ν(T). By modelling the foreground emissions at low (for synchrotron) and high (for dust) frequencies, the scaling with frequency can be used to obtain the contribution of synchrotron and dust at the frequencies 90 - 160 GHz. This method assumes the scaling of foregrounds with frequencies for multipole range ℓ > ℓ_max corresponding to the beam of the instrument at the matched frequency of 145 GHz for CMB-S4 and SO, while 150 GHz for CMB-HD. We find the fiducial (non-ALPs) contribution of CMB and foregrounds at the required frequency by making different map realizations without ALP signal and calculating the mean power spectrum of those maps. This enables us to scale the ALP diffused spectrum at the matched frequency with respect to the residual of the mock data spectrum and fiducial spectrum. This scaling is compared against the covariance at that frequency (Eq.<ref>) to obtain bounds on the modified ALP coupling constant. The constraints obtained on the modified ALP coupling constant A are shown for various CMB surveys for minimum redshifts z_min = 0.5 and z_min = 1 in Figs. 
<ref> (CMB-S4: A < 0.654 with z_min = 0.5 and A < 0.734 with z_min = 1), <ref> (SO: A < 2.005 with z_min = 0.5 and A < 2.325 with z_min = 1) and <ref> (CMB-HD: A < 0.115 with z_min = 0.5 and A < 0.131 with z_min = 1). The constraints follow the expected trend, with CMB-HD giving the tightest bounds. The constraints from template matching are stronger than those from ILC for all three detectors (SO, CMB-S4 and CMB-HD). The converted constraints (95% C.I.) on the ALP coupling constant g_aγ using template matching are shown in Fig.<ref>. With template matching, we will be able to obtain the following bounds with z_min = 0.5: SO - g_aγ < 1.416 × 10^-12 GeV^-1; CMB-S4 - g_aγ < 8.087 × 10^-13 GeV^-1; CMB-HD - g_aγ < 3.391 × 10^-13 GeV^-1, while the bounds for z_min = 1 are: SO - g_aγ < 1.525 × 10^-12 GeV^-1; CMB-S4 - g_aγ < 8.567 × 10^-13 GeV^-1; CMB-HD - g_aγ < 3.619 × 10^-13 GeV^-1. Both template matching and ILC assume the shape of the foregrounds power spectrum for ℓ > ℓ_max, but since the beam size is larger for ILC (lower ℓ_max), template matching provides better bounds, with a higher ℓ_max corresponding to the lower beam size at the matched frequency for all detectors. It is, however, less reliable, as the modelling of foregrounds at different frequencies may not be precise, leading to biased constraints. This work is a part of a project supported by the TIFR and the Department of Atomic Energy, Government of India. The authors express their gratitude to the TIFR CCHPC facility for meeting the computational needs. Furthermore, we would also like to thank the Simons Observatory (SO), CMB-S4 and CMB-HD collaborations for providing the instrument noise and beam resolutions. Also, the following packages were used for this work: Astropy <cit.>, NumPy <cit.>, CAMB <cit.>, SciPy <cit.>, SymPy <cit.>, Matplotlib <cit.>, HEALPix (Hierarchical Equal Area isoLatitude Pixelation of a sphere)[Link to the HEALPix website http://healpix.sf.net]<cit.>, PySM <cit.> and Cluster Toolkit <cit.>. 
http://arxiv.org/abs/2405.10035v1
20240516121603
A quality control analysis of the resting state hypothesis via permutation entropy on EEG recordings
[ "Alessio Perinelli", "Leonardo Ricci" ]
q-bio.NC
[ "q-bio.NC" ]
[]alessio.perinelli@unitn.it Department of Physics, University of Trento, 38123 Trento, Italy INFN-TIFPA, University of Trento, 38123 Trento, Italy Department of Physics, University of Trento, 38123 Trento, Italy CIMeC, Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy The analysis of electrophysiological recordings of the human brain in resting state is a key experimental technique in neuroscience. Resting state is indeed the default condition to characterize brain dynamics. Its successful implementation relies both on the capacity of subjects to comply with the requirement of staying awake while not performing any cognitive task, and on the capacity of the experimenter to validate that compliance. Here we propose a novel approach, based on permutation entropy, to provide a quality control of the resting state condition by evaluating its stability during a recording. We combine the calculation of permutation entropy with a method for the estimation of its uncertainty out of a single time series, thus enabling a statistically robust assessment of resting state stationarity. The approach is showcased on electroencephalographic data recorded from young and elderly subjects and considering both eyes-closed and eyes-opened resting state conditions. Besides showing the reliability of the approach, the results showed higher instability in elderly subjects that hint at a qualitative difference between the two age groups with regard to the distribution of unstable activity within the brain. The method is therefore a tool that provides insights on the issue of resting state stability of interest for neuroscience experiments. The method can be applied to other kinds of electrophysiological data like, for example, magnetoencephalographic recordings. In addition, provided that suitable hardware and software processing units are used, its implementation, which consists here of a posteriori analysis, can be translated into a real time one. A quality control analysis of the resting state hypothesis via permutation entropy on EEG recordings Leonardo Ricci May 20, 2024 ==================================================================================================== Countless studies in neuroscience rely on the analysis of experimentally recorded electrophysiological activity of the human brain during the so-called resting state, i.e. when subjects are not performing any cognitive task. However, controlling a subject's compliance with this condition is usually not possible during an experiment, and the recorded data might be contaminated by unwanted and unknown activity. In this work, we propose an approach to mitigate this typically overlooked issue by analyzing electroencephalographic data and detecting possible drifts in brain dynamics, corresponding to an unstable resting state condition that compromises the recording's reliability. To carry out this analysis we rely on permutation entropy, an information-theoretical measure of complexity of a time series. A statistically significant drift in permutation entropy along a recording is linked to a lack of stationarity, thus providing a marker of instability of the resting state. Our method makes up a simple and powerful tool to carry out quality control of electrophysiological data, as well as of other time series for which the detection of non-stationarity is essential. 
§ INTRODUCTION The characterization of brain states by relying on the analysis of electrophysiological recordings is a fundamental issue in neuroscience and a basic tool in the medical practice. Among the possible brain states, the so-called resting state plays a major role in the investigation of brain functional organization <cit.>. The interest in resting state brain dynamics was first prompted by Biswal et al. <cit.> in the context of functional magnetic resonance imaging (MRI). Since then, the resting state paradigm has been widely used also in connection with electroencephalographic (EEG) and magnetoencephalographic (MEG) analytical techniques <cit.>. A well-known result concerning the brain organization in resting state is provided by the identification, primarily via functional MRI, of resting state networks <cit.>. Among these, the “default mode” network is possibly the most studied one <cit.>. The definition of the resting state condition is procedural: tipically, resting state corresponds to subjects being instructed by an operator to “do nothing”, i.e. not to perform any cognitive task, while either keeping their eyes closed or fixating a static marker on a screen. Once the instruction is provided, the subject is entrusted with its implementation. As a matter of fact, the operator does not have more than trust to depend on in order to establish the reliability of the experiment outcome. How to objectively determine the quality of the resting state implementation is an unsolved issue. Two different approaches can be envisaged. A first possibility is to devise rigorous and reproducible protocols to be followed when conducting resting state experiments <cit.>. Indeed, as some researchers have pointed out <cit.>, a lack of a shared consensus on the implementation of a resting state is a possible reason for the ongoing underutilization of resting state measurements in clinical applications. However, regardless of how meticulously it is implemented, a protocol cannot provide any a posteriori assessment of the reliability of recorded data. Conversely, the second approach consists of assessing the stability of resting state by means of a measurable quantity. This strategy was followed, for example, by using graph metrics in order to evaluate the reliability of resting state brain networks identified through functional MRI <cit.>. The goal of the present paper is to address the problem of stability of the resting state assumption by analyzing segments of EEG time series to detect variations in the dynamical state of the brain. To this purpose, permutation entropy <cit.> (PE) was used as a marker of time series complexity. Indeed, the analysis of EEG recordings has been increasingly tackled by means of approaches typical of information theory: PE, which has become widespread thanks to its simplicity of implementation, was proposed, for example, as a mean to automatically detect epileptic seizure <cit.> and, more recently, as a biomarker for Alzheimer's disease <cit.>. The data considered here are EEG recordings of healthy young and elderly subjects and reconstructed at 30 brain locations. Data are extracted from the LEMON public database <cit.>, in which resting state EEG recordings are available both in eyes-closed (EC) and eyes-opened (EO) conditions. The availability of several interleaved EC-EO segments for each subject provided the necessary segmentation to evaluate resting state stability. 
Although one might argue that alternating EC-EO states violates the pure resting state assignment, the simplicity of the task is not expected to affect the resting state condition, while avoiding mental drifts like falling asleep or boredom. The analysis proposed here is carried out by evaluating PE on different segments under the null hypothesis of its constancy during the whole recording in each one of the two conditions. The analysis immediately calls for the necessity of estimating the uncertainty of PE assessments. To this purpose, we relied on a recently developed technique <cit.> that allows for overcoming the problem of each time series being a singleton. The scope of application of the method proposed in the present work is not limited to neuroscience. Indeed, the approach discussed here is useful in any field where it is necessary to detect nonstationarity of a system's dynamics out of time series. The present paper is organized as follows. The available dataset, the related preprocessing, as well as a summary of the method for the evaluation PE and its uncertainty are described in Sec. <ref>. The evaluation of the stability of the resting state for a single subject and node is the topic of Sec. <ref>, whereas Sec. <ref> addresses the dependence of stability on age, condition and brain location. In Sec.<ref> we briefly address the complexity of stable resting states. Conclusive remarks on the implications of our results are discussed in Sec. <ref>. § MATERIALS AND METHODS §.§ Dataset and preprocessing EEG data used in the present work belong to the Leipzig Mind-Brain-Body “LEMON” database <cit.>, which is publicly available <cit.>. Data were recorded in compliance with the Declaration of Helsinki and the related study protocol was approved by the ethical committee at the University of Leipzig (reference number 154/13-ff, see also Ref. <cit.>). The available raw recordings were acquired in a sound-attenuated EEG booth by means of a 62-channels active ActiCAP EEG device whose electrodes were attached according to the international standard 10-20 system. The corresponding signals are digitized with a sampling rate of 2.5 kHz upon bandpass-filtering them between 0.015 Hz and 1 kHz. Further details on the data acquisition protocol are available in Ref. <cit.>. The dataset considered here corresponds to two sets of subjects: a “young” set of 30 healthy subjects in the age range between 20 and 35 years old (19 males, 11 females), and an “elderly” set of 30 healthy subjects in the age range between 60 and 80 years old (19 males, 11 females). Data preprocessing and source reconstruction procedure are described in details in Ref. <cit.> (Appendix B) and in Ref. <cit.> (Sec. 2). For the sake of clarity, we summarize here the key steps. Raw EEG recordings were first filtered within the frequency band between 0.1 Hz and 40 Hz by relying of fourth-order high-pass and low-pass filters; second, the power line frequency at 50 Hz was removed by means of a notch filter; third, the sampling frequency was reduced to 250 Hz by downsampling the data. Artifacts due to cardiac, muscular and eye activity were removed by relying on an independent component analysis. To carry out source reconstruction, head models were built out of the individual MRI scans provided in the LEMON database. Source activity was reconstructed by means of the exact low-resolution electromagnetic tomography algorithm (“eLoreta”) that provided current dipoles with unconstrained orientations on a 10 mm template grid. 
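The preprocessing chain described above can be reproduced, for instance, with the MNE-Python library. The following is an illustrative sketch rather than the authors' actual pipeline: the file name, ICA settings and excluded components are placeholders, and the eLORETA source reconstruction step is only indicated in a comment.

```python
import mne

# Illustrative preprocessing chain following the steps described above
raw = mne.io.read_raw_brainvision("sub-XXXXXX.vhdr", preload=True)

iir = dict(order=4, ftype="butter")                     # fourth-order filters
raw.filter(l_freq=0.1, h_freq=40.0, method="iir", iir_params=iir)
raw.notch_filter(freqs=50.0)                            # power-line removal
raw.resample(250)                                       # downsample to 250 Hz

# Cardiac, muscular and ocular artifacts removed via ICA
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = []        # components to reject, selected by inspection
ica.apply(raw)

# Source reconstruction with eLORETA on a 10 mm grid requires the individual
# head model and forward solution (mne.minimum_norm with method="eLORETA");
# it is omitted from this sketch.
```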
The extracted sequences correspond to current dipole power reconstructed at 30 brain locations (henceforth referred to as “nodes”) selected as the centroids of 30 brain regions among the 360 defined in the atlas by Glasser et al. <cit.>. The regions considered are V1, V6A, V4 (occipital); 4, 5L, 6mp (central); Pfm, PF, STV (parietal); STGa, TE1a, TA2 (temporal); 10d, 10pp, p10p (frontal); each region was selected symmetrically in both hemispheres. Each EEG acquisition run consists of 16 alternated segments in EC (8 segments) and EO (8 segments) conditions. To reduce transient effects, we selected 12 consecutive segments, namely 6 EC and 6 EO, by discarding the first and last two segments of each run. Each raw segment covers between ≳ 60 s and 90 s: with the same purpose of reducing transients, we trimmed each segment to 60 s, corresponding to 15000 samples, by symmetrically removing the leading and trailing data points. §.§ Permutation entropy To compute permutation entropy, m-dimensional trajectories 𝐲_n are constructed by selecting m consecutive elements from a scalar time series Y = {y_n}: 𝐲_n = (y_n, y_n+1, …, y_n+m-1). A trajectory is then encoded into a permutation (s_n,1,…,s_n,m), were s_n,j are integer numbers each corresponding to the rank of y_n+j-1 within the trajectory 𝐲_n. As the permutation elements s_n,j belong to the range [1,m], the number of possible permutations is m!. For each possible permutation, the observed rate p̂_S is assessed as the occurrence frequency of that permutation. Permutation entropy is then estimated out of the observed rates p̂_S according to the following expression: Ĥ_m(Y) = -∑_S(p̂_S lnp̂_S) + M̂ - 1/2(N-m+1) , where the sum corresponds to the so-called plug-in estimator, while the second term is the Miller-Madow correction <cit.> that compensates the plug-in estimator bias and depends on the time series length N and the number M̂ of observed permutations (M̂⩽ m!). (We henceforth assume that 0 ln 0 = 0). The growth rate of PE with the dimension m corresponds, asymptotically, to the Kolmogorov-Sinai (KS) entropy of the underlying source. However, because the number of possible permutations grows as m!, the evaluation of PE for m ≳ 10, and thus of the KS entropy, is impractical. Consequently, PE is instead typically employed at fixed m as a marker of complexity or—as in the present work—to detect nonstationarity by evaluating it on different segments of a time series <cit.>. In this context, the demand of a large m has to be traded off against the fact that an unbiased estmation of PE requires the length of the input time series to be much larger than the number of observed permutations. The analysis discussed here was carried out by considering trajectories having dimension m = 7. §.§ Permutation entropy uncertainty estimation A key requirement of the analysis discussed in the present work is the estimation of the uncertainty associated to each PE assessment. To this purpose, we apply a recently developed method <cit.> that relies on the construction of a set of proxy time series out of the original one via surrogate generation <cit.>. Specifically, the surrogate generation algorithm used here is the iterative amplitude-adjusted Fourier transform (IAAFT) algorithm <cit.>, whereby the amplitude distribution and the (approximate) autocorrelation of the original time series are preserved in the surrogate ones. 
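A compact NumPy sketch of the estimator in the equation above, including the Miller-Madow correction and with m = 7, could read as follows; ties in the rank encoding are broken arbitrarily, which is adequate for continuous-valued data.

```python
import numpy as np

def permutation_entropy(y, m=7):
    """Miller-Madow corrected permutation entropy, as in the equation above.

    y : one-dimensional time series
    m : trajectory (embedding) dimension; m = 7 in the present analysis
    """
    y = np.asarray(y, dtype=float)
    n_traj = y.size - m + 1                      # N - m + 1 trajectories
    traj = np.lib.stride_tricks.sliding_window_view(y, m)
    # encode each trajectory into its ordinal pattern (rank sequence)
    patterns = np.argsort(np.argsort(traj, axis=1), axis=1)
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p_hat = counts / n_traj                      # observed pattern rates
    plug_in = -np.sum(p_hat * np.log(p_hat))
    m_obs = counts.size                          # number of observed patterns
    return plug_in + (m_obs - 1) / (2.0 * n_traj)
```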
Upon generating a number L of surrogate time series and computing the corresponding PE values, the uncertainty on the PE of the original time series is provided by the standard deviation of the L surrogate PE assessments multiplied times a suitably trimmed scaling factor α. For a detailed description of the method, the reader is referred to Ref. <cit.> (Sec. 3). Following the guidelines provided therein, the number of surrogate time series to be generated for each evaluation was set here to L = 100 and the scaling factor α was set to 2. § ASSESSING THE STABILITY OF THE RESTING STATE For each subject, node and condition, six values of PE and the related uncertainty are estimated by means of the steps described in Secs. <ref> and <ref>. Figure <ref> shows an example for a single subject belonging to the young group and a single node. The stability of the resting state is then evaluated on each of the two sets (EC and EO) of six PE assessments as follows. First of all, stability is assumed to occur whenever PE is independent of time, i.e. fluctuations of PE between segments are statistically compatible with the uncertainty associated to the PE values. To this purpose, a constant value ĥ, corresponding to a horizontal line in the plot, is fitted to the data. The p-value corresponding to the resulting χ^2 statistic is then evaluated by relying on a χ^2 distribution with 5 degrees of freedom. Consequently, given a subject, node and condition, resting state is deemed to be “unstable” if the p-value falls below the Bonferroni-corrected significance threshold of 0.05 / 30 ≃ 0.0017. The values of the best-fit constant ĥ for the EC, EO conditions are equal to 5.0 ± 0.1 and 5.29 ± 0.04, respectively. The χ^2 test provides, for the two conditions, p-values equal to 2· 10^-10 and 0.087, leading to the conclusion that resting state under the EC condition is unstable, whereas stability occurs under the EO condition. § STABILITY ANALYSIS IN TERMS OF AGE, CONDITION, AND BRAIN REGION For each node, the number of subjects yielding an “unstable” resting state in the EC condition is shown in Fig. <ref>. The analogous assessment concerning the EO condition is instead shown in Fig. <ref>. In both figures, data are displayed separately for the two sets of subjects, young and elderly. The data displayed in Figs. <ref>, <ref> suggest that “instability” of resting state is not uncommon: the number of subjects in which an “unstable” resting state condition is detected is close to one third of the whole set of subjects. Averaging on the 30 nodes, the frequency of unstable nodes for the young group is ≃ 26% (EC) and ≃ 28% (EO), while for the elderly group it is equal to ≃ 44% (EC) and ≃ 34% (EO). In other words, as it results in Figs. <ref>, <ref>, the number of subjects exhibiting “instability” is generally larger for the elderly group than for the young group, regardless of the brain location. This observation can be explained by elderly subjects tending to be more affected by fatigue. One might wonder whether stability is a consistent property among nodes of the same subject, namely whether instability of resting state concerns the whole brain or, rather, few localized areas. To this purpose, Fig. <ref> shows, for the young set of subjects, a color map of stability as a function of subject and node. 
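Before describing the color map in detail, the surrogate-based uncertainty and the constancy test outlined above can be sketched as follows, reusing the permutation_entropy function above. The IAAFT routine here runs a fixed number of iterations instead of a convergence criterion, and the "suitably trimmed" scaling factor is simplified to a constant α = 2.

```python
import numpy as np
from scipy import stats

def iaaft_surrogate(y, n_iter=100, rng=None):
    """One IAAFT surrogate of y: exact amplitude distribution, approximately
    preserved power spectrum (fixed number of iterations for simplicity)."""
    rng = np.random.default_rng(rng)
    sorted_y = np.sort(y)
    target_amp = np.abs(np.fft.rfft(y))
    s = rng.permutation(y)
    for _ in range(n_iter):
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(target_amp * np.exp(1j * phases), n=y.size)
        s = sorted_y[np.argsort(np.argsort(s))]   # restore amplitude distribution
    return s

def pe_with_uncertainty(y, m=7, L=100, alpha=2.0):
    """PE of y and its surrogate-based uncertainty (scaling factor alpha)."""
    h = permutation_entropy(y, m)
    h_surr = [permutation_entropy(iaaft_surrogate(y), m) for _ in range(L)]
    return h, alpha * np.std(h_surr, ddof=1)

def constancy_test(h_values, h_errors, n_nodes=30):
    """Chi-square test of the null hypothesis that PE is constant across segments."""
    h_values, h_errors = np.asarray(h_values), np.asarray(h_errors)
    w = 1.0 / h_errors**2
    h_fit = np.sum(w * h_values) / np.sum(w)              # best-fit constant
    chi2 = np.sum(((h_values - h_fit) / h_errors) ** 2)
    p_value = stats.chi2.sf(chi2, df=h_values.size - 1)   # 5 dof for 6 segments
    return h_fit, p_value, p_value < 0.05 / n_nodes       # Bonferroni threshold
```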
Specifically, to each subject-node pair, we assigned a white color if both EO and EC are deemed to be stable, and a different color in the case of unstable EC (light green), unstable EO (bluish green), or both unstable EC and EO (dark purple). The same analysis is reported in Fig. <ref> for the elderly set of subjects. Besides the color map, each figure also reports an histogram of the number of subjects as a function of the number of stable nodes. While the distribution for young subjects is right-skewed, with most subjects having a number of stable nodes ≳ 20, the distribution for elderly subjects appear to be more noisy, with the majority of subjects exhibiting either a “globally stable” or a “globally unstable” resting state. This observation might reflect an underlying lower modularity in elderly subjects <cit.>, namely the fact that brain dynamics tends to involve many areas. § COMPLEXITY OF STABLE RESTING STATES Whenever a combination of subject, node and condition is deemed to be stable, the corresponding ĥ value, which is indeed an average PE, can be taken as a marker to characterize the complexity of the underlying dynamics. The distributions of ĥ values corresponding to stable resting state are shown in Fig. <ref>, grouped by age and condition. It is first worth highlighting that the maximum PE value with m = 7, corresponding to white noise time series, is ln 7! ≃ 8.525: according to the data shown in Fig. <ref>, the EEG recordings analyzed here do not correspond to a purely stochastic dynamics. The four distributions displayed in Fig. <ref> are comparable in width and shape, though with a slight shift towards higher PE values occurring in the elderly sets. A two-tailed t-test between the young-EC and elderly-EC data rejects the null-hypothesis of equal means (p < 0.001); the same outcome is obtained by testing the young-EO, elderly-EO sets. § DISCUSSION In this work, we exploited EC-EO conditions as a benchmark to study resting state stability. The EC-EO paradigm is typically studied in terms of the spectral changes, most notably the presence of alpha rhythms in EC (<cit.>). More recently, the EC-EO paradigm was analyzed through an information-theoretic approach in two works. In the first one <cit.>, the authors analyzed EEG recordings under the two conditions by applying a method <cit.> aimed at assessing the transition probabilities of ordinal patterns (i.e. permutations) and determining the so-called network-averaged node entropy and asymmetry coefficient. The authors showed that the EO condition is characterized by higher entropy values, accompanied by a lower asymmetry coefficient, with respect to the EC condition. In the second paper <cit.> the authors described an approximate entropy analysis of EEG recordings recorded in the two conditions, again showing that the EO condition provides higher entropy values than the EC. Those results are in agreement with the ones obtained with the present analysis: we also found higher PE values in the EO condition. It is worth remarking that both works mentioned above carried out the respective analyses in sensor space: while this approach is computationally less demanding, one has to keep in mind that the influence of volume conduction might produce spurious results, as it was shown in the case of connectivity measures <cit.>. 
The application of information-theoretical tools to electrophysiological data often lacks an estimation of the related uncertainty, despite the availability of asymptotic formulas <cit.>: the significance of the quantities inferred in the analysis is typically based on averaging over a set of subjects. Conversely, the approach followed here, which relies on surrogate-based estimation of PE uncertainty, is capable of assigning an uncertainty value to each PE assessment. This capability paves the way to the possibility of drawing subject-specific quantitative conclusions from data. Besides the present application to quality control of resting state stability, such a possibility is of primary importance in the development of diagnostic biomarkers based on EEG measures <cit.> and other kinds of electrophysiological data like, for example, magnetoencephalographic recordings. As final remarks, it is worth noting that a real-time implementation of the method depends only on the hardware and software units that are used to process the data. In addition, the method proposed in the present work is not limited to neuroscience: its scope can be widened to include any field where it is necessary to detect nonstationarity of a system's dynamics out of singleton time series. § CONFLICT OF INTEREST STATEMENT The authors have no conflicts to disclose. § ETHICS APPROVAL EEG data used in the present work belong to the Leipzig Mind-Brain-Body “LEMON” database, for which the related study protocol was approved by the ethical committee at the University of Leipzig (reference number 154/13-ff). Further details are available in Ref. <cit.>. § DATA AVAILABILITY STATEMENT Raw EEG recordings used in the present work are available in the LEMON database at <http://fcon_1000.projects.nitrc.org/indi/retro/MPI_LEMON.html>, Ref. <cit.>. 
http://arxiv.org/abs/2405.09972v1
20240516103239
Predicting Solar Heat Production to Optimize Renewable Energy Usage
[ "Tatiana Boura", "Natalia Koliou", "George Meramveliotakis", "Stasinos Konstantopoulos", "George Kosmadakis" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.SY", "eess.SY" ]
A]Tatiana Boura0009-0008-0656-4372Corresponding Author. Email: tatianabou@iit.demokritos.gr. A]Natalia Koliou0009-0004-3920-9992 B]George Meramveliotakis0000-0002-0110-347X A]Stasinos Konstantopoulos0000-0002-2586-1726 B]George Kosmadakis0000-0002-3671-8693 [A]Institute of Informatics and Telecommunications, NCSR Demokritos, Ag. Paraskevi, Greece [B]Institute of Nuclear & Radiological Sciences and Technology, Energy & Safety, NCSR Demokritos, Ag. Paraskevi, Greece Utilizing solar energy to meet space heating and domestic hot water demand is very efficient (in terms of environmental footprint as well as cost), but in order to ensure that user demand is entirely covered throughout the year needs to be complemented with auxiliary heating systems, typically boilers and heat pumps. Naturally, the optimal control of such a system depends on an accurate prediction of solar thermal production. Experimental testing and physics-based numerical models are used to find a collector's performance curve — the mapping from solar radiation and other external conditions to heat production — but this curve changes over time once the collector is exposed to outdoor conditions. In order to deploy advanced control strategies in small domestic installations, we present an approach that uses machine learning to automatically construct and continuously adapt a model that predicts heat production. Our design is driven by the need to (a) construct and adapt models using supervision that can be extracted from low-cost instrumentation, avoiding extreme accuracy and reliability requirements; and (b) at inference time, use inputs that are typically provided in publicly available weather forecasts. Recent developments in attention-based machine learning, as well as careful adaptation of the training setup to the specifics of the task, have allowed us to design a machine learning-based solution that covers our requirements. We present positive empirical results for the predictive accuracy of our solution, and discuss the impact of these results on the end-to-end system. Keywords: Solar energy; Renewable energy; Deep learning § INTRODUCTION Efficient utilization of renewable energy, especially solar energy through solar thermal or PV/thermal (PVT) collectors, is a promising solution to meet the needs for heating and domestic hot water, offering substantial advantages with respect to both environmental footprint and operational costs. Particularly during mid-seasons and summer, the available solar radiation effectively eliminates the need for conventional boilers. However, to ensure that user demand is entirely covered throughout the year, collectors are complemented with auxiliary heating systems, typically boilers and heat pumps. The controller of such a system should minimize the use of auxiliary heating by employing insulated hot water tanks to shift the load from when demand is high to when there is excess production. Although this is relatively straight-forward for space heating, combined space heating and domestic hot water usage is more challenging as the latter experiences demand spikes. Individual usage patterns can be user-parameterized or statistically estimated, but optimal control also requires reliable prediction of solar thermal production to match the demand and reduce heat losses from the storage tank. Given the importance of estimating solar thermal collectors' performance, it is only natural that the problem has been approached from multiple angles. 
Based on experimental testing, the methods for estimating the efficiency of collectors under specified conditions have been standardized (ASHRAE 93, ISO 9806, EN12976-2). These methods provide the parameters required to predict the long term performance of solar thermal collectors <cit.>. Numerical modelling methods have also been developed to estimate the heat production and efficiency of collectors <cit.>. These models account for heat transfer mechanisms such as conduction, convection, and radiation under known plate geometry and operating conditions and serve as useful tools for the theoretical design and optimization of these components. Accurate as they might initially be, the major shortcoming of experimental and numerical methods is that the performance curve of the collector changes over time once the collector is exposed to outdoor conditions. In order to deploy advanced control strategies at scale, we need an approach that does not rely on prior knowledge of the performance curve, but can automatically construct and continuously adapt a model that predicts heat production based on expected external conditions (solar radiation and ambient temperature). Machine learning is a natural fit for such a task, and unsurprisingly, there is a rich literature on applying machine learning to construct predictors for thermal collectors' heat output. In the work presented here, we focus on the additional requirements for the mass-deployment of advanced control strategies in small domestic installations. Besides the continuous adaptation covered by using machine learning in general (presented in Section <ref>), we also aim for a system that relies only on data that can be obtained from relatively low-cost instrumentation and public weather forecasts. To achieve this, we rely on recent developments in deep learning (presented in the second part of Section <ref>) and the adaptation of the machine learning setup to the specifics of the task (Section <ref>). We then present empirical results (Section <ref>) and conclude (Section <ref>). § BACKGROUND §.§ Predicting thermal collector performance As already stated above, machine learning has been extensively applied to construct models of solar thermal and PVT collectors. <cit.> give a comprehensive review, and show that these machine learning exercises are generally successful. Solar radiation and ambient temperature serve are the key inputs in most of these models, unsurprisingly, as physics-based models also depend on these two variables. What is worth noting about the relevant literature, is that it is to a large extent addressing the design phase of collectors or otherwise studying collectors outside the context of controlling solar and auxiliary heating systems. For instance, and moving forward to more recent works, <cit.> compared multi-layer perceptron, RBF, and Elman back-propagation ANNs on the task of modelling the efficiency of solar collectors at various flow rates of the working fluid; whereas <cit.> compared how the use of different working fluids affected the accuracy of ANN models in predicting collector performance. The study demonstrated the effectiveness of ANN in predicting collector performance across all three fluids, highlighting the potential of machine learning as a cost-effective and time-saving alternative to conventional testing and modelling methods. 
Closer to our work, <cit.> used Multi-Layered Perceptron Neural Networks to predict the thermal performance of a PVT evaporator of a solar-assisted heat pump, demonstrating close alignment with experimental data and also identifying solar radiation and ambient temperature as the primary input variables. <cit.> trained a convolutional neural network on a richer set of inputs (solar radiation, ambient temperature, wind speed, fluid flow rate, and fluid inlet temperature). The developed model showed strong predictive capability in sunny conditions, but the authors do not elaborate on the impact of the extended inputs on the prediction. Additionally, they also note that clustering analysis to screen the data and eliminate outliers improves the accuracy of the prediction. §.§ Time representation One of the practical considerations often encountered in timeseries processing is that fully observed, uniformly sampled inputs are almost impossible to gather in realistic conditions. Reasons include gaps in the data, varying sampling rates, and (for multivariate timeseries) misalignment between variables' timing. One approach for facing the reality of irregularly sampled and sparse multivariate timeseries is to first reconstruct a full, uniformly sampled timeseries, while another is to directly look for the underlying structure of the data that is available. Reconstruction methods range from simple imputation and aggregation <cit.> to sophisticated interpolation methods <cit.>. Other works adjust the structure of known recurrent neural networks to directly handle irregularly sampled timeseries as input. <cit.> present several methods based on Gated Recurrent Units (GRUs), <cit.> propose to capture time irregularity by modifying the forget gate of an LSTM, and <cit.> introduce a new time gate to be utilized within an LSTM. Another, more recent algorithm is proposed by <cit.>, assuming a hidden state that evolves according to a linear stochastic differential equation and is integrated into an encoder-decoder framework. <cit.> present the ODE-RNNs, another auto-encoder architecture, whose hidden state dynamics are specified by neural ordinary differential equations. These methods, however, fail to learn directly from partially observed vectors, which often occur when dealing with multiple variables. Some early works <cit.> utilize the idea of Gaussian Processes (GPs) to model irregular timeseries. These works first optimize GP parameters and then train the corresponding model. <cit.> achieve end-to-end training for multivariate timeseries by using the re-parametrization trick to back-propagate the gradients through a black-box classifier. Though GPs provide a systematic way to deal with uncertainty, they are expensive to learn and are highly dependent on the chosen mean and covariance functions. To overcome these challenges, many researchers have also explored the use of Transformers. Unlike recurrent and differential equation-based architectures, which process inputs sequentially, Transformers use self-attention mechanisms <cit.> to capture relationships between all input positions simultaneously. Although they can be more efficient due to their parallel processing nature, they lack an inherent mechanism for representing temporal order. To address this limitation, a proposed method is to augment the input features with time embeddings. When the data is known or suspected to exhibit periodicity, <cit.> propose using sinusoidal functions that separate periodicity and phase.
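As a rough illustration of such fixed cyclical encodings (a sketch only, not the exact formulation of the cited work; the library calls are standard pandas/numpy and the column names are our assumptions), the periodic features can be derived directly from the timestamp:

import numpy as np
import pandas as pd

def add_cyclic_time_features(df, time_col="timestamp"):
    """Append sine/cosine encodings of month, day of year and hour.
    Each (sin, cos) pair maps a periodic quantity onto the unit circle,
    so that, e.g., hour 23 and hour 0 end up close to each other."""
    t = pd.to_datetime(df[time_col])
    for name, value, period in [("month", t.dt.month, 12),
                                ("day_of_year", t.dt.dayofyear, 365.25),
                                ("hour", t.dt.hour, 24)]:
        angle = 2 * np.pi * value / period
        df[name + "_sin"] = np.sin(angle)
        df[name + "_cos"] = np.cos(angle)
    return df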
Adding, for instance, the sine and cosine of (numerical representations of) months, days, and hours as input features provides the model with the means to capture periodicity across different time scales. We shall refer to this model as CycTime. Naturally, this representation is static and assumes prior knowledge of which periodicities make sense for the phenomenon being modelled. As expected, a line of research was developed to enhance the capabilities of attention-based architectures to inherently address the lack of time information. The multi-time attention network (mTAN) <cit.> is similar to kernel-based interpolation with the difference that the attention-based similarity kernel is learnt. This model acquires the time embeddings from a shallow network that learns both periodic time patterns and non-periodic trends. These embeddings are then utilized by a multi-head attention mechanism that produces an embedding module composed of a set of reference time points and also re-represents the input features in a fixed-dimensional space. This discussion motivates our choice of candidate models: CycTime can take advantage of our prior knowledge of the existence of annual and daily periodicities in solar heat production, while mTAN can discover the interaction between the annual/daily periodicity and the efficiency degradation trend of the collectors. § OPTIMAL CONTROL FOR PVT INSTALLATIONS As mentioned in the Introduction, our application targets the mass-deployment of advanced control strategies in small domestic installations. The high-level description of such an installation and its operation is as follows: * Hot water is produced by solar thermal or by PVT collectors that simultaneously produce thermal and electrical energy. Electricity production in the PVT case is not taken into account for the purposes of the work described here. * Hot water is used both for space heating and to cover domestic hot water demand. Ideally, only water from the solar thermal collector will be used for both usages, but in order to ensure meeting demand the system is also equipped with (less efficient) auxiliary heating systems. * The system decides automatically how to distribute the available hot water between space heating and domestic hot water demand and when to use the auxiliary heating system. Our experimental installation is instrumented with a flow rate meter, as well as inlet and outlet temperature sensors from which hot water production can be quantified.[These sensors are currently installed for our experimental purposes, but similar information can be obtained from low-cost domestic heat meters.] Prior work has used this experimental installation on NCSR Demokritos premises to test and validate this configuration for both office (space heating only) and domestic (space heating and hot water) usage profiles <cit.>. The system is controlled by static summertime and wintertime rules that are tailored to the local climate. In our domestic application, the crucial quantity for these rules is how the expected heat production compares to the expected domestic hot water (DHW) demand. Heat production can be directly calculated from sensor data as a function of the input and output temperatures of the collector and the water flow. Note that although some parameters are hardware dependent, they are constant throughout an installation's life-cycle and do not need to be re-evaluated.[Assuming the mirrors are regularly cleaned.]
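For concreteness, this direct calculation amounts to the standard heat-meter formula Q = m_dot * c_p * (T_out - T_in); a minimal sketch follows (function and parameter names are hypothetical, and the specific heat and density defaults are only indicative of a water/glycol mixture; they would have to match the actual installation):

def heat_power_kw(flow_lpm, t_out_c, t_in_c, cp_kj_per_kg_k=3.8, density_kg_per_l=1.03):
    """Instantaneous thermal power in kW from the volumetric flow rate (L/min)
    and the collector inlet/outlet temperatures (degrees Celsius)."""
    mass_flow_kg_s = flow_lpm * density_kg_per_l / 60.0   # L/min -> kg/s
    return mass_flow_kg_s * cp_kj_per_kg_k * (t_out_c - t_in_c)  # kJ/s = kW

In the dataset described below, such instantaneous values are aggregated to the same three-hour windows as the weather inputs.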
But it can also be estimated through the (experimentally determined) collector efficiency curve that maps external conditions to heat production. Using the collector efficiency curve has the advantage that we can use weather forecasts to predict the expected production, but (as already noted) the curve changes as the collector ages and needs to be re-evaluated. What our application achieves is to continuously maintain a machine-learned surrogate of the collector efficiency curve by using the installation's instrumentation to directly calculate heat production, which is then used to label external conditions as provided by local weather reports. There is significant inherent uncertainty in the processes we aim to model, so it helps to abstract the prediction to bands of values that are meaningful for the given task. Further to this, as the system includes an insulated hot water tank, the timing of the production does not need to closely match the timing of the demand, not even for the morning spike as the previous day's water can be used, if there was an excess. In other words, we are not interested in the exact timing of the production, but only in whether the system expects to run out of hot water at some point during the day. This gives us the opportunity to reduce uncertainty by abstracting the predictive task to wide bands of heat production over wide time periods, instead of aiming to predict specific values at specific time-points. Since weather forecasts are published for 3h periods, it makes sense to target this granularity in the time dimension: Any finer predictions would simply repeat the same inputs, or rely on interpolations that would add uncertainty; any coarser predictions would have to aggregate the inputs, thus reducing the information available to the system. Regarding the granularity at which the value of heat production is predicted, we have tested three alternative approaches: * Balanced ranges, simply separating the range of values into bands of equal width. * Balanced classes, selecting thresholds that give a balanced dataset by simply sorting all values in the training data and cutting into equal sub-arrays. * Maximum margins, selecting thresholds that are as far as possible from expected levels of demand, in order to offer downstream applications robustness to small divergence between expected and actual demand. For the maximum-margins thresholds we referred to the DHW demand profiles that are used to validate energy savings from ecodesign of components <cit.>. We set the thresholds to be exactly between the distinct values appearing in that table. The specific threshold values are given in Table <ref>.[The maximum-margins thresholds in this table are based on the medium profile. It is straightforward to adjust them for different usage profiles.] § EXPERIMENTS AND RESULTS §.§ Experimental Setup Our data was collected from the pilot building at NCSR Demokritos. The building uses PVT for space heating and hot water, with a heat pump as the auxiliary heating system. The building is equipped with all necessary sensors to monitor the performance of the various components tested there, including, among others, a pyranometer (for solar radiation), a temperature sensor with radiation shield (for ambient temperature), temperature sensors (for water/glycol inlet/outlet temperatures), and flow rate meters (for water/glycol and hot water discharge). Our focus for the work described here is on the part of the test facility that includes 4 PVT collectors (total surface of 8.6 sq.m.) charging a 300-litre water tank.
The medium tapping profile is programmed to remove heat from this tank with the use of a controllable valve. From the operation of the pilot building we have acquired one year's worth of data, from which ground-truth heat production can be calculated. For the same period we have also obtained weather data, i.e., temperature, humidity levels, pressure, wind speed, accumulated rain and snow, measured for the wider region that includes the installation. We do not use weather forecasts, but actual weather data in order to factor out the effect of weather forecasting uncertainty; the weather data we used is of the same kind and detail as the forecasts published daily by the National Observatory of Athens. As we mentioned already, weather forecasts are published for 3h periods, so for our dataset we also aggregated accordingly. It is a common occurrence when collecting data from sensors to observe missing values within the dataset due to defective sensors, networking issues etc. In our case, we often have water temperature values falling outside the accepted temperature range of -20^∘C to 100^∘C. As heat production is calculated from multiple water temperatures, it has approximately 15% missing values due to one or more of the variables needed to calculate it being missing. Putting everything together, our machine learning task is to predict a time-series of length 8 (because of the 3h aggregation) where each prediction places heat production in one of the 5 classes (value bins) defined by the thresholds in Table <ref>. §.§ Model Architectures To perform the classification task we compare three methods: (a) a conventional RNN approach, (b) the CycTime transformer that uses fixed-time embeddings and (c) the mTAN module. Baseline We include in our experiments a standard timeseries processing approach as the baseline: linear interpolation to fill missing and out-of-range sensor values, RNN, linear classification layer. Transformer with fixed time embedding (CycTime) Our next approaches to tackle the classification problem are based on the idea of Transformers. This first method uses fixed positional time encodings. Regarding the model's architecture, it consists of an encoder, a decoder, and a classifier. The encoder maps the input into a latent representation, while the decoder reconstructs it, preserving the original dimensionality of its features. The classifier layer then outputs a probability distribution over the five classes, with each element representing the probability of the input belonging to a particular class. Multi-time attention network (mTAN) Adhering to the trajectory of attention-based approaches but employing a methodology based on learning the positional encodings of a timeseries, we apply the mTAN <cit.> module. We deployed this module into an encoder-decoder architecture with the decoder being a single linear classification layer, the same as the one used in the previously discussed architectures. It is expected that the system should be given some freedom to organize time in finer steps than the eight three-hour steps it receives as input, and using a multiple of 8 makes it straightforward to aggregate the outputs back to three-hour steps; we found empirically that using 32 time-reference points and mean as an aggregator works best. All models were trained using multi-class cross-entropy as the loss function, with an implemented mechanism to mask unobserved predicted labels. Figure <ref> presents the models' training loss progression over epochs.
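A minimal sketch of such a masked loss, assuming PyTorch and hypothetical tensor shapes (logits of shape [batch, 8, 5], integer labels of shape [batch, 8], missing labels marked with -1):

import torch
import torch.nn.functional as F

def masked_cross_entropy(logits, labels, missing_value=-1):
    """Multi-class cross-entropy that ignores time steps whose label is missing."""
    b, t, c = logits.shape
    flat_logits = logits.reshape(b * t, c)
    flat_labels = labels.reshape(b * t)
    mask = flat_labels != missing_value
    if mask.sum() == 0:            # nothing observed in this batch
        return logits.sum() * 0.0  # zero loss that keeps a valid gradient graph
    return F.cross_entropy(flat_logits[mask], flat_labels[mask])

The same effect can be obtained with the ignore_index argument of the built-in cross-entropy loss; the explicit mask is shown only to make the mechanism visible.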
Note that although the CycTime model appears to have the longest training time per epoch (10 times longer than the mTAN model), it requires 10 times fewer epochs on average to converge. Consequently, both models require roughly the same overall training time, which is 13-15 minutes on a single NVIDIA RTX 5000 Ada Generation GPU. This level of computational load should not pose any challenge for monthly or even weekly model updates. §.§ Results and Discussion We evaluated the performance of each model on the final two days of every month, data that we had excluded from the training set. The non-domain-specific evaluation metrics, which capture the micro, macro, and weighted recall, precision, and F-score achieved by the models in each predictive task, are presented in Table <ref>. The micro average calculates the overall performance across all classes, while the macro average computes the average score across all classes, treating each class equally, and the weighted measurements calculate each metric with each class weighted by its support. As anticipated, the RNN model exhibits poorer performance across all tasks, while the other attention-based models compete closely with each other. Regarding the choice of value quantization, there does not seem to be any dramatic advantage in class-balancing. This affords us the flexibility to prefer the max-margins quantization, which offers an advantage for downstream processing. For this reason, we will continue the discussion using results from the max-margins quantization. Besides knowing which is the most accurate model, it is also important to analyse their behaviour when they err. Since the task is one of inherent uncertainty, we expect errors and we expect that the downstream controller will be aware of the possibility of errors and behave accordingly. This means that we are interested in the graceful degradation of model accuracy so that the controller is not completely thrown off and in having models that do not fail with confidence, so that the control can trust high-confidence predictions. Failing with confidence Figure <ref> shows the confidence with which the different models make predictions. Specifically, it displays the percentage of test instances that were classified correctly or misclassified with respect to different confidence margins of the prediction. Each prediction represents the distribution of likelihoods among five different value ranges. As a measure of confidence, we denote the likelihood difference between the best guess and the second best guess. Ideally, we would like the likelihood mass for correctly predicted examples to be concentrated on a single output, while the likelihood mass for misclassified samples to be spread across the five different classes. Simply put, model confidence is a positive quality only when its predictions are also accurate. Although the RNN model exhibits the desired confidence behaviour, its accuracy is worse than that of the other two models. The mTAN architecture has better accuracy than RNN, but with a similar behaviour when failing. On the other hand the CycTime model, although more accurate than the other models, when it fails it fails with confidence. Distance between true and predicted class Figure <ref> shows the distance between the predicted class and the ground-truth class for the different models. There are no reversals of the relative ordering of the three models, so as far as graceful degradation is concerned CycTime is clearly preferred. 
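For reference, the two quantities used in this error analysis, the confidence margin and the distance between predicted and true class, can be computed as in the following sketch (array names and shapes are hypothetical):

import numpy as np

def confidence_margin(probs):
    """probs: array of shape [n_samples, n_classes] with class likelihoods.
    Returns the difference between the best and the second-best likelihood."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def class_distance(probs, true_labels):
    """Absolute distance, in number of value bins, between the predicted
    and the ground-truth class."""
    return np.abs(probs.argmax(axis=1) - true_labels)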
Morning errors Most errors occur between 7am and 10am, with practically all of the rest falling between 10am and 7pm (cf. Table <ref>). The high concentration of errors during the 7-10am period can be attributed to the morning's dual nature, resembling both day and night depending on the season. For instance, summer mornings are sunny, resembling daytime, while winter mornings may still appear dark, resembling night-time. It seems natural that CycTime is able to handle this slightly better than the rest, since it is equipped with the means to represent the annual cycle as priors. We should remind at this point that we cannot change the time aggregation in order to mitigate this, as it is based on how weather forecasts are published. § CONCLUSIONS AND FUTURE WORK We have presented an application of the state-of-the-art in Transformer models to the high-impact domain of renewable energy. Specifically, we applied Transformer models to predict heat production from solar collectors to identify whether it is adequate to cover the demand and, ultimately, to minimize the usage of (less efficient) auxiliary heating systems. Based on the analysis presented above, we have decided to install the mTAN model, for both theoretical and empirical reasons. From the theoretical point of view, mTAN should be able to separate the annual/daily cycles from the longer trend of collector efficiency deterioration. We will revisit this empirically once the collectors at our installation have aged enough to see perceptible changes, but having this capability supports further applications such as predictive maintenance. From the empirical analysis, we have observed that having a static prior time representation gives a noticeable but not impressive advantage in prediction accuracy. However, mTAN has the advantage that it is less prone to failing with confidence: When it fails, mTAN tends to have called a prediction on a slim difference in likelihood, whereas CycTime tends to have much higher confidence in success as well as in failure. mTAN's behaviour is more appropriate for our application, since the controller can be configured to use the more conservative or the more optimistic predictions, depending on the expected demand and on the current condition of the system's hot water tanks. With the mTAN predictor installed and actively in use, we will now be able to validate the actual energy gains by comparison to the conventional (season-based) control previously used. This will allow us to experiment with different policies on how often to re-train the model and, in general, to validate the complete concept of using the installation's sensor data to have the system automatically adapt. In the analysis presented here we have used actual weather conditions instead of weather forecasts, as the focus was to see how well we can model the installation's internal state. In future work, we will investigate the combined effect of weather forecasting uncertainty and model uncertainty when controlling conservatively or aggressively. Finally, in further future work we plan to revisit the smart control strategy with the aim of defining a more dynamic strategy that can absorb these uncertainties. Such a strategy would exploit the system's internal state (e.g., the current charge of the hot water tanks) and the knowledge of the margins of uncertainty of future demand, weather forecasts, and the heat production predictor to balance these uncertainties into a control that is optimal in the long run.
This research has been co-financed by the European Union and Greek national funds through the program Flagship actions in interdisciplinary scientific areas with a special interest in the connection with the production network — GREEN SHIPPING — TAEDR-0534767 (Acronym: NAVGREEN). For more information please visit <https://navgreen.gr> 26 urlstyle [Cadafalch(2009)]cadafalch:2008 J. Cadafalch. A detailed numerical model for flat-plate solar thermal devices. Solar Energy, 83, 2009. doi:10.1016/J.SOLENER.2009.08.013. [Che et al.(2018)Che, Purushotham, Cho, Sontag, and Liu]che_etal Z. Che, S. Purushotham, K. Cho, D. Sontag, and Y. Liu. Recurrent neural networks for multivariate time series with missing values. Scientific Reports, 8, 04 2018. [Du et al.(2022)Du, Lund, and Wang]du-etal:2022 B. Du, P. D. Lund, and J. Wang. Improving the accuracy of predicting the performance of solar collectors through clustering analysis with artificial neural network models. Energy Reports, 8, Nov. 2022. doi:10.1016/J.EGYR.2022.03.013. [European Commission()]eu_load_profiles European Commission. Regulation (EU) no 814/2013. OJ L 239 6.9.2013, p. 162, 2013. URL <http://data.europa.eu/eli/reg/2013/814/2017-01-09>. Ecodesign requirements for water heaters and hot water storage tanks. [Futoma et al.(2017)Futoma, Hariharan, and Heller]futoma_etal J. Futoma, S. Hariharan, and K. Heller. Learning to detect sepsis with a multitask gaussian process rnn classifier. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, page 1174–1182. JMLR.org, 2017. [Ghritlahre and Prasad(2018)]ghritlahre-etal:2018 H. Ghritlahre and R. Prasad. Application of ANN technique to predict the performance of solar collector systems — a review. Renewable and Sustainable Energy Reviews, 84, 2018. doi:10.1016/J.RSER.2018.01.001. [Gunasekar et al.(2015)Gunasekar, Mohanraj, and Velmurugan]gunasekar-etal:2015 N. Gunasekar, M. Mohanraj, and V. Velmurugan. Artificial neural network modeling of a photovoltaic-thermal evaporator of solar assisted heat pumps. Energy, 93, 2015. 10.1016/J.ENERGY.2015.09.078. [Khelifa et al.(2016)Khelifa, Touafek, Ben Moussa, and Tabet]khelifa-etal:2016 A. Khelifa, K. Touafek, H. Ben Moussa, and I. Tabet. Modeling and detailed study of hybrid photovoltaic thermal (PV/T) solar collector. Solar Energy, 135, 2016. doi:10.1016/J.SOLENER.2016.05.048. [Li and Marlin(2015)]li_etal S. C.-X. Li and B. Marlin. Classification of sparse and irregularly sampled time series with mixtures of expected gaussian kernels and random features. In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, UAI'15, page 484–493, Arlington, Virginia, USA, 2015. AUAI Press. [Lipton et al.(2016)Lipton, Kale, and Wetzel]lipton_etal Z. C. Lipton, D. Kale, and R. Wetzel. Directly modeling missing data in sequences with rnns: Improved classification of clinical time series. In Proceedings of the 1st Machine Learning for Healthcare Conference, volume 56 of Proceedings of Machine Learning Research, pages 253–270, Northeastern University, Boston, MA, USA, 18-19 Aug 2016. PMLR. [Lu et al.(2008)Lu, Leen, Huang, and Erdogmus]lu_etal Z. Lu, T. K. Leen, Y. Huang, and D. Erdogmus. A reproducing kernel hilbert space framework for pairwise time series distances. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, page 624–631, New York, NY, USA, 2008. Association for Computing Machinery. [Marlin et al.(2012)Marlin, Kale, Khemani, and Wetzel]marlin_etal B. M. Marlin, D. 
C. Kale, R. G. Khemani, and R. C. Wetzel. Unsupervised pattern discovery in electronic health care data using probabilistic clustering models. In Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium, page 389–398, New York, NY, USA, 2012. Association for Computing Machinery. [Meramveliotakis et al.(2022)Meramveliotakis, Kosmadakis, Pilou, and Karellas]gg-etal:2022a G. Meramveliotakis, G. Kosmadakis, M. Pilou, and S. Karellas. Testing a flexible configuration of a solar-assisted heat pump with PVT collectors for domestic hot water production. In Proceedings of EuroSun 2022: ISES and IEA SHC International Conference on Solar Energy for Buildings and Industry, Freiburg, Germany, January 2022, 2022. 10.18086/eurosun.2022.08.07. [Mirzaei and Mohiabadi(2021)]mirzaei-mohiabadi:2021 M. Mirzaei and M. Mohiabadi. Neural network modelling for accurate prediction of thermal efficiency of a flat plate solar collector working with nanofluids. International Journal of Ambient Energy, 420 (2), 2021. doi:10.1080/01430750.2018.1525576. [Neil et al.(2016)Neil, Pfeiffer, and Liu]neil_etal D. Neil, M. Pfeiffer, and S.-C. Liu. Phased lstm: Accelerating recurrent network training for long or event-based sequences. In Advances In Neural Information Processing Systems, pages 3882–3890, 2016. [Pham et al.(2017)Pham, Tran, Phung, and Venkatesh]pham_etal T. Pham, T. Tran, D. Phung, and S. Venkatesh. Predicting healthcare trajectories from medical records: A deep learning approach. Journal of Biomedical Informatics, 69:0 218–229, 2017. ISSN 1532-0464. [Pilou et al.(2022)Pilou, Kosmadakis, Meramveliotakis, and Krikas]gg-etal:2022b M. Pilou, G. Kosmadakis, G. Meramveliotakis, and A. Krikas. Towards a 100% renewable energy share for heating and cooling in office buildings with solar and geothermal energy. Solar Energy Advances, 2, 2022. 10.1016/J.SEJA.2022.100020. [Rojas et al.(2009)Rojas, Beermann, Klein, and Reindl]rojas-etal:2008 D. Rojas, J. Beermann, S. Klein, and D. Reindl. Thermal performance testing of flat-plate collectors. Solar Energy, 820 (8), Aug. 2009. doi:10.1016/J.SOLENER.2008.02.001. [Rubanova et al.(2019)Rubanova, Chen, and Duvenaud]rubanova_etal Y. Rubanova, R. T. Q. Chen, and D. Duvenaud. Latent odes for irregularly-sampled time series. In Advances In Neural Information Processing Systems, volume 32, page 5320–5330. Curran Associates, Inc., 2019. [Sadeghzadeh et al.(2019)Sadeghzadeh, Ahmadi, Kahani, Sakhaeinia, Chaji, and Chen]sadeghzadeh-etal:2019 M. Sadeghzadeh, M. H. Ahmadi, M. Kahani, H. Sakhaeinia, H. Chaji, and L. Chen. Smart modeling by using artificial intelligent techniques on thermal performance of flat‐plate solar collector using nanofluid. Energy Science and Engineering, 70 (5), June 2019. doi:10.1002/ese3.381. [Schirmer et al.(2022)Schirmer, Eltayeb, Lessmann, and Rudolph]mona_etal M. Schirmer, M. Eltayeb, S. Lessmann, and M. Rudolph. Modeling irregular time series with continuous recurrent units. In Proceedings of the 39th International Conference on Machine Learning, 17-23 Jul 2022, volume 162, 2022. [Shukla and Marlin(2019)]shukla_marlin S. N. Shukla and B. Marlin. Interpolation-prediction networks for irregularly sampled time series. In International Conference on Learning Representations, 2019. [Shukla and Marlin(2021)]mTAN S. N. Shukla and B. Marlin. Multi-time attention networks for irregularly sampled time series. In International Conference on Learning Representations, 2021. 
[Vaswani et al.(2017)Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser, and Polosukhin]og_attention A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. [Wen et al.(2023)Wen, Zhou, Zhang, Chen, Ma, Yan, and Sun]wen-etal:2023 Q. Wen, T. Zhou, C. Zhang, W. Chen, Z. Ma, J. Yan, and L. Sun. Transformers in time series: A survey. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, Survey Track. Macao, 19–25 August 2023, 2023. 10.24963/ijcai.2023/759. [Yoon et al.(2017)Yoon, Zame, and van der Schaar]yoon_2017_etal J. Yoon, W. R. Zame, and M. van der Schaar. Estimating missing data in temporal data streams using multi-directional recurrent neural networks. IEEE Transactions on Biomedical Engineering, 66:0 1477–1490, 2017.
http://arxiv.org/abs/2405.10014v1
20240516115852
Frequency-Domain Refinement with Multiscale Diffusion for Super Resolution
[ "Xingjian Wang", "Li Chai", "Jiming Chen" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Frequency-Domain Refinement with Multiscale Diffusion for Super Resolution. Xingjian Wang, Li Chai*, Jiming Chen (Zhejiang University, Hangzhou, China; {xingjianwang,chaili,cjm}@zju.edu.cn). May 20, 2024. The performance of single image super-resolution depends heavily on how to generate and complement high-frequency details to low-resolution images. Recently, diffusion-based models exhibit great potential in generating high-quality images for super-resolution tasks. However, existing models encounter difficulties in directly predicting high-frequency information of wide bandwidth by solely utilizing the high-resolution ground truth as the target for all sampling timesteps. To tackle this problem and achieve higher-quality super-resolution, we propose a novel Frequency Domain-guided multiscale Diffusion model (FDDiff), which decomposes the high-frequency information complementing process into finer-grained steps. In particular, a wavelet packet-based frequency complement chain is developed to provide multiscale intermediate targets with increasing bandwidth for the reverse diffusion process. Then FDDiff guides the reverse diffusion process to progressively complement the missing high-frequency details over timesteps. Moreover, we design a multiscale frequency refinement network to predict the required high-frequency components at multiple scales within one unified network. Comprehensive evaluations on popular benchmarks are conducted, and demonstrate that FDDiff outperforms prior generative methods with higher-fidelity super-resolution results. § INTRODUCTION Super-resolution (SR) for a single image is a crucial task and has attracted constant research interest, as it plays a vital role in enhancing the quality of low-resolution (LR) images for various downstream tasks. From a frequency-domain perspective, the natural or artificial degradation process causing LR images can be regarded as an extensive low-pass filtering on the corresponding high-resolution (HR) images, resulting in a significant loss of high-frequency details. Hence the main difficulty of reconstructing high-quality HR images lies in the restoration of the missing high-frequency information. Recently, with the continuous innovation of deep learning techniques, the field has witnessed a variety of super-resolution methods. These methods can be split into two categories, namely regression-based methods and generative methods. Among them, regression-based methods<cit.> directly learn the LR-to-HR mapping. Although achieving relatively small pixel-wise errors, regression-based methods suffer from low perceptual quality with insufficient high-frequency details. Instead, generative methods<cit.>, which leverage the prior distribution learned from the training dataset for super-resolution, have seen success in generating realistic details. However, they also have various limitations. For instance, GAN-based SR methods<cit.> are prone to unstable training and mode collapse, while Normalizing Flow-based<cit.> and VAE-based<cit.> SR methods suffer from less visually pleasing SR results. As a recent member of generative methods, diffusion-based SR methods<cit.> have made encouraging progress in generating high-quality SR images. While there are diverse interpretations of super-resolution, the frequency-domain perspective of this paper on super-resolution involves complementing the missing high-frequency information to LR images.
Existing diffusion-based SR methods<cit.> mainly set the HR ground truth as the sole target throughout the whole reverse diffusion process without any intermediate targets or guidance. As a result, these methods have great difficulty directly predicting high-frequency information within a relatively wide bandwidth all at once, especially for large upscaling factors. As illustrated in DDIM<cit.>, the diffusion forward process can be a non-Markovian process as long as the conditional distribution of the latent variable at time t conditioned on its initial state remains the same as the one in DDPM<cit.>. Therefore, it is recommended to construct a non-Markovian forward process to provide prior frequency-domain guidance for the reverse diffusion SR process. To address this problem, we propose a novel Frequency Domain-guided multiscale Diffusion model (FDDiff) to progressively complement the missing high-frequency components in LR images. Avoiding the difficulties of the direct transition from a simple Gaussian distribution to the complex HR image distribution in prior works<cit.>, we propose a wavelet packet-based frequency complement chain to additionally introduce easily accessible distributions to the diffusion reverse process. These intermediate distributions can be regarded as temporally varying intermediate targets with progressively refined high-frequency information and hierarchical scales. Based on the frequency complement chain, the corresponding multiscale frequency refinement diffusion is proposed to progressively predict the missing high-frequency information within a small local bandwidth step by step, conditioned on LR images. Besides, a multiscale frequency refinement network with soft parameter-sharing is designed to handle latent variables of multiple sizes in the multiscale diffusion process simultaneously within one network. We evaluate FDDiff on various super-resolution tasks from face images<cit.> to general images<cit.>. Experimental results demonstrate that FDDiff outperforms other state-of-the-art generative methods, achieving high super-resolution quality both for regression-oriented and perception-oriented metrics. § RELATED WORK Diffusion Models for Super-Resolution With the rapid development of diffusion probabilistic models<cit.>, diffusion-based methods have exhibited impressively high SR quality. Adapting conditional diffusion models for super-resolution, SR3<cit.> and SRDiff<cit.> both iteratively denoised and transformed a latent variable, sampled from a simple Gaussian distribution, to the HR image space with its complex distribution, conditioned on the LR input image. Based on SR3 and SRDiff, IDM<cit.> exploited implicit neural representation to learn SR with continuous upscaling ratios. Inspired by WaveDiff<cit.>, DiWa<cit.> further denoised in the wavelet domain for faster sampling. However, they treat the super-resolution process merely as a denoising process with the straightforward application of conditional diffusion, and unavoidably encounter the difficulty of transforming the simple Gaussian distribution into the complex distribution of HR images. Besides, taking the HR ground truth as the sole target for all timesteps incurs additional computational and memory cost, since the sizes of the noised images in the diffusion process remain the same as the input images.
Instead, we introduce temporally varying intermediate targets with hierarchical scales to the diffusion process, and guide the diffusion model to progressively restore the missing high-frequency information for LR images. Super-Resolution in Frequency Domain Nowadays, conducting super-resolution tasks in the wavelet-based frequency domain has attracted growing attention, and the Wavelet Transform can benefit many applications in the computer vision field. Several wavelet-based super-resolution works have been proposed, including Wavelet-SRNet<cit.>, SRCliqueNet<cit.>, and FSR<cit.>. Among them, Wavelet-SRNet<cit.> utilized a convolutional network to complement the wavelet high-frequency coefficients for LR images, while SRCliqueNet<cit.> and FSR<cit.> focused on designing elaborate networks to extract superior features fused from the four sub-bands of the wavelet transform. However, as regression-based models, they are limited in achieving high perceptual quality due to the absence of the randomness guidance inherent in generative models. Moreover, they merely employ wavelets to decompose images into four sub-bands and thus lack sufficiently detailed partitioning in the frequency domain. Observing that the Wavelet Packet-based Transform (WPT) has a much more fine-grained frequency partition than the Wavelet Transform <cit.>, we exploit WPT to transition HR images into states with different frequency components, and incorporate the progressive frequency refinement process into the diffusion process for higher SR quality. § METHOD In this section, we describe our proposed FDDiff in detail. The overall architecture is illustrated in Fig. <ref>, while the training and inference algorithms are presented in Alg. <ref> and Alg. <ref> respectively. §.§ Frequency Complement Chain We design a high-frequency complement chain based on wavelet packets. Firstly, HR images are transitioned into sub-images of varying bandwidths. Then these sub-images are assigned to different timesteps through non-linear interpolation to provide high-frequency targets. Hence, the diffusion model is guided to predict the missing high-frequency details within a small local bandwidth step by step. Frequency Component Extraction To design a frequency complement chain, we first extract frequency components of different bandwidths from images. Let {x, y} be a given HR-LR image pair, where x∈ℝ^3× H× W represents the expected high-resolution output images, y∈ℝ^3×H/2^p×W/2^p represents low-resolution images, and 2^p denotes the upscaling factor. Firstly, x is iteratively convolved with the basis vectors of the Wavelet Packet Transform (WPT) to be projected onto the frequency domain at increasing stage s, resulting in a sub-image x_s with more fine-grained frequency components along the channel dimension. To be specific, given the pre-stage sub-image x_s-1, the decomposition utilizes a set of four fixed and even-sized convolution kernels 𝒦={h_D, h_H, h_V, g_L} with stride = 2 to divide the frequency band of the image into four parts, i.e., the low-frequency part by g_L, the horizontal high-frequency part by h_H, the vertical high-frequency part by h_V, and the diagonal high-frequency part by h_D, respectively. Haar wavelet kernels are selected as 𝒦 in this paper.
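As an illustration (a minimal sketch assuming PyTorch, not the authors' implementation; the orthonormal normalisation with entries ±1/2 is one common convention and other scalings differ only by a constant factor), one stage of this decomposition can be written as a grouped, stride-2 convolution with the four fixed 2×2 Haar kernels:

import torch
import torch.nn.functional as F

def haar_decompose(x):
    """One stage of the quad-tree subband decomposition.
    x: [B, C, H, W] -> [B, 4*C, H/2, W/2], per input channel ordered as
    (low-pass, detail 1, detail 2, detail 3)."""
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])    # low-pass kernel
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])  # detail kernel
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])  # detail kernel
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])  # detail kernel
    kernels = torch.stack([ll, lh, hl, hh]).unsqueeze(1)          # [4, 1, 2, 2]
    c = x.shape[1]
    weight = kernels.repeat(c, 1, 1, 1).to(x.dtype).to(x.device)  # [4*C, 1, 2, 2]
    return F.conv2d(x, weight, stride=2, groups=c)

Applying such a function p times to an HR image yields the sub-image x_p used below; the inverse direction can be realised analogously with the matching synthesis kernels.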
As a result, the obtained sub-image x_s has a doubled downsampling scale and frequency components of narrower bandwidth compared to x_s-1. Such a quad-tree subband decomposition process ℱ(·) can be defined as follows, x_s = ℱ(x;𝒦,s) = x ⊛_i=1^s 𝒦 ≜ x⊛𝒦⊛𝒦⊛…⊛𝒦 (s times), x⊛𝒦= (x∗h_D) ⊕(x∗h_H)⊕(x∗h_V)⊕(x∗g_L), where ⊛ denotes the decomposition operation, and ⊕ denotes concatenation along the channel dimension. The reverse process ℱ^-1 of ℱ can be written as x=ℱ^-1(x_s;𝒦',s)=f^-1(⋯ f^-1(x_s)⋯) (s applications of f^-1(·)), where f^-1(x_s)=𝒫_↑ 2(x_s)⊛𝒦' and x_s-1=𝒫_↑ 2(f^-1(x_s)). 𝒫_↑ 2 denotes PixelShuffle<cit.> with upsampling factor=2. 𝒦' is the set of four fixed inverse kernels. Multiscale Frequency Degradation Pyramid Based on the sub-image x_p obtained by p-times decomposition, we first build a multiscale degradation pyramid with cascaded 2× upscaling. The channels of x_p are split into 4^p parts of different frequency coefficients and sorted by their frequency, resulting in x_p = [x_p^4^p,x_p^4^p-1,...,x_p^2,x_p^1] with frequency descending along the index. Among them, x_p^1 denotes the low-frequency part of x_p, while the remaining ones are our intermediate targets. We define the low-pass operation ℒ(x_p;i,j) to filter out high-frequency components conditioned on the given i and j, where i and j are frequency indexes from the set of integers {1,2,3,...,4^p}. ℒ(x_p;i,j) equals x_p^i when i≤ j, and equals zero otherwise. Then the 4^p-state multiscale sub-image degradation pyramid for x is defined as follows, x̃_j=𝒟(x_p,j)≜ℱ^-1(∑_i=1^4^pℒ(x_p;i,j);𝒦',⌈log_4(j)⌉), where ⌈·⌉ represents the ceiling function and x̃_1=x∈ℝ^3× H× W. In this way, the pyramid degrades x̃_1=x at the 1st state to the LR image x̃_4^p=x_p^1∈ℝ^3×H/2^p×W/2^p at the 4^p-th state. Furthermore, the aforementioned 4^p states of x̃_j should be assigned to t∈[1,T] timesteps. Since the low-frequency components x̃_j with lower j retain larger energy and are harder to restore, the diffusion model should be led to spend more timesteps on lower indexes j. We therefore design the timestep assignment schedule based on a power function, which can be expressed as j = ⌈ 4^p(1-(T-t)^2/T^2)⌉. Then the frequency degradation pyramid over T timesteps can be derived from the sub-image pyramid as follows: x̂_t ≜ [(t-t_j-1)x̃_j+(t_j-t)x̃_j-1]/(t_j-t_j-1) for t_j-1<t≤t_j, 2≤j<4^p, and x̂_t ≜ [(t-t_j)y+(T-t)x̃_4^p]/(T-t_j) for t_j< t ≤ T, j=4^p, where the key timestep t_j corresponding to the transition from state j to state j-1 is set to t_j = ⌊ T-T√(1-(j-1)/4^p)⌋. As for the boundary case, x̂_1 = x while t=t_1=1, t∈[1,T]. Although the LR degradation of y remains unknown, the frequency components of y are encompassed within the low-frequency part of x̃_1. Hence, the proposed chain first converts the frequency bandwidth of y to the same as that of x̃_4^p, as shown in Eq. <ref> for t_j< t ≤ T. It is worth noting that x̂_t undergoes a scale change from t=t_j to t=t_j-1 for those j=4^p+1-4^q, with q∈{0,1,2,...,p}. As a result, x̂_t over all timesteps forms a multiscale pyramid instead of a fixed-size chain, which effectively reduces the memory and computation cost. Accordingly, the frequency complement chain from y to x is given by the reverse process of this frequency degradation pyramid, where the missing high-frequency component η_t = x̂_t_j-1 - x̂_t, t_j-1< t ≤ t_j, is provided as an additional target for the diffusion model.
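The two schedules just introduced, i.e. the state index j assigned to a timestep t and the key timestep t_j at which the chain moves from state j to state j-1, are simple enough to sketch directly (a plain reading of the equations above; recall that the paper pins the boundary case to t_1 = 1):

import math

def state_index(t, T, p):
    """State j in {1, ..., 4**p} assigned to timestep t by the power-function schedule."""
    return math.ceil(4**p * (1 - (T - t) ** 2 / T**2))

def key_timestep(j, T, p):
    """Timestep t_j at which the chain transitions from state j to state j-1
    (the boundary case j = 1 is treated separately as t_1 = 1)."""
    return math.floor(T - T * math.sqrt(1 - (j - 1) / 4**p))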
§.§ Multiscale Frequency Refinement Diffusion Under the guidance of the frequency complement chain, a non-Markovian diffusion process with multiscale latent variables is proposed to generate the missing high-frequency details for LR images. Forward Process Inspired by DDIM<cit.> and f-DM<cit.>, we design a noised latent variable u_t with t∈[1,T] whose conditional distribution q(u_t|u_0) conforms to the one defined in DDPM<cit.>. u_t is directly sampled around x̂_t by q(u_t|x̂_t,y)=𝒩(u_t;√(α_t)x̂_t,(1-α_t)I), where the noise hyperparameter α_t defined in <cit.> follows the cosine schedule in <cit.>. α_0=1 and α_t monotonically decreases to zero over time. Evidently, u_1=x̂_1=x and u_T∼𝒩(0,1/4^pI). To meet the marginal distribution, the non-Markovian diffusion forward process is given by u_t =√(α_t/α_t-1)u_t-1↓_γ(t)-η_t√(α_t)/(t_j-t_j-1)+√(1-α_t/1-α_t-1)ϵ_t, where t_j-1< t ≤ t_j and ϵ_t denotes the added Gaussian noise for every forward step. For a smooth transition between timesteps of different scales, the downsampling operation ↓_γ(t) with integer factor=γ(t) is added between the scale-changing timesteps. That is, γ(t) equals 2 if t=t_j and j=4^q, q∈{0,1,2,...,p}, and otherwise γ(t) equals 1. As for the Haar wavelet, average downsampling is selected, which is equal to the convolution with g_L in ℱ. As a result of average downsampling, the variance of ϵ_t∼𝒩(0,I) decreases when the current scale changes. Accordingly, ϵ_t is sampled from 𝒩(0,1/4^kI) with k=p-⌈log_4(4^p+1-j)⌉. Reverse Process Since our forward process is non-Markovian and the conditional distribution of u_t conforms to the one in DDPM, we design the reverse process based on DDIM<cit.> to predict u_t-1 from u_t with the help of a UNet-style<cit.> multiscale network. Specifically, the network with parameters θ is employed to estimate the missing frequency components η_t with η_θ and the added noise ϵ_t with ϵ_θ simultaneously. For the hyperparameter σ_t^2 controlling the stochastic magnitude in DDIM, σ_t^2 is set to 1-α_t-1 according to our forward process. Consequently, the reverse process of the multiscale diffusion process can be derived as u_t-1=√(α_t-1) h_θ,t(u_t,ϵ_θ,η_θ)↑_γ(t)+√(1-α_t-1)ϵ_t-1, with h_θ,t(u_t,ϵ_θ,η_θ) = (u_t-√(1-α_t)ϵ_θ)/α_t+η_θ/(t_j-t_j-1) and (ϵ_θ,η_θ) = Net_θ(u_t,y,t), where Net_θ^q(·) denotes the network predicting ϵ_θ and η_θ from u_t with conditional input y and t. Net_θ^q(·) has q states with q scales, and q is defined as p-⌈log_4(4^p+1-j)⌉ for the corresponding t. The function h_θ,t(·), computed by the network, is utilized to denoise and recover the high-frequency components of the local narrow bandwidth from x_t to x_t-1. Symmetric to the downsampling operation in the forward process, the upsampling operation ↑_γ(t) with factor γ(t) is also added to the reverse process. According to ℱ^-1 of the Haar wavelet, nearest upsampling is chosen. Training Objective For timestep t_j-1< t ≤ t_j at the j-th state, generating the noiseless pre-state x̂_t_j-1=x̃_j-1 from u_t is regarded as the training objective. Noting that the estimate of x̃_j-1 can be derived as (u_t-√(1-α_t)ϵ_θ)/α_t+η_θ, the L1 loss function ℒ(θ) is defined as follows, ℒ(θ)=𝔼_(𝐱,𝐲)𝔼_j𝔼_t_j-1<t≤ t_j( α_t/(1-α_t) ‖(u_t-√(1-α_t)ϵ_θ)/α_t+η_θ-x̃_j-1‖_1 ), where α_t/(1-α_t) denotes the SNR weighting coefficient in <cit.>. The corresponding training and sampling algorithms for the aforementioned forward and reverse process are presented in Algorithm <ref> and Algorithm <ref> respectively.
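The noise schedule and the loss weighting used above are standard components; a minimal sketch of the cosine schedule of the cited work and of the SNR weight α_t/(1-α_t) follows (variable names are ours, and the offset s=0.008 is the value commonly used with this schedule):

import numpy as np

def cosine_alpha(t, T, s=0.008):
    """Cumulative signal coefficient alpha_t of the cosine schedule:
    alpha_0 = 1 and alpha_t decreases monotonically towards 0."""
    f = np.cos(((t / T) + s) / (1 + s) * np.pi / 2) ** 2
    f0 = np.cos((s / (1 + s)) * np.pi / 2) ** 2
    return f / f0

def snr_weight(alpha_t):
    """Weight alpha_t / (1 - alpha_t) applied to the L1 term of the loss
    (t >= 1 during training, so alpha_t < 1 and the ratio stays finite)."""
    return alpha_t / (1.0 - alpha_t)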
§.§ Multiscale Frequency Refinement Network Since the proposed wavelet packet-based frequency complement chain enables the multiscale diffusion process, the corresponding frequency refinement network is expected to predict ϵ_θ and η_θ of different scales, with conditional input y and t, simultaneously within one network. Inspired by f-DM<cit.>, we note that the U-Net architecture in DDPM<cit.> inherently reveals that the multiscale features of the network naturally correspond to multiscale latent variables. Consequently, we propose the multiscale frequency refinement network Net_θ with p+1 states for 2^p super resolution, which utilizes soft parameter-sharing to switch between different states and thus simultaneously handles diffusion inputs and outputs at different scales, as shown in Fig. <ref>. Specifically, in the encoder, each block consists of three parts, namely two residual blocks<cit.>, a self-attention layer<cit.>, and a 2× downsampling layer. As for the decoder, the downsampling layer is replaced with a 2× upsampling layer. An exception is made for the last blocks of both the encoder and the decoder, whose downsampling or upsampling layers are removed. The PixelShuffle<cit.> method is used for the downsampling and upsampling layers. For input and output images at different scales, the corresponding input layers and output layers are added before or after the blocks of the encoder and decoder, respectively. These input and output layers each stack one residual block and a self-attention layer. As for the conditional input, the LR image y is concatenated with the input noised image along the channel dimension, and the timestep t is projected by a linear layer to conduct an affine transformation on the features of the residual blocks, following <cit.>. The soft parameter-sharing schedule can be described as follows: the different states of Net_θ share a portion of the network parameters. Specifically, supposing that the encoder and decoder of Net_θ are both composed of p+1 blocks, Net_θ^i at the i-th state unfreezes the parameters from the i-th block of the encoder to the (p+2-i)-th block of the decoder and keeps the parameters of the other blocks frozen. In this way, the scales of the input u_t and the output (ϵ_θ, η_θ) at the i-th state correspond with the one of x̂_t in the frequency complement chain and the one of u_t in the diffusion process. § EXPERIMENTS §.§ Datasets Two commonly used experiment settings are employed for performance evaluation on the super-resolution task, involving face images and general images respectively. Face Images For large-scale face images, FDDiff is trained on Flickr-Faces-HQ (FFHQ)<cit.> and evaluated on CelebA-HQ <cit.> following <cit.>. The images of FFHQ and CelebA-HQ are all at 1024×1024 resolution and are resized to 128×128 as the fixed input size. Following <cit.>, super resolution experiments with an upscaling ratio of 8× are conducted for face images, i.e., face super-resolution for 16×16→128×128. For evaluation on CelebA-HQ, to maintain consistency with the code implementation of SR3<cit.> and IDM<cit.>, a smaller part of CelebA-HQ, namely 1,000 images in this paper, is randomly sampled. General Images As for super-resolution on general images, following the experiment settings of <cit.>, FDDiff is trained on the training set of DIV2K<cit.>, and evaluated on four datasets, including the DIV2K validation set, BSDS100<cit.>, General100<cit.>, and Urban100<cit.>.
Since images of DIV2K have variable sizes, following the standard schedule in <cit.>, 48×48→192×192 patch pairs are extracted from the DIV2K training set for training on the 4× SR task. For the inference process, since the network of FDDiff is built without fixed positional encoding or time encoding, FDDiff can infer on full-size images after stretching these images to the closest square size divisible by 2^p. §.§ Implementation Details Training Details The proposed FDDiff is trained from scratch on RTX3090s with about 1M steps for face images following SR3 and 0.5M steps for general images. The batch size is set to 64 for input size 128×128 and 32 for input size 256×256, according to the limit on maximum memory usage. AdamW is adopted as the optimizer with a 5×10^-4 initial learning rate and 10^-2 weight decay. The total number of sampling timesteps is set to 1,000 for face images and 640 for general images, and the hyperparameter α follows the cosine schedule in <cit.>. For pre-processing in the training process, resized images or patches are normalized to [-1,1], and then randomly horizontally and vertically flipped for data augmentation. We utilize a U-Net<cit.> architecture with embedded residual blocks<cit.> and self-attention layers<cit.> as the network framework for FDDiff. For p=3 with upscaling factor 2^p=8, the channel dimensions of the p+1=4 encoder blocks are [64,128,256,128], and the channel dimensions of the p+1=3 encoder blocks for p=2 are [256,256,128]. The code will be released upon acceptance of the paper. Evaluation Metrics Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM)<cit.>, and Learned Perceptual Image Patch Similarity (LPIPS)<cit.> are used to evaluate the super resolution quality. Among them, PSNR is directly utilized to assess the Mean Square Error (MSE) between the pixel values of SR and HR image pairs. Instead of relying on numerical comparison, SSIM focuses on human visual perception and thus takes into account the similarities in luminance, contrast, and structure simultaneously. As a perception-based metric, LPIPS utilizes a pre-trained network to compute the similarity between the activations of HR and SR image pairs, for which SqueezeNet<cit.> pre-trained on the ImageNet<cit.> dataset is employed in this paper. §.§ Comparisons Large-scale Face Super-Resolution Following SR3<cit.>, we first evaluate FDDiff and conduct comparisons with other state-of-the-art generative methods through training on 70,000 images of FFHQ and evaluating on CelebA-HQ, as shown in Table <ref>. The involved generative methods include GAN-based models, FSRGAN<cit.>, PULSE<cit.>, and recent diffusion-based models, such as SR3<cit.>, DiWa<cit.>, and IDM<cit.>. As can be observed from Table <ref>, FDDiff outperforms previous generative methods both in terms of the numerical comparison-based metric PSNR and the perceptual metric SSIM. Among these diffusion-based methods, DiWa also aims to conduct the super-resolution task by generating high-frequency information. However, it has neither explicitly combined the high-frequency complement process with the diffusion process, nor divided the high-frequency components into ones of narrower frequency bands. As a result, FDDiff surpasses it by a large margin, namely 5.06% in PSNR and 5.97% in SSIM. When compared with the up-to-date diffusion-based IDM, FDDiff obtains better results, with a 0.51dB higher PSNR. We also conduct a further comparison with the diffusion-based methods SR3 and IDM on the additional perceptual metrics ManIQA<cit.> and MUSIQ<cit.>.
Frozen networks for ManIQA and MUSIQ are pretrained on the PIPAL and AVA datasets, respectively. Results in Table <ref> have shown the superiority of FDDiff in perceptual quality. For qualitative comparison, as can be observed in Figure <ref>, the generated faces of FDDiff conditioned on LR images contain more realistic facial features, e.g., eyes, teeth, ears, and noses. Although they may not be as sharpened as those of SR3 and IDM, the results of FDDiff contain fine-grained facial details with higher fidelity to the ground truth. Especially for eyes, ears, and teeth, there is a large discrepancy between the results of SR3/IDM and HR images, while FDDiff preserves higher consistency with HR images. General Image Super-Resolution To demonstrate the super-resolution quality of FDDiff for general images, FDDiff trained on the DIV2K training set is evaluated on the DIV2K validation set. The super-resolution quality is assessed by PSNR and SSIM, and is compared with various models, including: the regression-based models EDSR<cit.> and LIIF<cit.>; the VAE-based model LAR-SR<cit.>; the GAN-based models ESRGAN<cit.> and RankSRGAN<cit.>; the flow-based models SRFlow<cit.> and HCFlow<cit.>; and the diffusion-based models SRDiff<cit.>, DiWa<cit.>, and IDM<cit.>. As shown in Table <ref>, FDDiff achieves competitive numerical SR quality and outstanding perceptual SR quality, outperforming previous methods by at least 0.14dB in PSNR and 11.4% in SSIM. This reveals that FDDiff prioritizes human perceptual quality, which is achieved through the progressive refinement of the missing high-frequency information of LR images. Besides, aiming to illustrate the cross-domain performance of FDDiff for general image super-resolution (4×), our proposed FDDiff trained on DIV2K is further evaluated on three datasets, namely BSDS100<cit.>, General100<cit.>, and Urban100<cit.>. Several state-of-the-art generative models are involved in the comparison. Quantitative results in Table <ref> demonstrate that FDDiff outperforms the baseline in SSIM and yields comparable LPIPS for these three cross-domain datasets, which highlights the versatility of FDDiff. As shown in Figure <ref>, the qualitative results on 4× SR tasks for general images also demonstrate that FDDiff excels in generating rich high-frequency details, particularly for those images with distinct contours. Real-world Super-Resolution To further evaluate the performance of FDDiff on real-world 4× SR, we provide an additional quantitative comparison with DASR<cit.> and StableSR<cit.> on the RealSR (v3) dataset. It is worth noting that StableSR is based on pre-trained Stable Diffusion, involving additional large-scale training data, and may thus not be directly comparable to our setting. In Table <ref>, FDDiff has shown comparable PSNR and SSIM, while not performing well on the perceptual metric MUSIQ, where further improvements need to be made in future work. Therefore, one of the limitations of FDDiff may lie in its perceptual quality in real-world super-resolution tasks. §.§ Ablation studies Effect of Frequency-Domain Refinement Process As shown in Table <ref>, we construct and evaluate four variants with different scale settings, target numbers, and targets of the diffusion process. Similar to prior diffusion-based SR methods<cit.>, Model 1 adapts the input layers and output layers of FDDiff to a single scale, i.e., the same scale as the input images, and only predicts the noise ϵ_t by removing the frequency complement chain.
Model 2 conducts multistage diffusion and changes the target number of FDDiff to p+1, which means that a subset {1, 4, 4^2, ⋯, 4^p} of j is chosen for the targets η_t. The SR quality of Model 2 is higher than that of Model 1, demonstrating the effectiveness of our proposed frequency complement chain and the corresponding multiscale frequency refinement diffusion model. Model 3 in the gray row, i.e., FDDiff, has achieved higher SR quality, owing to its fine-grained frequency partition. As for Model 4, it removes the diffusion process to only predict ϵ_t for the frequency complement chain, and obtains lower PSNR and SSIM. This proves that the randomness guidance by Gaussian noise in the diffusion process is indispensable for high-quality SR. Effectiveness of Timestep Assignment Schedule To design a monotonic function which is flat for higher t at lower index j and steep for lower t at higher index j, a power function is an effective and simple solution. Trigonometric and logarithmic functions are also feasible with minor revisions. The main advantage lies in the comparison with a linear schedule, as shown in Table <ref>. Trade-off between Generation Speed and Quality Firstly, we compare inference efficiency with diffusion-based methods; the parameter counts and the total inference time, including data processing, are evaluated in Table <ref>. Secondly, since FDDiff is equipped with the timestep-skipping sampling procedure of DDIM for more efficient inference, we evaluate the corresponding impact of the number of sampling timesteps on SR quality. After training with T=1,000 total timesteps for SR with upscaling ratio 2^p=8 on FFHQ, different numbers of sampling timesteps for inference are selected, which should be no less than the number of x̃_j states, namely 4^p=64. The resulting trade-off between generation speed and quality is shown in Figure <ref>, which illustrates that FDDiff can achieve decent performance with only 64 timesteps, which is therefore the setting chosen for FDDiff. We note that the SR quality of FDDiff reaches its best at 100 timesteps and tends to be stable across all timestep settings, which may differ from prior diffusion-based SR models<cit.>. The reason may lie in the fact that our model is well trained and the randomness of our diffusion process is limited by the additional intermediate targets, and is thus relatively insensitive to the total number of sampling timesteps. § CONCLUSION In this paper, we present a Frequency Domain-guided multiscale Diffusion model (FDDiff), which is tailored to progressively complement the missing high-frequency information to low-resolution images and achieve higher-quality super-resolution. With the proposed frequency complement chain, FDDiff decomposes the difficult process of generating all high-frequency details into finer-grained steps at multiple scales. We further design a multiscale frequency refinement network to handle the multiscale frequency complement process within one unified network. Extensive experiments on general and facial images demonstrate the superior performance of FDDiff over prior generative models. In future work, we will develop the frequency-domain sampling schedule for faster inference.
http://arxiv.org/abs/2405.08746v1
20240514163250
Decomposing geographical and universal aspects of human mobility
[ "Louis Boucherie", "Benjamin F. Maier", "Sune Lehmann" ]
physics.soc-ph
[ "physics.soc-ph" ]
Driven by access to large volumes of detailed movement data, the study of human mobility has grown rapidly over the past decade <cit.>. This body of work has argued that human mobility is scale-free <cit.>, proposed models to generate scale-free moving distance distributions <cit.>, and explained how the scale-free distribution arises from aggregating displacements across scales <cit.>. The field of human mobility, however, has not explicitly addressed how mobility is structured by the constraints set by geography – that mobility is shaped by the outlines of landmasses, lakes, and rivers; by the placement of buildings, roadways, and cities. Based on unique datasets capturing millions of moves between precise locations, we show how separating the effect of geography from mobility choices reveals a universal power law spanning five orders of magnitude (10m to 1,000,000m). To do so we incorporate geography via the `pair distribution function', a fundamental quantity from condensed matter physics that encapsulates the structure of locations on which mobility occurs. We show that this distribution captures the constraints that geography places on human mobility across length scales. Our description conclusively addresses debates between distance-based and opportunity-based perspectives on human mobility: By showing how the spatial distribution of human settlements shapes human mobility, we provide a novel perspective that bridges the gap between these previously opposing ideas. § INTRODUCTION Human mobility is a fundamental component of societies, with constant flows of individuals and goods circulating through cities and landscapes. This flow of individuals moving from place to place underpins our social, economic, and cultural lives <cit.>. Therefore, understanding mobility patterns enables us to mitigate the spread of infectious diseases <cit.>, reduce pollution <cit.>, or organize urban planning <cit.>. Although human movement both shapes and is shaped by our geography <cit.>, these two processes have not heretofore been considered together <cit.>. Geography trivially constrains human mobility. Consider two simple examples: Two houses cannot occupy the same physical space, thereby imposing a minimum distance for any movement between them. On a larger scale, residents of an isolated island of a 10km radius cannot travel distances greater than 20km: they would end up in the sea.
Previous work has shown that human mobility is highly structured, with power law distributions of distance <cit.> and trip frequency <cit.>, inflows between administrative boundaries <cit.>, conservation laws <cit.>, and scales corresponding to structures of the built environment <cit.>. Simultaneously, the structure of the built environment is well understood, from the hierarchical organization of human settlements <cit.> at the scales of countries <cit.> to street-level networks <cit.>, as well as the fractal geometries of cities <cit.>, and how they developed <cit.>. A key concern, however, is that the work cited above arises from two literatures that have evolved separately. The recent surge of data-driven work on human mobility has not explicitly addressed how the observed mobility data is structured by the constraints set by geography <cit.>. In that sense, there is currently a disconnect between the large-scale data-driven human mobility research <cit.> and the community of geographers <cit.>. The absence of a comprehensive geographical framework in mobility studies has led to the emergence of two main theories for human movement <cit.>: The first theory considers physical distance to deter mobility <cit.>, a concept often modeled through gravity models <cit.>. The alternative viewpoint instead suggests that human movement does not depend on distance, but rather on the quantity of opportunities nearby <cit.>. There have been scattered efforts to incorporate a notion of geography in human mobility, but this work has been limited by incomplete data, e.g. relying on coarse-grained data such as data in grids <cit.>, administrative units <cit.>, or artificial geographical space <cit.>, obscuring smaller scales and reducing geography to discrete density maps <cit.>. The use of neural embeddings also achieves a similar effect <cit.>, and similarly leads to the emergence of gravity laws <cit.>. Therefore, the question of how geography shapes human mobility still lacks a comprehensive and satisfying answer. Below, we address this gap and present a new framework to understand geography and the distribution of locations (addresses, points of interest, stop-locations). We argue here that all one needs to know about geography – the fine structure, rivers and lakes, the country outlines – is captured in the locations where people stop. This set of locations represents a kind of effective geography. Our primary analysis is based on a previously unused dataset comprised of 36 years of registry data on 39M migration between addresses in Denmark, pinpointed with uncertainty of only 2m. This resolution far exceeds the state-of-the-art 50m accuracy typical of GPS data. Below, we extend the analysis to day-to-day mobility and point-of-interests. Analyzing these datasets using tools from statistical physics that are completely novel in the context of human mobility (e.g. the pair distribution function), and by considering locations as particles, our approach unveils mobility patterns at scales that were previously concealed. § MOBILITY TAKES PLACE ON THE PHYSICAL STRUCTURE OF THE BUILT ENVIRONMENT The concept of the pair distribution function. To characterize geography we study the pair distribution function (`pair distribution' below) between locations. The pair distribution is a powerful tool from statistical mechanics developed to understand the structural properties of materials <cit.>. 
Here, we argue that the pair distribution is able to capture the structural properties of locations in a given geography. To get a sense of this distribution, consider a hypothetical spatial arrangement of concentric circles of n locations (stop-locations, points of interest, addresses) on a disk of radius R (Fig. <ref>a). For an individual positioned at the center of these circles (a white ×), the potential moves are constrained by the position of other locations. For instance, in the absence of a location at distance R/2, a movement of that exact distance is impossible. To understand the potential movements from the disk's center, we compute the distribution of distances of every location to the center (Fig. <ref>b), which can be interpreted as what we would observe if an individual were to move many times from the center to one of the locations on the disk, choosing their new location independently of distance. We can place each of the n locations in the focus of such an analysis, finding a unique distance distribution for each of them, and calculate an average distance distribution. The result quantifies the expected number of locations found at distance r from an average location and is mathematically equivalent to the pair distribution, which simply encodes the number of location pairs found at distance r (Fig. <ref>c). A formal definition of the pair distribution function. Formalizing this notion, if points x_i∈Ω are distributed in a d-dimensional space Ω⊆ℝ^d according to a density ϱ^d(x), then the pair distribution p(r) is one-dimensional and determined by both the shape of the space and ϱ^d(x). Thus, the pair distribution is given by p(r) = ∫_Ω d x ∫_Ω d y ϱ^d( x) ϱ^d( y) δ(r - ‖ x-y‖), with ‖ x-y‖ the distance between two points x and y. When we count flows between pairs of points of Ω, we observe a distance-dependent distribution, which we refer to as f(r) or the observed movement distance. The observed distribution f(r) can be represented as a composite of the pair distribution p(r) and what we call the intrinsic distance cost π(r) (Methods: Pair distribution) <cit.>. f(r) ∝π(r)p(r). Observed data is a product of two separate components. This simple observation has a profound consequence: Any study of the observed movement distances f(r) – the entire study of human mobility – is focused on a quantity that includes the geometry of space Ω and the density ϱ^d(x), both reflected in the pair distribution p(r). We argue that any `universal' behavior of the system, i.e. a geometry-independent law that encapsulates the behavior of the system, must be captured entirely in π(r), with the remainder depending on the specific configuration of locations. This intrinsic distance cost depends only on the distance between two locations. Thus, to uncover any geometry-independent behavior of our system, we must adjust our observation f(r) by the geometry encoded in p(r), i.e. π(r) ∝f(r)/p(r). To illustrate the impact of the intrinsic distance cost, π, we simulate movements on the concentric circles example (Fig. <ref>a,b,c) and demonstrate the effect of three different potential distance cost functions: π(r)= const, π(r)=√(r), and π(r)=1/√(r). Note that for π(r)= const, i.e. when the choice of moving is independent of distance, the distribution of moving distances is equal to the pair distribution (Fig. <ref>d). Illustrating scale via three Denmarks.
To illustrate how geography shapes the pair distribution and the observed moving distances, we consider the mobility trace simulation for three different geographies (Fig.<ref>e,f,g). The first geography, 'Real Denmark,' uses the actual geography of Denmark, including 3.3M precise locations (addresses) (Fig. <ref>e). The second, 'Disk Denmark,' maintains the microstructure of cities but alters city positions and landmass shapes, distributing real city centers uniformly on a disk (Fig. <ref>f). The third, 'Uniform Denmark,' keeps the macro structure of landmasses and city positions from 'Real Denmark' but features uniformly dense cities (Fig. <ref>g). The distribution of pair distribution effectively encapsulates the nuances of each geographical layout (Fig. <ref>h). On the local scales (less than 1km), the pair distribution of the `Real Denmark` geography aligns with the one of `Disk Denmark`, reflecting city layouts. However, at broader scales (more than 10km), the pair distribution of 'Real Denmark' matches 'Uniform Denmark', indicating the influence of the shape of land masses and city positions. To generate mobility traces we impose the same intrinsic distance cost on each geography, and make the ansatz that it follows π(r)=1/r Remarkably, despite having the same intrinsic distance cost, the resulting observed mobility traces are distinct across the three Denmarks (Fig.,<ref>i). This outcome is significant as it shows that even under a similar movement law, the variations in geography alone can lead to distinct mobility patterns. Next, we focus on the geographic regularities manifested through the power laws and the fluctuations in the pair distribution of real Denmark (Fig.<ref> h). § MULTI-SCALE URBAN STRUCTURE SHAPES THE DISTRIBUTION OF PAIR DISTRIBUTION The condensed matter physics of locations. Having illustrated the fundamental way geography is encoded in a country's location pair distribution and how it affects the dyadic process of human mobility, we now focus on a deeper analysis of its shape and origin. Specifically, we show how its properties naturally emerge from statistical physics arguments, and how understanding the pair distribution allows us to identify the key attributes of geography in terms of shaping human mobility. Thus, in this section, we go beyond simply appropriating the concept of the pair distribution function for analyses of human mobility, but use the tools from condensed matter physics <cit.> to create simple models for the micro-, meso-, and macro-structure of the geography of an entire country. Regimes of the pair distribution. The pair distribution between residential locations (buildings and addresses) in Denmark shows the behavior displayed in a. In this view buildings are simply addresses that stack on top of each other (e.g. apartment buildings). On the micro-scale (I) within distances of r≲25m, the `neighborhood', we observe a linear onset of neighbor density, oscillatorily modulated. For larger distances at the meso-scale (II), this growth assumes a scaling of approximately p(r)∝ r^0.67 between 25m and 200m (until 4000m for addresses). Afterward, on the macro-scale (III), this growth curbs, decays super-exponentially, and eventually approaches zero as we reach the limit of the finite system. Regime I: The density of cities as an ideal gas in a potential. 
Our basis for modeling cities is a simple particle model in two dimensions – where buildings are modeled as particles distributed uniformly at random, corresponding to an ideal gas kept in place by an external potential. The key constraint to identify the properties of this 2D confining potential (b) is the observation from the literature that population density per unit area decays exponentially with distance to its center (c) <cit.>. A generalized Gamma distribution of shape p(r) = (r/R^2)exp[-(r/R)^m]/Γ(2/m) accurately describes the pair distribution of this ensemble, where variable R allows us to associate every city with a radius that can be estimated from the data via a maximum-likelihood fit, without having to rely on a definition of city center (c and SI: City Sizes). Regime I: Forces between locations. To explain the oscillatory behavior modulating the onset of the pair distribution, we take the condensed matter approach a step further. We argue that there is both a repulsive and attractive effective force between locations. The repulsive force originates from the simple fact that two buildings cannot occupy the same space. The effective attractive force results from the lower cost of placing buildings near one another as it reduces infrastructural cost for shared amenities <cit.>. In implementing this idea, we recognize that – unlike gases – cities are not constructed according to strict laws, so there is not a single, `true' description of the system. Thus, we illustrate the influence of such effective forces on the pair distribution by modeling the system as two distinct ensembles of repulsive-attractive forces: (i) an ensemble of Lennard-Jones (LJ) disks, a canonical model for sphere-like particles <cit.>, and (ii) attractive hard disks of heterogeneous size, both in the presence of a linear external potential (Methods: Locations as interacting particles). Both models reproduce the oscillatory features observed at the micro scale in our high-resolution dataset (d). We can study the oscillatory structure in terms of the pair-correlation function g(r)=p(r)/i(r), where i(r) is the pair distribution of an ideal gas. This function measures the over- and under-representation, respectively, of neighbors at distance r in relation to the expectation if no interaction forces were present <cit.>. In both models and the data, there is a considerable lack of neighboring buildings for distances r ≲ 4 m, suggesting presence of repulsive forces, later reaching a peak g(r)>1 at r≈ 6 m which is typically observed for systems with attractive forces. This peak becomes wider when we consider heterogeneous building radii. The discrepancy between the data and model(s) below 5m (distance between black and teal/pink lines in d) is due to the definition of building location in the data as the location of its front door, rather than the building center (Methods: Data). Regime III: The spatial distribution of cities. We now use the condensed matter tool-set to understand the macro-scale (III) of the pair distribution. For large distances, its shape is dominated by the locations of cities. We compare an ideal gas of cities (i.e. random positions uniformly distributed in the landmass of Denmark) to the empirical pair distribution of all Danish towns (e). 
While the ideal-gas distribution has a linear onset, there is a considerable lack of neighboring towns for small distances r≲ 1km, but after that, the observed town pair distribution rapidly approaches and matches the ideal-gas distribution remarkably well, demonstrated by the fast approach of g(r)→1. The fact that there is not a clear peak in g(r) suggests the absence of a strong attractive force. When we compare the observed pair distribution to that of a non-attractive hard disks-model (Methods: City pair distribution) we observe a similar behavior, indicating that the positions of towns are almost statistically indistinguishable from random locations except for the condition that towns may not overlap in territory. To obtain the shape of the pair distribution for buildings, next we weigh every contributing pair distance for a pair of towns by the product of their respective numbers of buildings (e) This shifts the ideal gas pair distribution to larger distances and introduces a peak in g(r) at around r=10km. While the shifted onset is replicated in a building pair-weighted hard disk model, we do not recover a peak. This behavior, however, can be explained by means of central place theory <cit.>, which states that towns that are located near large cities tend to be substantially smaller, hence introducing an effective attractive force between types of cities (Methods: Non-overlapping disks) Regime II: Fractal structures explain the meso-scale. As described above, the initial linear growth of the pair distribution quickly approaches a sub-linear scaling law of α∼ 0.67 (a). We attribute the emergence of this scaling law to the non-regular shapes of cities composed of smaller `patches' of buildings that form larger clusters in a space where occupation is limited by local geography (e.g. bodies of water, hills) or for developmental reasons (e.g. industrial areas, parks, agricultural use). Specifically, we adapt a model that was recently used to generate surrogate city positions <cit.>. Assume that a city is constructed as a hierarchically nested structure of patches containing b buildings with packing fraction θ (f.i, Methods: Fractal model). For a range of realistic parameters, this model yields α=0.67 for both single realizations (f.ii) and analytically (SI. <ref>). Our model replicates the behavior observed for the whole country and in agreement with α=0.69±0.04 for individual cities. Furthermore, this nested description is consistent with earlier observations that settlement size distributions are scale-free <cit.>. By modeling the built environment as a multiverse of independent patches, we can infer the patches' area distribution from the distributions of radii using the per-city pair distributions (f.iii and Methods: Patch Model), we obtain α=0.71 as well as a patch area distribution of f(A)∝ 1/A^2.15, a result that matches the exponents reported in <cit.>. The meso-scale scaling law is robust as all these similar values of α arising from different models are consistent with our empirical finding (Fig.<ref>a-Fig.<ref>) and the literature <cit.>. § INTRINSIC DISTANCE COST FUNCTION A power law that spans five orders of magnitude. Having established a fundamental understanding of the physics of the pair distribution, we now demonstrate the impact of taking into account the information encoded within it, by analyzing an exhaustive dataset—39M residential moves between 3.3M Danish addresses over 36 years. 
The dataset presents reduced bias as it covers all residential moves (Methods: Data). To uncover the intrinsic distance cost of residential mobility, we first eliminate the geographical component by normalizing the observed moving distance distribution by the pair distribution (Fig. <ref>a). This normalization reveals that the intrinsic distance cost function (Eq. (<ref>)) follows a power law remarkably well: π(r)=f(r)/p(r)=1/r^s. This power law is consistent across scales ranging from 10m to 500km, covering five orders of magnitude (Fig. <ref>b). A maximum likelihood fit provides an exponent of s = 0.98, which validates the ansatz of Eq. (<ref>); the intrinsic distance cost follows π(r)=1/r. We consider this power law to be an extension of the gravity model to the continuous domain. A `geography-free' gravity law. In the gravity law for human migration, the migration probability between cities is proportional to the product of their population and inversely proportional to their distance. This model is inherently discrete as it requires administrative boundaries to define the population size. However, the concept of pair distribution closely aligns with the idea behind the gravity law. For example, consider two cities with different populations and radii (Fig. <ref>c). When the distance between the cities far exceeds their radii, the pair distribution tends to a Dirac distribution of height equal to the product of the number of addresses in each city (Fig. <ref>d-e). Assuming that the number of addresses is proportional to the population, the 'mass-product' term of the gravity model emerges and Eq. (<ref>) coincides with the gravity model (Methods: A ‘geography-free’ gravity law). Our framework is continuous as it operates at the granularity of individual addresses, the finest possible scale. Piece-wise local power laws. Considering the intrinsic distance cost starting from individual Danish cities (as opposed to globally), we observe a piece-wise process (SI Fig. <ref>-<ref>) <cit.>. Specifically, for each city, we consider the distribution of moves originating in the city of interest to the rest of the country. The pair distribution is limited to pairs that include at least one address in the city of interest. At the city scale, we find that the intrinsic distance cost is a piece-wise power law (Fig. <ref>f-g) with the first exponent centered around 0.60± 0.20(SD) and the second exponent around 2.0 ± 0.21(SD) (Fig. <ref>h). The process is universal in the sense that the exponents of the piece-wise power laws are consistent across the 1400 cities of Denmark. The transition point between the two power laws defines a mobility city radius (Fig. <ref>i) with a distribution that lies between a log-normal and a power-Pareto distribution (Methods: Power-law estimation). Putting the pieces together. So how does the global power-law (Fig. <ref>a) relate to the local piece-wise picture (Fig. <ref>f-g) starting in each city? Here, our statistical physics based understanding of the components of the pair distribution can be used to complete the picture of human mobility, explaining the intrinsic distance cost starting from the city-level description. We simulate the piece-wise power law over a toy geography that reproduces the key characteristics of geography: scale of cities (Fig. <ref>.i), local pair distribution that follows a generalized gamma distribution (Eq. 
(<ref>), c), and random positions of cities (e.III), we recover the empirical intrinsic distance cost, π(r)=1/r, validating our geographical model and the piece-wise power law. Generalizing to other geographies and types of mobility <cit.>. Finally, we show that our results are not particular to either the geography of Denmark or residential mobility. Figure <ref>a illustrates that the normalization by the pair distribution unveils a power law for residential mobility in France. We also find the same patterns for day-to-day mobility across the diverse geographies of Houston, Singapore, and San Francisco (Fig. <ref>b-d). The fact that our results generalize highlights the general nature of the power law distribution of intrinsic mobility cost. § DISCUSSION Geography trivially constrains human mobility. While the existing human mobility literature has extensively documented the structure of human mobility, it often remains disconnected from studies focusing on the organization of the built environment. This has led to the emergence of two distinct viewpoints on human mobility <cit.>: one based on distance, similar to the gravity model <cit.>, and another focused on the availability of opportunities <cit.>. Here, we have proposed the pair distribution of locations as a key normalization variable of human mobility. The normalization reveals an intrinsic power law consistent across various geographies and mobility types. Additionally, we proposed a model inspired by statistical physics to account for the shape of the pair distribution between locations. Our model replicates the characteristics of the pair distribution at scales ranging from the dwelling to the country. While our analysis primarily focuses on addresses, further work could study the pair distribution between points of interest or between pairs of residential and workplace locations. Such an extension could help us understand commuting patterns, a significant component of day-to-day mobility <cit.>. Finally, we explained our normalization as a continuous gravity model that reconciles the two paradigms of human mobility. We also show that this description does not hold at the city scale and emerges from the aggregation of scales <cit.>. At the city level, other demographic factors could explain relocation decisions, such as housing prices, employment opportunities, and family evolution <cit.>. Our framework enhances the understanding of human mobility and provides a bridge connecting two rich bodies of literature. § METHODS §.§ Data §.§.§ Uncertainty Quantification Estimates in the main text are reported as mean ± standard deviation. §.§.§ Residential Mobility in Denmark The data on residential mobility within Denmark is derived from the Befolkning database of Danmarks Statistik <cit.>. This database contains 39,297,646 residential moves between different addresses in Denmark from 1986 to 2020. The location data for the 2,345,453 buildings that make up the 3,251,464 addresses are also provided by Danmarks Statistik, with an accuracy better than ±2 meters <cit.>. The location of an address is defined as the location of the building's entrance door; we investigate the effect of this definition on the pair distribution function in SI. <ref>. §.§.§ Bias in residential mobility in Denmark In contrast to studies using digital tools such as mobile phones, which may introduce biases <cit.>, the Danish residential mobility dataset we study is inherently representative of the entire population.
In fact, the dataset includes every residential move, a requirement for all residents to report, thus eliminating potential biases associated with selective data collection methods <cit.>. Demographic information (gender and year of birth) are reported in Fig. <ref>. Cities are defined according to the definition of Danmarks Statistik; to ensure the robustness of the results, we compare their definition with a hierarchical clustering of the addresses (HDBSCAN <cit.> in SI. <ref>) §.§.§ Residential mobility in France The location data for France is compiled from the Base Adresse Nationale, accessible via <https://adresse.data.gouv.fr/donnees-nationales>, which contains precise location data for 21,567,447 buildings. To match the housing unit data, we utilized information from <https://www.data.gouv.fr> (FiLoSoFi), which segments the country into a grid of 200m squares, each annotated with the count of housing units and collective housing entities. We augment the data by uniformly distributing the housing unit count amongst each collective housing entity, resulting in 34,041,910 individual addresses. The residential mobility data was obtained from <https://www.insee.fr/fr/statistiques>. It contains 40,465,288 inter-city migrations from 2016 to 2020. The distances are calculated from one city center to another. The city centers are computed as the centroid of contained addresses. As we only have access to inter-city moves, migration distances that are less than the largest city's dimensions in France were disregarded. §.§.§ Day-to-Day Mobility The data on daily mobility was obtained from user check-ins on the location-centric social network Foursquare over six-months (May 27, 2010 to November 3, 2010) as documented <cit.>. The data comprises a total of 239,788 movements across 43,395 distinct locations. The regional breakdown of the data is, 47,996 movements occurred across 11,808 locations in Houston, 79,624 movements across 15,617 locations in Singapore, and 112,168 movements across 15,970 locations in San Francisco. §.§.§ GeoBoundaries We obtain the country, state, and municipality shapes from the geoboundaries project <cit.>. Throughout the paper, we work with the EPSG:23032 projection for all points that lie within Denmark and the EPSG:27561 projection for points that lie within France. We then perform analyses on the Euclidean geometry of the projection. §.§ Pair distribution function This paragraph provides further details on how the pair distribution function emerges from studying the dyadic process of human mobility between locations. The expected number of movements from location x to location y is given by, f(x,y)d^2xd^2y with f(x, y) being a continuous number density. This number is influenced by two functions. First, the number of observations is proportional to the number of locations at x as well as the number of locations at y, i.e., depending on the product of location density ϱ(x)ϱ(y) dx dy. Second, the number of movements is proportional to the movement propensity, i.e., the tendency for a move that begins at a location at x to end at a location at y, which we denote as π(x, y), f(x,y)d^2xd^2y = π(x,y) ϱ^d(x) ϱ^d(y) d^2xd^2y. The number of moves spanning distance r within the domain Ω is f(r) = ∫_Ωπ(x,y) ϱ^d(x) ϱ^d(y)δ(r - ‖ x-y‖) dx dy. 
If we make the anstaz that the propensity to travel from x to y solely depends on the distance between the two, i.e, π(x,y) = π(‖ x-y‖))=π(r) as π(r) is now independent of location x and y, it can be factored out of the integral, we obtain f(r) ∝π(r)p(r) with p(r) the pair distribution function as in Eq. (<ref>), and we recover the main text's Eq. (<ref>) (see SI. <ref> for more details). The ansatz of Eq. (<ref>) is the key point to reconcile the distance-based and opportunity-based perspective on human mobility. The distance term is  π(‖ x-y‖), the opportunity term is the expected number of pairs of locations between location x and location y, ϱ^d(x) ϱ^d(y) d^2xd^2y (SI. <ref>, SI. <ref>) <cit.>. Moreover, the pair distribution function characterizes the geometry of the space as it represents the joint probability of finding two locations at particular positions in the system, (SI. <ref> and SI. <ref>). The computation of the pair distribution function scales as 𝒪(n^2), we use a k-d tree to compute it for large number of locations (see SI. <ref>) §.§ Locations as interacting particles Our basis for modeling cities is a simple particle model in two dimensions—where buildings are modeled as particles uniformly distributed at random, corresponding to an ideal gas kept in place by an external potential. Consider the city center, e.g., the central business district, as the primary point of interest due to its proximity to amenities. This attractiveness implies a higher cost of being located further away from the center, represented by the distance | x|. At the same time, two simple mechanisms will prevent buildings from accumulating at the exact center of the city. First, buildings have a certain average radius z, so they cannot reside too close (at distance <2z). There is an advantage to buildings not being too far apart, as they can share local amenities. Rather than explicitly modeling this phenomenon, we assume that there is an inherent temperature, T, in the system. This temperature determines the distribution of house locations, which is influenced by interactions and external potential. From a statistical physics perspective, we describe the system with a simple external potential that linearly increases with distance. V^ext( x) = γ| x| where we assume that the origin of the coordinate system is located at the city center. In order to model repulsion and attraction between houses, we further assume a Lennard-Jones interaction potential V^int( x_i, x_j) = ε[ (2z/| x_j - x_i|)^12 - 2(2z/| x_j - x_i|)^6 ], which is commonly used to model intermolecular attraction and repulsion <cit.>. In total, a system with these properties evolves according to the Hamiltonian ℋ({ x, p}) = 1/2∑_i=1^N p_i^2 + γ∑_i=0^N | x_i| + 1/2∑_i=1^N∑_j≠ i^N V^int( x_i, x_j) with two-dimensional momenta p_i and locations x_i. In a canonical-ensemble formulation of the system, i.e. at constant inverse temperature β, the probability of finding a configuration { x, p} in volume-element d^2 x d^2 p, is given by ϱ[{ x, p}] d^2 x d^2 p = exp[-βℋ({ x, p})] d^2 x d^2 p. We will initially consider an ideal gas first where V^int=0, which will enable us to say something about the density of particles around the center of city, i.e. we want to find the particle dwelling probability p(r)dr with r being the particle's distance to the center. For the sake of simplicity, we set N=1 without loss of generality. 
This is justified by the ergodicity of the system, which implies that the trajectory of a single particle will eventually follow the density of the entire distribution. This can be expressed as taking the Nth root of the N-particle density. Integrating over the momenta yields ϱ[ x] d^2 x = exp(-βγ | x|) d^2 x. With a change of variables to polar coordinates, we find ϱ[r, ϕ] r dr dϕ = r exp[-βγ r] dr dϕ, i.e. p(r) dr = r exp[-βγ r] dr. This implies that in the absence of interactions, the distribution of houses around the city center should follow a k=2 Erlang distribution with a scale parameter of λ=βγ=1/R_0 where R_0 is half the city radius. The role of the inverse temperature term, β, is explained in SI. <ref>. To gain insight into the radial particle density in the context of strong repulsion, we return to the Lennard-Jones perspective. In the limit of ε/T≫1, particles will have a strong tendency to be found in their respective potential minimum, i.e. at distance 2z from each other. Effectively, we can think of them as hard disks of radius z that have a tendency to form clusters. If the temperature is low, the effective radius of the city will be small, and we expect a crystal to form in the center (see SI. <ref>). §.§.§ Molecular Dynamics simulation We are interested in finding configurations 𝒞 that accurately depict the canonical ensemble with number of particles N, a temperature T that is constant on average, and a constant but irrelevant volume. We assume that the particles are constrained to a radially symmetric external potential, with the total volume of the system containing the particles being irrelevant. Finally, the total potential must be V = ∑_i=1^N [ V^ext( x_i) + 1/2∑_j≠ iV^int( x_i, x_j)]. To this end, we integrate the equations determined by the system's Hamiltonian numerically using the velocity-Verlet algorithm <cit.>. Furthermore, we rescale particle velocities according to the stochastic Berendsen thermostat with relaxation time τ <cit.>. The details and parameters of the molecular dynamics simulation are available in SI. <ref>. §.§.§ Hard disks We generate a configuration of hard disks with a non-specified attractive force (i.e. we do not explicitly integrate the equations of motion for a hard-sphere interaction potential with an additional attractive force). To do so, we first generate N=10^4 random positions according to a radial Erlang distribution of scale parameter R=675 (i.e. an ideal gas). Subsequently, we draw a random radius for each position from a heterogeneous distribution with a power-law tail. We first draw values ẑ_i from p(ẑ) = (a^2-1)/(2a) ẑ^a for ẑ≤ 1 and p(ẑ) = (a^2-1)/(2a) ẑ^-a for ẑ > 1, and then assign disk radius z_i = 3 ẑ_i/<ẑ>. We choose a=4 to obtain a heterogeneous distribution with non-finite variance in disk area. Afterwards, we run the collision algorithm outlined in SI. <ref>. The modelling of interactions between the houses in the external potential is further developed in SI. <ref>. §.§ Non-overlapping disks To emulate the position of cities, we randomly distribute N hard disks of radii R_i in shape Ω. We start with the largest disk of radius R_max and iterate over all disks, ordered decreasingly in size. For every disk j, we generate a random position x ∈Ω until the condition | x - x_i| > R_j+R_i, ∀ i<j is met, i.e. drawing new random positions until there are no overlaps with other, already placed disks. Then, we assign x_j← x and continue with the next disk.
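For concreteness, a minimal Python sketch of this placement procedure is given below. It is not taken from the paper's code repository; it assumes a square domain instead of the landmass shape Ω, uses only numpy, and draws illustrative heavy-tailed radii. The rejection loop presumes a low overall packing fraction, otherwise placement may become very slow.

```python
import numpy as np

rng = np.random.default_rng(0)

def place_nonoverlapping_disks(radii, box):
    """Place disks with the given radii uniformly at random in the square [0, box]^2,
    largest first, rejecting positions that overlap an already placed disk."""
    order = np.argsort(radii)[::-1]                 # start with the largest disk
    centers = np.empty((len(radii), 2))
    placed = []                                     # (center, radius) of already placed disks
    for j in order:
        while True:
            x = rng.uniform(radii[j], box - radii[j], size=2)   # keep the disk inside the box
            if all(np.linalg.norm(x - c) > radii[j] + rad for c, rad in placed):
                break                               # no overlap: accept this position
        centers[j] = x
        placed.append((x, radii[j]))
    return centers

# Example: 200 'towns' with heavy-tailed radii (in metres) inside a 100 km box.
radii = 500.0 * (1.0 + rng.pareto(3.0, size=200))
centers = place_nonoverlapping_disks(radii, box=100_000.0)
print(centers.shape)
```

Because the total disk area in this example covers only a few percent of the box, the rejection loop terminates quickly; for dense configurations a dedicated collision algorithm would be preferable.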
§.§ City pair distribution and radius The pair distribution of each city is best represented by a generalized Gamma distribution with linear onset p(r,R,m) = r/R^2 Γ(2/m)exp[-(r/R)^m], as depicted in c. Here, r is the distance between two locations, R is the scale parameter corresponding to the city radius, and m>0 is a shape parameter controlling the decay of the tail. Assuming that population density decays exponentially with distance to the city center and that consequently the pair distribution of buildings assumes a generalized pair distribution of the form of Eq. (<ref>), we perform maximum-likelihood fits to infer the parameters R and m for each city in Denmark with more than 30 buildings. We generally find strong correspondence to the empirical distribution, as illustrated by the correlation of r/R and the inverse of Eq. (<ref>), (r/R)^*=(-log[p(r)R^2Γ(2/m)/r])^1/m, albeit with small deviations at smaller distances (cf. c) that can be associated with the sub-linear scaling behavior observed on the meso-scale of a. Remarkably, measuring the `radius' R of a city in terms of its pair distribution's scaling parameter comes with the advantage of not having to rely on any definition of `city center'. Moreover, no assumptions about urban growth are necessary; instead, the observed behavior emerges from the simple ansatz that being placed at distance | x| from the center of a town comes at a cost ∝ | x|, which can be overcome for a multitude of reasons encoded in temperature T. §.§.§ Fractal Model The fractal model we adopt was originally devised to explain the positions of galaxies and was recently used to generate surrogate city positions <cit.>. We assume that a city is constructed as a hierarchically nested structure of patches containing buildings (cf. f.i and SI. <ref>). At layer k = 1, a patch of radius R_1 contains b buildings of radius z_1 placed in a non-overlapping manner with packing fraction θ. At layer k=2, a patch of radius R_2 contains b building-patches of radius z_2 = R_1, again placed without overlap and packing fraction θ=θ_k=bz_k^2/R_k^2. Continue in that fashion until reaching layer k=L. Due to the constant packing fraction θ < 1, larger areas of absent buildings will form at each hierarchy layer, mirroring similar observations in real urban structures. Note that demanding θ to be constant across hierarchy layers also implies a predetermined patch radius of R_k=(b/θ)^k/2z_1. A sample configuration with b=4, z_1=4m and θ=0.8 can be seen in f.ii. To evaluate the overall pair distribution of such a fractal structure, we assume that for a patch of hierarchy layer k, the b^k-1 buildings that are contained in each of its b sub-patches are all located at their respective centers. Then, the pair distance number density (pdnd) of an average such patch will approximately assume Eq. (<ref>), weighted with a total of b^2k-2· b(b-1)/2 building pairs (between subpatches). Moreover, at every hierarchy layer k there will be b^L-k such patches. In total, each hierarchy layer therefore contributes a pdnd of n_k(r) ∝ b^k p(r, R_k, m), implying that the observed pdnd of the whole structure is proportional to n(r) = ∑_k=1^L n_k(r) ∝∑_k=1^Lθ^k r/z_1^2exp[-(r/z_1)^m(θ/b)^mk/2]. In the limit of L→∞, we use Laplace's method to find log n(r) ∝(1-2logθ/log(θ/b)) log r ≡αlog r, i.e. a sub-linear scaling law emerges from considering the pair-weighted sum of single-scale pair distributions with linear onset. The pair distance number density thus scales as n(r)∝ r^α with α = 1-2logθ/log(θ/b).
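This scaling prediction can be checked numerically by summing the layer contributions for a finite number of layers. The following sketch is not part of the paper's code; parameter values are illustrative, and a large L is used to approximate the L→∞ limit. It compares the fitted log-log slope of the summed pair distance number density with the analytic exponent α.

```python
import numpy as np

# Illustrative parameters of the hierarchical patch model; L large to mimic L -> infinity.
b, theta, z1, m, L = 4, 0.8, 4.0, 2.0, 30

alpha_pred = 1.0 - 2.0 * np.log(theta) / np.log(theta / b)

# Sum of layer contributions n_k(r) proportional to theta^k * r * exp[-(r/z1)^m (theta/b)^(m k/2)]
r = np.logspace(np.log10(10.0), np.log10(1000.0), 400)     # meso-scale range (z1 << r << R_L)
k = np.arange(1, L + 1)[:, None]
n_r = np.sum(theta**k * r / z1**2
             * np.exp(-((r / z1) ** m) * (theta / b) ** (m * k / 2.0)), axis=0)

# The log-log slope of n(r) should approximately equal alpha (up to log-periodic wiggles).
slope = np.polyfit(np.log(r), np.log(n_r), 1)[0]
print(f"alpha (analytic) = {alpha_pred:.3f}, fitted slope = {slope:.3f}")
```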
Notably, the result is independent of the parameter m that controls the tail of the single-scale pair distribution, suggesting that the exact shape of the composite pair distributions is irrelevant, which we further demonstrate in SI. <ref> by discussing another functional form of the pair distribution. §.§.§ Patch Model We consider a multiverse consisting of an infinite number of universes indexed i, each of which is inhabited by a patch of size R_i with N_i buildings inside, contributing to the multiverse with a pair distribution number density of n(r,R_i) ≈ N_i^2 p(r,R_i,m) (each universe contributes a number of building pairs amounting to N_i^2). We also assume that there is a constant population density ρ_0∝ N/R^2, demanding that N∝ R^2. For each of these universes (or rather, each of these patches), we calculate the number of building pairs at distance r and assume that patch sizes R are distributed according to some—at this time still unknown—distribution f(R). Then, the joint distribution of house pairs at distance r is given by n(r) ∝∫_0^∞ dR n(r,R) f(R) ∝∫_0^∞ dR f(R) R^4 r/R^2exp(-(r/R)^m). To simplify the integral, we change variables to β = 1/R, such that dR = -β^-2dβ and n(r) ∝∫_0^∞ dβ f(β^-1) r/β^4exp(-(β r)^m). This integral yields a solution if f(β^-1)=β^4-α, i.e. f(R)=R^-(4-α), with α < 1. Let's assume that this is the case, which means we have to solve the integral n(r) = C ∫_0^∞β^-α r exp(-(rβ)^m) dβ where C is a normalization constant. We recognize the integral of the gamma function (see SI. <ref>) and therefore find n(r) ∝ r^α C Γ((1-α)/m). That is, if the radii of patches in the multiverse were distributed as f(R) ∝ R^-(4-α), the initial growth of the joint pair distribution would follow a sub-linear power law. Notably, the scaling exponent does not depend on the tail parameter m. Let's see what this would mean in terms of the area of the patches. The area would scale as A∝ R^2 (or R∝√(A)), i.e. the distribution of the area would follow f̃(A) = dR/d A f(R=√(A)) ∝1/A^(5-α)/2. In the data, we observe α=0.67. That would mean that the area of the patches would have to be distributed according to a power law with exponent μ=-(5-α)/2≈ -2.17, which is well within what has been found empirically for cities <cit.>. Furthermore, we find f(R)∝1/R^3.29 from our city radius inference analysis (cf. Fig. <ref>), which leads to μ=-2.15, showing that the results of these two separate analyses are consistent. §.§ A `geography-free' gravity law In its original form, the gravity law for human migration states that the probability of moving between two cities is proportional to the product of their populations and inversely proportional to the distance separating them <cit.>, P_1 → 2∝N_1 N_2/d_1 → 2. This relationship requires administrative units to define cities and their respective populations, thereby rendering it inherently discrete and subject to arbitrary choices regarding administrative boundaries. Yet, constructing a product of origin×destination populations closely aligns with the idea of normalizing by the pair distribution. To illustrate, consider two cities of populations N_1, N_2, with radii r_1 and r_2 separated by a distance d. If we assume a uniform location density within these cities (as illustrated in Fig.<ref>c), the density of locations is given by p^d( x) =N_1/π r_1^2δ_Ω_1(x) + N_2/π r_2^2δ_Ω_2(x), with Ω_j the domain of city j. This leads to the pair distribution (using Eq. <ref>): p(r) = N_1 N_2 ∫_Ω∫_Ω d x d y 1/π r_1^21/π r_2^2δ_Ω_1(x)δ_Ω_2(y)δ(r-R( x, y)).
In the limit case where d significantly exceeds both r_1 and r_2, i.e. when the city radii are negligible compared to the distance between the cities, the pair distribution around d tends towards a Dirac distribution of height equal to the product of the number of addresses in each city, p(r) → N_1 N_2 δ_d(r); therefore (Fig.<ref>d,e), f(r) = p(r)π(r) → N_1 N_2 δ_d(r)/r. Equation <ref> illustrates how, in the context of the gravity model for migrations, the `mass-product' N_1N_2 emerges. Given that the number of addresses in a city is proportional to its population, we recover the discrete gravity law. We refer to our finding as `continuous' since it operates at the granularity of individual addresses, the finest possible scale (see SI. <ref>). A comparison between the gravity model, the radiation model, and the pair distribution framework is available in Figure <ref>. We can also interpret the intrinsic distance cost π(r) = 1/r as an emergent property of random utility theory, where individuals perceive distance on a logarithmic scale. This interpretation is consistent with the logarithmic utility function described in SI. <ref>. §.§ Power-Law Estimation The parameters for power-law distributions are determined using maximum-likelihood estimation, as outlined in <cit.>. For continuous distributions, the probability density function (pdf) for a power-law distribution is given by p(x) = C x^-α, where x>x_min>0 and C is a normalization constant. The parameter α is typically greater than 1, as the probability density function described in equation <ref> is not normalizable over x ∈ [x_min,+∞) for α≤1. However, in our analysis, the intrinsic distance cost has an exponent α∼1. Moreover, the maximum-likelihood estimation method from <cit.> is known to be biased when α < 1.5, a limitation that has been confirmed by <cit.>. Consequently, for estimating power laws with α < 1.5, we employ the maximum-likelihood estimator developed in <cit.>, with a maximum boundary for the power law, so that the pdf in equation <ref> is integrable over x ∈ [x_min,x_max] for any α>0. It is reasonable to assume that power-law distributions are bounded within the context of this study, given that geographical limitations inherently bound the movement distances in our data set. We define x_max as the diameter of each domain Ω, which corresponds to the largest distance between two points (Table <ref>). We assess the power-law parameter estimates through goodness-of-fit tests employing the Kolmogorov–Smirnov (KS) statistic and likelihood ratios. The Kolmogorov–Smirnov test results are presented with p-values in Table <ref>. Likelihood ratios provide evidence on whether the power-law distribution is a superior fit compared to other distributions. The methodology for estimating piece-wise power laws for each Danish city (Fig. <ref>f-g) is based on a maximum likelihood estimator derived in <cit.>; the details of the fit are available in SI. <ref> and SI. <ref>. In the analysis of the mobility city radius, the power-Pareto distribution <cit.> is identified as the optimal model with a slightly better fit, measured by an Akaike Information Criterion (AIC) score of 24046.03. This is compared to the log-normal distribution, which has an AIC of 24161.17. A comparison with additional distributions is available in Table <ref>. The mesoscale power law of the pair distribution function (a) and the simulations of the fractal and patch models (f.v) are also estimated with a maximum likelihood estimator.
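As an illustration of the bounded estimation just described, a minimal maximum-likelihood sketch for p(x) ∝ x^-α truncated to [x_min, x_max] is given below. It is a simplified stand-in for the estimator of the cited reference, assumes numpy and scipy, and is checked on synthetic data drawn with exponent α=1.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def bounded_powerlaw_mle(x, xmin, xmax):
    """Maximum-likelihood exponent of p(x) proportional to x^(-alpha) on [xmin, xmax].
    Unlike the unbounded case, any alpha > 0 is admissible here."""
    x = np.asarray(x, dtype=float)
    x = x[(x >= xmin) & (x <= xmax)]
    mean_log = np.mean(np.log(x))

    def neg_loglik(alpha):
        if np.isclose(alpha, 1.0):                  # limiting normalization at alpha = 1
            log_c = -np.log(np.log(xmax / xmin))
        else:
            log_c = np.log((1.0 - alpha) / (xmax ** (1.0 - alpha) - xmin ** (1.0 - alpha)))
        return -(log_c - alpha * mean_log)

    return minimize_scalar(neg_loglik, bounds=(1e-3, 6.0), method="bounded").x

# Synthetic check: inverse-CDF samples of a truncated power law with alpha = 1.
rng = np.random.default_rng(1)
xmin, xmax = 10.0, 5e5
samples = xmin * (xmax / xmin) ** rng.uniform(size=100_000)
print(bounded_powerlaw_mle(samples, xmin, xmax))    # should be close to 1
```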
§ FIGURES The table presents information about the estimation of the exponent of the power law. The estimation of the power law is based on a maximum-likelihood estimator as in <cit.>. The values of x_min and x_max are reported in meters. The table also reports the p-values associated with the Kolmogorov-Smirnov test for the power-law exponent estimators <cit.>. According to <cit.>, the suitability of a power-law model is considered statistically implausible if the p-value is less than or equal to 0.1. This threshold indicates a less than or equal to 10% probability that the observed deviation of the data from the model predictions is due to random variation alone. Therefore, the power-law model is considered plausible if p >0.1 (in bold). The table presents the log-likelihood ratio ℛ (ref. <cit.>) comparing the power law (cf. Table <ref>) to other heavy-tailed distributions (one per column). When ℛ is positive, the power-law distribution has a higher likelihood compared with the alternative. When ℛ is negative, the other distribution has a higher likelihood compared with the power law. The table also reports the p-values associated with ℛ (^*P ≤ 0.05, ^**P ≤ 0.01, ^***P ≤ 0.001). §.§.§ Data availability The data that support the findings of this study are available on the Zenodo repository. However, restrictions apply to the availability of these data, as in some cases only the anonymized and aggregated data is publicly available. Source data are provided with this paper; see the code repository. §.§.§ Code availability Code is available at https://github.com/LCB0B/role-of-geo/. The repository contains the geography model, the statistical estimation codes, the source data, the code for the figures, and additional figures. §.§.§ Acknowledgement The authors thank J. Dzubiella and YY. Ahn for helpful comments in the early development of this study, as well as L. Alessandretti and S. De Sojo Caso for providing insightful comments on the manuscript. The work was supported in part by the Villum Foundation and the Danish Council for Independent Research. §.§.§ Author contributions L.B., B.M. and S.L. designed the study and the model. L.B. and B.M. performed the analyses and implemented the model. L.B., B.M. and S.L. analyzed the results and wrote the paper. § SUPPLEMENTAL INFORMATION § THE CONCEPT OF THE PAIR DISTRIBUTION FUNCTION §.§ Pair distribution examples This section presents additional examples of toy geographies that vary both the shape of the landmasses and the density distribution of points. These examples demonstrate that the pair distribution function can encode the geography of the built environment. Figure <ref>a illustrates how the pair distribution function captures the shape of uniformly distributed points. Figure <ref>b illustrates the impact of the spatial point distribution on a fixed shape. §.§ The pair distribution function emerges as a normalization for human mobility Whenever points x_i∈Ω are distributed in a d-dimensional subspace Ω⊆ℝ^d according to some density p^d( x), their pair distribution function p(r) is one-dimensional and determined by both the geometry of the space and p^d( x), given that there exists a (pseudo-) metric r≡ R( x, y) that determines a distance between two points x and y. This pair distribution is given by p(r) = ∫_Ω d x ∫_Ω d y p^d( x) p^d( y) δ(r - R( x, y)). When observations are made between two distinct points (i.e.
when a quantity is counted) in the domain of interest, a distance-dependent distribution of the observed entities emerges. This distribution is hereafter referred to as the function f(r). These observations can be quantified as the number of packages sent from location A to location B at distance r, the number of people moving from A to B at distance r, or the number of power line connections that connect substations at distance r in a power grid. The observed distribution function f(r) is the result of two factors: the number of pairs of points that exist in the domain of interest, Ω, with associated point density p^d(x), and the one-dimensional probability π(r) with which pairs of distance r are manifested in the real world to be observed. In short, the number f(r) of observations of distance r is determined by the number of possible pairs p(r) at distance r and the probability π(r) that they would exist at this distance, i.e. f(r) ∝π(r)p(r), bar a normalization constant. When we measure f(r), we always measure with it the geometry of the subspace Ω as well as the density p^d( x), manifested in the pair distribution function p(r). The “universal”, i.e. geometry-independent, law that encapsulates the behavior of the system we want to study is encoded in π(r). To properly deduce this geometry-independent behavior of our system, we need to adjust our observation f(r) by the geometry encoded in p(r), i.e. π(r) ∝f(r)/p(r). A pertinent question is which reference topology (geometry and point distribution) would result in an observation f that is directly proportional to the behavior π. This question holds practical significance, as reasoning about behavior benefits greatly from a conceptual framework for the distribution of points within a space. Indeed, our goal is to understand the mechanisms that link two points within this space. Without defining these points or the space itself, this endeavor becomes somewhat pointless. In our formalism, this implies that we are seeking a topology where f(r) ∝π(r), which implies p(r) = const. Looking at Eq. (<ref>), we have to find Ω and p^d( x) such that p(r)=const. While there might be a multitude of solutions, the simplest one is a one-dimensional ring, which can be conceptualized as a box of length L and domain x∈[0,L) with periodic boundary conditions, i.e. an associated distance R(x,y)= |x-y| for 0≤ |x-y| ≤ L/2 and R(x,y)= L-|x-y| for L/2 < |x-y| < L, and uniformly distributed points, i.e. p^d=1(x) = L^-1 for 0≤ x< L and 0 otherwise. Then the pair-wise distance distribution evaluates to p(r) = L^-2∫_0^L dx (∫_x-L/2^x dy δ(r - (x-y)) + ∫_x^x+L/2 dy δ(r - (y-x))) = L^-2∫_0^L dx (1 + 1) = (L/2)^-1 for 0 ≤ r ≤ L/2, and p(r)=0 otherwise; for an illustration see Fig. <ref>. Hence, the observed distance distribution f(r) occurring between pairs of locations at distance r on this topology will be proportional to the geometry-independent behavioral part π(r). This technique has been used in <cit.>. §.§ Conditions for the pair distribution function to identify a set of points uniquely According to <cit.>, in general, point configurations can be uniquely determined by their pair distributions, up to a rigid transformation. However, there are counterexamples, for example when two distances are equal. In the context of geographic analysis, for a set of 2D points representing addresses, some distances between points will inevitably be repeated.
If we consider 1 × 10^6 points located in a 1000km× 1000km square whose coordinates are known with a 1 m precision, then, by a simple combinatorial argument, the number of pair distances is of order 1 × 10^12. The maximum distance in the square is √(2)× 1000 km, or about 1.414 × 10^6m, which means that the set of possible distances contains only about 1.414 × 10^6 different values (due to the limited precision). Since there are 1 × 10^12 instances, some distances must be equal, and the pair distribution function does not uniquely identify a set of 2D points. However, in the case of geography, due to the regularities and scaling laws of the pair distribution, one can adopt a coarse-grained view of the problem. Instead of considering each point individually, we can first examine the pair distribution function between urban centers or other areas and reconstruct the geography (the set of 2D points) iteratively. First, the urban center positions should form a unique configuration; then, independently, the local geography around each city can be reconstructed. This should lead to a unique configuration (up to isometries that are of the second order for the pair distribution function). The coarse-grained construction would be similar to quad-tree partitioning or HDBSCAN clustering (see section <ref>). §.§ Pair distribution function for France The pair distribution function between residential buildings in France shows patterns similar to those of Denmark, as shown in Fig.<ref>. On the micro-scale (i.e. within distances of r ≲ 25m, the “neighborhood”), we observe a linear onset of neighbor density, oscillatorily modulated. For larger distances (mesoscale), this growth assumes a scaling of approximately p(r)∝ r^0.67 between 25m and 200m, which is even more pronounced in the distribution of address distances (up to 10000m). The mesoscale has a larger amplitude for France than for Denmark (10km vs 4km) due to larger urban areas (Paris 2853km^2 vs 526km^2 for Copenhagen). After that, at the macro scale, this growth slows down, decays rapidly, and finally approaches zero as we reach the limit of the finite system. § MODELS FOR PAIR DISTRIBUTION FUNCTION §.§ Pair distribution functions §.§.§ Generalized pair distribution function In the main text, we defined the generalized pair distribution function model as p_m(r,R) = Cr/R^2exp(-(r/R)^m). It has a linear onset and a tail that falls as a stretched exponential and is a special case of the generalized Gamma distribution. We find C as follows. Begin with the Gamma function Γ(z) = ∫_0^∞ t^z-1exp(-t)d t and substitute t = (r/R)^m such that d t = (m/R) (r/R)^m-1d r and Γ(z) =m/R∫_0^∞ (r/R)^mz-m+m-1exp[-(r/R)^m]d r. Now we demand mz-1 = 1, i.e. z = 2/m, to find Γ(2/m) = mC^-1∫_0^∞ Cr/R^2exp[-(r/R)^m]d r. Due to the normalization condition, we have C= m/Γ(2/m). To obtain the first moment, we demand mz-1=2 such that z=3/m, so we find Γ(3/m) = mC^-1/R∫_0^∞ Cr^2/R^2exp[-(r/R)^m]d r = Γ(2/m)/R<r>, and hence <r> = RΓ(3/m)/Γ(2/m). We can fit this distribution to data by using the per-sample log-likelihood ℒ = 1/nln L = lnm -lnΓ(2/m) + 2ln w +<ln r> - w^m < r^m >, where w = 1/R and we have an observational set {r_i} of empirical pairwise distances with sample size n. From ∂ℒ/∂ w=0 we find that the inverse city scale that maximizes the likelihood is given by ŵ(m) = (2/m<r^m>)^1/m. With ŵ(m), we can find the zero of ∂ℒ/∂ m = 1/m + 2/m^2ψ(2/m) - <[ŵ(m) r]^m ln[ŵ(m) r]> numerically, which gives m̂. Here, ψ(z)=Γ'(z)/Γ(z) is the digamma function.
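The two maximum-likelihood conditions above can be turned into a short fitting routine. The sketch below is not the authors' implementation; it assumes numpy/scipy and that the root for m̂ lies in the chosen bracket. It profiles out w via ŵ(m) and solves ∂ℒ/∂m=0 numerically.

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def fit_linear_onset_gamma(r, m_bracket=(0.2, 10.0)):
    """Fit p(r) = m w^2 r exp(-(w r)^m) / Gamma(2/m), w = 1/R, by profile likelihood:
    w is eliminated via w_hat(m) and dL/dm = 0 is solved numerically for m_hat."""
    r = np.asarray(r, dtype=float)

    def w_hat(m):
        return (2.0 / (m * np.mean(r ** m))) ** (1.0 / m)

    def dL_dm(m):
        w = w_hat(m)
        return 1.0 / m + 2.0 / m**2 * digamma(2.0 / m) - np.mean((w * r) ** m * np.log(w * r))

    m_hat = brentq(dL_dm, *m_bracket)        # assumes the root lies inside the bracket
    return 1.0 / w_hat(m_hat), m_hat         # (R_hat, m_hat)

# Synthetic check with m = 2: then (r/R)^m is exponentially distributed.
rng = np.random.default_rng(2)
R_true = 500.0
r = R_true * np.sqrt(rng.exponential(size=50_000))
print(fit_linear_onset_gamma(r))             # should be close to (500, 2)
```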
§.§.§ Circle The pair distribution of a uniform distribution of random points within a disk of radius R is given by p(r,R) = 4r/π R^2arccos(r/2R) -2r^2/π R^3√(1-r^2/4R^2), r ≤ 2R, 0 otherwise, see <cit.>. §.§.§ Parabola Looking at Eq. (<ref>), we see that this distribution looks somewhat close to a parabola with zeros at r=0 and r=2R. A parabolic pair distribution with such properties is given as p(r,R) = -3/4R(r/R)^2 + 3r/2R^2 r ≤ 2R, 0 otherwise. Note that this is the Beta distribution with α=2 and β=2 for random variable x=r/R. §.§.§ Building locations by external and interaction potentials Consider the location of a city as the literal center of interest, for example, the “central business district”. Since it might be attractive for individuals to reach this center as quickly as possible (because amenities will be close to the center), we assume that there is an increased cost of living at a distance | x| from the center. For example, consider having to commute to a job within the central business district, which has a cost that increases with | x|. At the same time, two simple mechanisms will prevent buildings from accumulating in the exact center of the city. First, buildings have a certain average radius z, so they cannot be too close together (at a distance <2z). There is an advantage to buildings not being too far apart, as they can share local amenities. Instead of modeling this explicitly, we simply assume that there is an inherent temperature T in the system, according to which house locations are distributed following an interaction and an external potential. From a statistical physics point of view, we describe the system with the simplest external potential, which increases linearly with distance V^ext( x) = γ| x| where we assume that the origin of the coordinate system is in the center of the city. To model repulsion and attraction between houses, we also assume a Lennard-Jones interaction potential V^int( x_i, x_j) = ε[ (2z/| x_j - x_i|)^12 - 2(2z/| x_j - x_i|)^6 ], which is commonly used to model simultaneous attraction and repulsion between molecules in chemical solutions <cit.>. In total, a system with these properties evolves according to the Hamiltonian ℋ({ x, p}) = 1/2∑_i=1^N p_i^2 + γ∑_i=0^N | x_i| + 1/2∑_i=1^N∑_j≠ i^N V^int( x_i, x_j) with two-dimensional momenta p_i and locations x_i. In a canonical-ensemble formulation of the system, i.e. at constant inverse temperature β, the probability of finding a configuration { x, p} in volume-element d^2 x d^2 p, is given by ϱ[{ x, p}] d^2 x d^2 p = exp[-βℋ({ x, p})] d^2 x d^2 p. For now, we restrict ourselves to an ideal gas with V^int=0, which will allow us to say something about the density of particles around the center of the city, i.e., we want to find the probability of a particle being present p(r)dr. Without loss of generality, we set N=1, because due to ergodicity the trajectory of a particle will eventually follow the density of the whole distribution (think of it as taking the Nth root of the N particle density). Integrating over the momenta yields ϱ[ x] d^2 x = exp(-βγ | x|) d^2 x Changing the variables to polar coordinates, where r is the distance of the particle from the center, we find ϱ[r, ϕ] r dr dϕ = r exp[-βγ r] dr dϕ, i.e. p(r) dr = r exp[-βγ r] dr. This means that if there are no interactions, the distribution of houses around the city center should follow an Erlang distribution with scale parameter λ=βγ=1/R_0, where R_0 is half the city radius. 
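The Erlang prediction for the ideal-gas case is straightforward to check by sampling. The snippet below (an illustration of ours, with a hypothetical value of βγ) draws radii from a Gamma distribution with shape 2 and scale 1/(βγ), attaches uniform angles, and compares the radial histogram with the normalized density λ²r e^{-λr}.

```python
# Numerical check that ideal-gas positions in the linear potential V_ext = gamma*|x|
# have Erlang-distributed radii.
import numpy as np

lam = 1.0 / 350.0                 # hypothetical value of beta*gamma = 1/R_0 (in 1/m)
rng = np.random.default_rng(1)

n = 200_000
r = rng.gamma(shape=2.0, scale=1.0/lam, size=n)        # radial Erlang law r*exp(-lam*r)
phi = rng.uniform(0.0, 2.0*np.pi, size=n)
x, y = r*np.cos(phi), r*np.sin(phi)                    # 2D "building" positions

hist, edges = np.histogram(r, bins=100, density=True)
centers = 0.5*(edges[1:] + edges[:-1])
analytic = lam**2 * centers * np.exp(-lam*centers)     # normalized Erlang(2) density
print("max abs deviation:", np.max(np.abs(hist - analytic)))
```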
Note that we have postulated that all particles have the same mass m=1, so γ r has the dimension of energy. The definition of the inverse temperature β = 1/T implies that the temperature also has an energy dimension. Relying on the arguments of the kinetic theory of gases, we can relate the temperature to the momenta of a particle with the identity K = N_f N T/2 where K=(1/2)∑_i p_i^2 is the kinetic energy and N_f is the degree of freedom of each particle, i.e. for single-atom particles in two dimensions, N_f=2 (two translational degrees of freedom, no rotations, no oscillations). Note that this relates to the root-mean-square velocity of a single particle as v_0≡√(<v^2>) = √( T/2). In this sense, the instantaneous temperature plays the role of a particle's ability to overcome the potential energy. If V^ext(r)=γ r represents a cost of being at distance r from the center, the temperature gives a measure of how well particles in the system can overcome that cost. If the temperature is low, particles cannot overcome this cost and the density in the city center will be high. If the temperature is high, particles can overcome the cost easily because of the larger amount of kinetic energy available in the system. Increasing complexity by going back to the Lennard-Jones perspective, we want to obtain an intuition about how the radial particle density changes when they strongly repel each other. In the limit of ε/T≫1, particles will have a strong tendency to be found in their respective potential minimum, i.e. at distance 2z from each other. Effectively, we can think of them as hard disks of radius z with a tendency to form clusters. If the temperature is low, the effective radius of the city will be small (remember that the Erlang shape parameter is λ = γ/T = 1/R_0). In this case, it may happen that the number of particles that we would expect to lie within radius r from the center (according to the ideal gas) will be greater than the number of hard disks that can fit within a circle of radius r. When this happens, we expect a crystal to form at the center. The maximum number of disks that can fit within a circle of size r can be approximated by N_disks(r) = θ A_circle(r)/A_disk. Here, A_circle(r) is the area of a circle of radius r, A_disk is the area of a circle of radius z, and θ is an optimal packing fraction, where we can approximately assume θ≈0.9 for optimal hexagonal packing. Then, N_disks(r) = θ r^2/z^2. From the ideal gas distribution, we expect to find N_id(r) = N P(r) = N (1 - e^-γ r/T(1+γ r/T)) within radius r (where P(r) is the Erlang cumulative density function). Following the aforementioned argumentation, we expect a nucleation effect when N_id(r)>N_disks(r) for gr/T≪1. Linearizing the exponential factor, that happens when N_id(r) > N_disks N (1 - (1-γ r/T)(1+γ r/T)) > θr^2/z^2 N(1-(1-(γ r)^2/T^2))) > θr^2/z^2 N(γ r)^2/T^2 > θr^2/z^2 N γ^2 z^2/θ T^2 > 1, or in terms of the reference velocity v_0 and the Lennard-Jones distance d, N γ^2 d^2/8θ v_0^2 > 1. The radius of this cluster will approximately be given by the solution to the equation N_id(r)=N_disks(r), which can be obtained numerically. 
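A minimal sketch of this numerical step is given below (our own code; the parameter values are hypothetical and chosen so that the nucleation condition of the text holds). A standard root finder locates the radius at which N_id(r) = N_disks(r).

```python
# Solve N_id(r) = N_disks(r) for the approximate radius of the central crystal.
import numpy as np
from scipy.optimize import brentq

N, gamma, T, z, theta = 10_000, 0.08, 10.0, 3.0, 0.9    # hypothetical parameters

def N_id(r):      # expected number of ideal-gas particles within radius r (Erlang CDF)
    x = gamma * r / T
    return N * (1.0 - np.exp(-x) * (1.0 + x))

def N_disks(r):   # maximal number of hard disks of radius z packed into a circle of radius r
    return theta * r**2 / z**2

assert N * gamma**2 * z**2 / (theta * T**2) > 1.0       # nucleation condition from the text

r_cluster = brentq(lambda r: N_id(r) - N_disks(r), 1.0, 1000.0)
print(f"estimated crystal radius: {r_cluster:.1f} length units")
```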
§.§.§ Molecular Dynamics simulation We are interested in finding configurations 𝒞 that accurately represent the canonical ensemble with number of particles N, constant but irrelevant volume [because we force the particles to be confined within a radially symmetric external potential, the total volume containing the particles does not matter.], and average-constant temperature T with total potential energy of V = ∑_i=1^N [ V^ext( x_i) + 1/2∑_j≠ iV^int( x_i, x_j)]. To this end, we integrate the equations determined by the system's Hamiltonian numerically using the velocity-Verlet algorithm <cit.>. We also rescale the particle velocities according to the stochastic Berendsen thermostat <cit.> with relaxation time τ. To initiate the system in a state of sufficiently low potential energy, we assign initial particle positions according to the corresponding ideal gas ensemble. Then we run the collision algorithm described in Sec.  with collision strength u=1. Moreover, we set the temperature by defining the root-mean-square initial velocity v_0 per particle and assigning a random velocity vector drawn from a two-dimensional Gaussian distribution with standard deviation v_0. As parameters, we choose v_0=6, γ=0.08, N=10^4, Δ t = 0.01, ε=20, z=3, τ=100Δ t. To speed up the numerical integration, we only consider pairs of particles that lie within distance r≤ 6z, found by constructing and querying a k-d-tree for each time step. When calculating the interaction energies, we therefore shift the potential V_LJ so that V_LJ(r=6z)=0. We integrate the equations of motion until t=10^4τ. The energy time series for a single run can be seen in Fig. <ref>. The final configuration of this run is shown in Fig. <ref>. §.§.§ Emulating attractive hard disks We generate a configuration of hard disks with an unspecified attractive force (i.e. we do not explicitly integrate the equations of motion for a hard-sphere interaction potential with an additional attractive force). To do this, we first generate N=10^4 random positions according to a radial Erlang distribution with scale parameter R=675 (i.e. an ideal gas). Then, we draw a random radius for each position from a heterogeneous distribution with power-law tail. We first draw values ẑ_i p(ẑ) = a^2-1/2a×ẑ^a, ẑ≤ 1 1̂/z^a, ẑ > 1 and then assign disk radius z_i = 3 ẑ_i/<ẑ>. We choose a=4 to obtain a heterogeneous distribution with non-finite variance in disk area. Afterward, we run the collision algorithm outlined in Sec.<ref> with collision strength u=1. This leads the initially overlapping disks to take positions where their boundaries touch, i.e. an unlikely configuration to be found in the absence of an attractive force that would cause the disks to lie right at each other's boundaries. An example configuration of this method is displayed in Fig. <ref>. §.§ Collision-resolving algorithm We have N disks with initial positions x_i, radii R_i, and diameters D_i=2R_i, respectively. With the distance vector r_ij= x_i - x_j and distance r_ij=| r_ij|, let ℰ= {(i,j) : (i<j) ∧ (r_ij < max(D_i, D_j)) ∧ (r_ij < R_i + R_j) } be the set of pairs of disks that overlap. This set can be found by iterating over all disks i, finding all neighbors within distance D_i, for instance using a k-d-tree. With this definition, let 𝒥_i = {j: (i,j) ∈ℰ∨ (j,i) ∈ℰ} be the set of disks that overlap with disk i. Then, Δ̃_i = u ∑_j∈𝒥_i r_ij/r_ij(R_i+R_j-r_ij) m_j/m_i+m_j. 
is the demanded initial displacement, with the displacement rate 0<u≤1 (imagine two overlapping disks—with u=1, the collision would be resolved after one update). The masses m_j control the strength of the displacement. Suppose that disk i has s small mass and a colliding disk j has a great mass. The colliding disk j should move less. Hence, the influence on disk i from disk j should be proportional to m_j. Assuming homogeneous density of all disks, we can set m_i = R_i^2. We also want to avoid that disks move too far per one single update to avoid large jumps. Therefore, we move each disk by the final displacement vector Δ_i = Δ̃_i×min(1,R_i/|Δ̃_i|). i.e. the disk shouldn't move more than its radius per update. We update the whole ensemble of disks step by step until either |ℰ| = 0 or max_i|Δ_i| < ϵ. With a default value of ϵ=10^-10 m. If all disks are of equal radius R, we instead choose the second stop condition as min_(i,j)∈ℰ r_ij > (1-ϵ) 2R and ϵ=10^-3. §.§ Random non-overlap positioning algorithm of disks We want to randomly distribute N hard disks of radii R_i in shape Ω. We start with the largest disk of radius R_max and iterate over all disks in decreasing order of size. For each disk j, we generate a random position x ∈Ω until the condition | x - x_i| > R_j+R_i ∀ j<i is satisfied, i.e. drawing new random positions until there are no overlaps with other, already placed disks. Then, assign x_j← x and continue with the next disk. §.§ Influence of building location definition on pair distribution In the dataset, the location of a building is defined as the location of the building's entrance door. We want to check how the Lennard-Jones ensemble's pair distribution and the pair-correlation function g(r) change when the location of a building is not associated with its center. To this end, we take the configuration of Lennard-Jones disks as shown in Fig. 2d in the main text and redefine a disk's location to be (i) randomly within the disk and (ii) randomly on the rim of the disk. The resulting pair distribution and g(r) are shown in Fig. <ref>. We see that the sharp peak almost disappears for both. At the same time, the onset of g(r) becomes less abrupt, approaching a shape similar to that observed in the data. §.§ Measuring the pair distribution function for large dataset Calculating the pair distribution function for large datasets requires a substantial amount of memory. The number of different pairs among N points is N(N-1)/2. Consequently, for the dataset of 34 million address coordinates within France (Fig.<ref>), we need to store 10^15 distance values in memory, which is equivalent to 4,000 terabytes of data using 32-bit floats. Such a large amount of data storage is practically unfeasible. To overcome this computational hurdle, we proceed in two steps using a k-d-tree. §.§.§ Small distances Let 𝒟 be the two-dimensional, non-contiguous shape containing every building (or address, respectively) in Denmark and δ the set of buildings (addresses) i with locations x_i∈𝒟. For each predefined subset of this form Ω⊆𝒟 (for example, the official boundaries of a city), we iterate over all building (address) positions ω = {i ∈ D: x_i∈Ω} located within Ω and compute the distances to all buildings (addresses) j∈δ∧ j≠ i with 0<|x_i-x_j|<r_max=200 m. That is, for each predefined subset of 𝒟 (e.g. city), we find all distances of every one of its buildings with respect to all buildings (addresses) that lie within radius r_max, not just to those that also lie within its boundaries. 
We compute this histogram with a resolution (i.e. bin width) of 1 m. For this task, we use a k-d-tree on all locations of δ. For each city i in Denmark, let Ω_i be the shape that is defined by its administrative boundaries. Note that none of the Ω_i overlap. We define as Ω = 𝒟\(⋃_iΩ_i ) the shape that includes land that is not associated with a city. Thus, iteration over all Ω_i and Ω allows us to analyze the small-scale structure of every city, every building (address) that is not located in a city, and - by combining all these histograms - of every building (address) in Denmark. We inferred the mesoscale scaling parameter α by fitting p(r)=C r^α against the respective empirical pair distribution with 1m resolution in the range r∈(25 m,200 m), using least squares. We find <α>=0.69 and Std[α] = 0.04 for the building pair distribution s of the 30 largest cities as well as <α>=0.67, and Std[α] = 0.05 for addresses, cf. Fig. <ref>. Here, “largest” refers to the number of registered residential buildings with locations within the administrative boundaries of the city. For all buildings in Denmark, we find α=0.67 and for all addresses, we have α=0.69. Example analyses can be seen in Fig. <ref> for buildings and in Fig. <ref> for addresses. Note that the local environment around buildings and addresses that are not within a city/town boundary grows much slower with α=0.30 for both. §.§.§ Larger distances To compute the pair distribution for larger distances, we proceed as follows. For each shape Ω_i (and 𝒟, respectively), we find the set of its building (address) locations ω_i (and δ, respectively). From this set we sample n=min(|ω_i|,3×10^4) unique locations (without replacement). Then, we find the pairwise distances of all pairs of these sampled locations and bin them to find histograms. Note that to obtain the country-wide pair distribution displayed in Fig. 2a in the main text, we combine the respective pair distributions from the small-distance and large-distance analyses by requiring that they take the same value at r=184.5 m. We show the pair distributions for buildings and addresses for the 100 largest Danish cities in Fig. <ref> and <ref>. In Fig. <ref> we show the empirical and fit pair distributions of all Danish cities with more than 30 buildings as well as the distributions of the inferred values of R and m, respectively. The tail of the radius distribution R scales as p(R)∝ 1/R^3.29, with x_min = 824 m, inferred by the MLE technique in refs. <cit.>. The tail decay (r/R)^m has values of m with <m> = 1.61 and Std[m]=0.44. §.§.§ Between cities To compute the pair distribution and g(r) between cities, we define the `city center` as the centroid of a city's (multi-) polygon, i.e. its geometric center: the center of mass of its shape. Weighting each inter-city distance r_ij by the number of pairs of buildings m_im_j it contains, we find a pair distribution that approximates the empirical building pair distribution (see Fig. <ref>). This pair distribution has a clear peak in g(r), suggesting an abstract attractive force between cities of a certain size. However, the onset of g(r) is consistent with the weighted pair distribution of the hard-disk model configuration. 
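As an illustration of the weighting step just described, the following sketch (ours; the centroids and per-city building counts are toy stand-ins for the data) histograms all inter-centroid distances r_ij with weights m_i m_j.

```python
# Weighted inter-city pair distribution: centroid distances weighted by building-pair counts.
import numpy as np

rng = np.random.default_rng(3)
centroids = rng.uniform(0, 300_000, size=(1_473, 2))        # toy city centroids (m)
m = rng.pareto(1.5, size=len(centroids)) * 50 + 30          # toy building counts per city

i, j = np.triu_indices(len(centroids), k=1)                 # all unordered city pairs
r = np.linalg.norm(centroids[i] - centroids[j], axis=1)     # centroid distances r_ij
w = m[i] * m[j]                                             # building-pair weights m_i*m_j

hist, edges = np.histogram(r, bins=200, weights=w, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])                    # weighted inter-city pair distribution
```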
§.§ Pair distribution from independent patches of heterogeneous size We consider a multiverse consisting of an infinite amount of universes indexed by i, each of which is inhabited by a patch of size R_i with N_i buildings inside, contributing to the multiverse with a pair distribution number density of n(r,R_i) ≈ N_i^2 p(r,R_i,m) (each universe contributes a number of building pairs amounting to N_i^2). Furthermore, we assume that there is a constant population density, which is proportional to the number of individuals per unit area, and that this density is inversely proportional to the square of the radius, ρ_0∝ N/R^2. This implies that the number of individuals in a given area is proportional to the area itself, N∝ R^2. For each of these universes (or rather, each of these patches), we calculate the number of building pairs at distance r and assume that the patch sizes R are distributed according to some distribution f(R), which is currently unspecified. The joint distribution of house pairs at distance r is then given by n(r) ∝∫_0^∞ dR n(r,R) f(R) ∝∫_0^∞ dR f(R) R^4 r/R^2exp(-(r/R)^m) . To simplify the integral, we change variables to β = 1/R, such that dR = -β^2dβ and n(r) ∝∫_0^∞ dβ f(β^-1) r/β^4exp(-(β r)^m). The integral in question has a solution in the case where f = β^4 - α with α < 1. We assume that this is the case, the integral now becomes, n(r) = C ∫_0^∞β^-α r exp(-(rβ)^m) dβ where C is a normalization constant. We begin with the Gamma function Γ(z) =∫_0^∞ t^z-1exp(-t)dt. This integral converges for z>0 if z∈𝐑. Substituting t=(rβ)^m yields d t = m r^mβ ^m-1dβ and therefore Γ(z) =∫_0^∞ (rβ)^mz-mexp(-(rβ)^m) m (rβ)^m β^-1dβ =mr^mz-1∫_0^∞β^mz-1 rexp(-(rβ)^m) dβ =mr^-α C^-1C∫_0^∞β^-α rexp(-(rβ)^m) dβ. Here, we introduce z = (1-α)/m, which, due to the lower bound of z > 0, leads to an upper bound of α < 1. We recognize the integral n(r) on the right and therefore find n(r) ∝ r^α C Γ(1-α/m). If the radius of patches in the multiverse were distributed according to the function f(R) ∝ R^-(4-α), the initial growth of the joint pairwise distribution would follow a sublinear power law. Notably, the scaling exponent would not depend on the tail parameter m. Let us consider the implications of this for the area of the patches. The area would scale as A∝ R^2 (or R∝√(A)), indicating that the distribution of the area would follow f̃(A) = dR/d A f(R=√(A)) ∝1/A^(5-α)/2. In the data, we observe α=0.67. That means that the area of the patches would have to be distributed according to a power law with exponent μ=-(5-α)/2≈ -2.17, which is well within what has been found empirically for cities <cit.>. Furthermore, our city radius inference analysis (cf. Fig. <ref>) indicates that f(R)∝1/R^3.29, which leads to μ=-2.15. This demonstrates that the results of these two separate analyses are consistent. As previously demonstrated, the sublinear scaling exponent α is independent of the tail parameter m. We extend our analysis by using the parabola pair distribution model Eq. <ref>. We have n(r) ∝∫_0^∞ dR n(r,R) f(R) = ∫_r/2^∞ dR f(R) R^4 (-3/4R(r/R)^2 + 3r/2R^2) . Here, the lower bound in the integral comes from the condition that r≤ 2R. Now, as above, we demand f(R)∝ 1/R^4-α to find n(r) ∝∫_r/2^∞ dR R^α(-3/4R(r/R)^2 + 3r/2R^2) = ∫_r/2^∞ dR (-3r^2/4R^3-α + 3r/2R^2-α) ∝ r^α. We solve the respective integrals n(r) for the circle, generalized pair distribution, and parabola pair distribution models numerically and find that the above derivation holds for all three (see Fig. <ref>). 
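The scaling n(r) ∝ r^α can also be confirmed directly by numerical integration. The sketch below (our own check, using the generalized pair distribution model with an arbitrary choice of m) evaluates the superposition integral with f(R) ∝ R^{-(4-α)} and fits the slope on a log-log grid.

```python
# Numerical check that the superposed pair distribution grows as r^alpha, independent of m.
import numpy as np
from scipy.integrate import quad

alpha, m = 0.67, 1.5

def n(r):
    integrand = lambda R: R**(-(4.0 - alpha)) * R**4 * (r / R**2) * np.exp(-(r / R)**m)
    return quad(integrand, 0.0, np.inf, limit=200)[0]

rs = np.logspace(0, 2, 10)
ns = np.array([n(r) for r in rs])
slope = np.polyfit(np.log(rs), np.log(ns), 1)[0]
print(f"measured exponent: {slope:.3f}  (expected alpha = {alpha})")
```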
§.§ Pair distribution of a self-similar modular hierarchical model of building locations One limitation of the multiverse approach is that it is only applicable if the patches are truly independent or sufficiently separated so that the scale of the pair distribution number density n(r) does not affect the outcome. However, it is plausible that patches may be in close proximity to each other. This is supported by findings in <cit.>. Additionally, they discovered that a collection of these patches forms a fractal, or self-similar structure. While it is possible to demonstrate that a fractal dimension of building location does not necessarily result in sub-linear growth of the pair distribution function, it is certainly possible to investigate the consequences of such self-similar patch location on it. We assume a self-similar modular hierarchical structure comprising patches of buildings. We start with a single unit of b buildings, each of which has a radius z_1. These are located within a patch of size R_1. Our objective is to regulate the building density in such a way that the number of buildings in a patch is given by b = θ (R_1^2/z_1^2). Here, θ represents the packing fraction. This implies that the patch radius is computed as follows: R_1 = z_1√(b/θ). Now, consider that there are b of these patches of radius R_1, located in a larger patch of higher order, which is (self-)similar to the basic patch. We posit that each of the lower-order patches has a radius of z_2=R_1, while the higher-order patch has a radius of R_2=z_2√(b/θ). Subsequently, we add b-1 similarly constructed patches to form an even larger patch. This entails constructing a self-similar structure of patches where the size of each patch of hierarchical order k is given by R_k = z(b/θ)^k/2 and the size of each sub-patch of a patch is given by z_k = z_1(b/θ)^(k-1)/2. Given a maximum number of L orders (layers), the total number of houses is eventually b^L. In each layer k, there are b^L-k patches of order k. Now assume that for each patch i of order k and location s_i,k, we distribute the location of its b sub-patches randomly within this patch such that the pair distribution function of sub-patches leads to a pair distribution p(r, R_k of scale R_k. To prevent the patches from overlapping excessively, a collision algorithm is employed for the b sub-patches (preventing collisions between disks with radius z_k). We repeat this process recursively for each patch until the depth of the N-ary hierarchy tree reaches L. The leaves of the tree represent buildings. As these buildings will overlap, another collision algorithm is run until they no longer overlap. We now estimate the joint pairwise-distance distribution of the entire structure. Consider a container patch that contains b sub-patches. If we assume that the total the B buildings within a sub-patch of scale k-1 are sufficiently concentrated within its center, the contribution of this patch of radius R_k will be proportional to the pair distribution p(r,R_k), weighted with the total number of pairs of buildings within this container-patch, except the pairs of buildings within the same sub-patch. Consequently, the number of pairs therefore scales as ∝ B^2 b(b-1), or, neglecting the linear contribution, as ∝ (bB)^2. For each patch in layer k, there are going to be B=b^k-1 buildings in a sub-patch. Therefore, the number of pairs of buildings in a patch of scale k grows as ∝ b^2k. 
Each of these patches contributes to approximately b^2k pairs to the joint pairwise-distance distribution. There are b^L-k patches of order k with radius R_k. Therefore, the total contribution to the joint pairwise-distance distribution of this scale is n_k(r, R_k) ∝ b^k p(r, R_k) = b^k p(r, z(b/θ)^k/2) where p(r,R) depends on how sub-patches are distributed within patches. The total pair distribution number density is given by n(r) = ∑_k=1^L b^k p(r, z(N/θ)^k/2). An explicit form using the generalized pair distribution function model is n(r) ∝∑_k=1^L b^k r/R_k^2exp(-(r/R_k)^m) . = r/z^2∑_k=1^Lθ^kexp(-(r/z)^m (θ/b)^km/2). We compute this equation numerically for varying θ, b, and different models for p(r,R). Figure  <ref>,illustrates examples for b=4 and θ=0.8, which leads to a scaling of approximately ∝ r^0.67. To derive the dependence of the exponent of the observed scaling law on the parameters, we use the saddle-point approximation. First, we approximate the sum over hierarchy layers with an integral over a constant hierarchy layer density n(r) ∝ r ∫_1^∞ dk exp[ klnθ -(r/z)^m exp(mk/2lnθ/b) _=-w(k)]. Relying on the saddle-point method, we can approximate the integral to find n(r) ∝ r exp[-w(k_0)]/√(w”(k_0)) where k_0 is the minimum of w(k). We compute the derivatives w'(k) = -lnθ + (r/z)^m m/2lnθ/bexp(mk/2lnθ/b) w”(k) = (r/z)^m (m/2lnθ/N)^2 exp(mk/2lnθ/b) and with w'(k_0)=0 find the following equations for the minimum exp(mk_0/2lnθ/b) = lnθ/ln(θ/b)(z/r)^m 2/m k_0 = 2/m1/lnθ/b[ ln(lnθ/ln(θ/b)) + m( ln z - ln r) + ln(2/m) ]. Using the first of the two equations we note that for w(k_0), the dependence on r cancels out in the second term and so -w(k_0(r)) = k_0(r)lnθ - W where W is an irrelevant constant. Second, we use the same equation to see that w”(k_0) does not depend on r and therefore does not concern us any further either. As a step in between, this means that n(r) ∝ r exp[k_0(r)lnθ]. Looking at Eq. (<ref>), we see that the only r-dependent term gives the minimum a structure of k_0 = K - 2/ln(θ/b)ln r, and therefore we find n(r) ∝ r exp[- 2lnθ/ln(θ/b)ln r] = r^1- 2lnθ/ln(θ/b), i.e. the exponent of the sub-linear growth in the pair distribution function is given as α = 1- 2lnθ/ln(θ/b). We can compare this estimation with numerical results. Above we used N=4 and θ=0.8 to find α≈ 0.67. Eq. (<ref>) yields α=0.72, an acceptable approximation. Notably, this result is also independent of the parameter m, which represents the explicit form of the pairwise distance distribution per patch. This explains why similar results are obtained despite the use of different geometries. We can perform the same analysis for the parabola pair distribution model Eq. (<ref>). We have n(r) ∝∑_k=1^L b^k/R_k^2(-3/4r^2/R_k + 3r/2) = r/z^2∑_k=1^L θ^k (-3/4r/R_k + 3/2). Again, we use the saddle point method to approximate n(r) ∝ r exp[-w(k_0)]/√(w”(k_0)), with k_0 being the minimum of the function w(k) = -klnθ - ln(-3/4r/zθ^k/2/b^k/2 + 3/2). We arrive at n(r)∝ r θ^2 ln(4 z ln(θ)/r (2 ln(θ) + ln(θ/b)))/ln(θ/b) 3 √(π)ln(θ/b)/z^2√((2 ln(θ) + ln(θ/b)) ln(θ))(2 ln(θ) + ln(θ/b)). Leaving aside the irrelevant prefactors, we have ln n(r) ∝ln r + lnθ(K - 2ln r/ln(θ/b)) ∝ln r (1-2lnθ/ln(θ/b)), i.e. the same result as with the generalized pair distribution. 
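The following short computation (ours) evaluates the hierarchical sum for b=4 and θ=0.8 and compares the measured log-log slope with the saddle-point prediction α = 1 - 2lnθ/ln(θ/b).

```python
# Evaluate the self-similar hierarchical sum and compare with the saddle-point exponent.
import numpy as np

b, theta, m, z, L = 4, 0.8, 2.0, 1.0, 40

def n(r):
    k = np.arange(1, L + 1)
    return (r / z**2) * np.sum(theta**k * np.exp(-(r / z)**m * (theta / b)**(k * m / 2.0)))

rs = np.logspace(1, 4, 30)                      # intermediate scales, away from the cut-offs
ns = np.array([n(r) for r in rs])
slope = np.polyfit(np.log(rs), np.log(ns), 1)[0]
alpha_saddle = 1.0 - 2.0 * np.log(theta) / np.log(theta / b)
print(f"measured slope = {slope:.2f}, saddle-point alpha = {alpha_saddle:.2f}")
```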
§ STUDYING MOBILITY WITH THE PAIR DISTANCE FUNCTION §.§ Comparison with other Mobility models §.§.§ Gravity The gravity model as originally introduced in <cit.>, states that the number of moves T from area i to j is proportional to, T_i → j∝N_i N_j/d_i,j with N_i the population at location i and d_i,j the distance between locations i and j. §.§.§ Radiation In this section, we derive a continuous radiation model and demonstrate how it differs from our continuous gravity model. The radiation model of human mobility is usually represented by the following equation <cit.>, P_ij∝m_i m_j/(m_i + s_ij)(m_i + m_j + s_ij) with P_ij the predicted number of people traveling from location i to location j, m_k the population at the origin location k, s_ij represents the total population in a circle centered at i with radius equal to the distance between i and j, excluding the populations of i and j. Following <cit.>, the probability of having a single particle emitted from location i to location j is, P(1| m_i,m_j,s_i,j) = ∫_0^∞dz P_m_i(z) P_s_i(<z)P_m_j)(>z) with, P_m_i(z) = d P_m_i(<z) /d z = m_i p(z)^m_i-1d p(<z)/d z if m_i=1, then it simplifies to P_1(z)=d P(<z)/dz. Equation <ref> simplifies to, P(1| 1,1,s_i,j) = ∫_0^∞ (1-P(<z)) P(<z)^s_i,jd P(<z) = 1/(1+s_i,j)(2+s_i,j) In the general case when s_i,j≫ 1, that is, when the number of houses in the circle of center i and radius the distance i and j is greater than one, which is often the case when the distance is sufficiently large (see Fig. <ref>), P_ij = 1/(1 + s_ij)(1 + 1 + s_ij)∼ s_ij^-2. However, the number of addresses in the circle s_i,j of center i and radius R(i,j) is closely related to the primitive of the pair distribution function, s_x,y = ∫_Ω dz p^d(z) δ{R(x,z)<R(x,y)}, s(r) = ∫_x dx ∫_y dy s_x,y p^d(x) p^d(y) δ(r-R(x,y)), s(r) = ∫_x dx ∫_y dy s_x,y p^d(x) p^d(y) δ{r>R(x,y)} In the meantime, the cumulative distribution of pair distribution is (with p the pair distribution function), Φ(r) = ∫_0^r p(s)ds, Φ(r) = ∫_0^r ds ∫_Ω d x ∫_Ω d y ^d( x) p^d( y) δ(s - R( x, y)), Φ(r) = ∫_Ω d x ∫_Ω d y p^d( x) p^d( y) δ{r>R( x, y)}, Φ(r) = s(r). If we come back to the derivation of the pair distribution, f(x,y)d^2xd^2y = π(x,y) ϱ^d(x) ϱ^d(y) d^2xd^2y. In our continuous gravity law model, we assume that π(x,y)=π(R(x,y))=π(r). On the other hand, the radiation model assumes that, π(x,y) = s(x,y)^-2= Φ(x,y)^-2 If we assume that on average, s(x,y) ≃∫_0^|x-y|dr p(r) = Φ(r). Then the two models will agree on average if, Φ(r)^2 = r The pair distribution function would be p(r)=± r^-0.5. While the empirical pair distribution (Fig.2a) exhibits more complex patterns than this, and the two models appear to be irreconcilable. §.§ The intrinsic distance cost as the inverse of distance Consider an individual situated at location i. The utility associated with moving to location j can be expressed as follows, U(i,j) = V(j) - C(r(i,j)) where V(j) represents the inherent values of location j and C(r(i,j)) is a function of the distance r(i,j)) between i and j. The probability of selecting location j can be calculated using the multinomial logit model, as outlined in <cit.>, f(i,j) = e^ U(i,j) /∑_ke^ U(i,k) The probability to move of a distance r is the sum over all locations k for which r(i,k)=r. This probability becomes independent of i. Thus, the probability of moving of a distance r aggregates over all locations i as f(r) = ∑_i f(i,r)f(i). For simplification, let us that f(i) is constant, implying that the probability of movement from any given location is equal. 
Furthermore, let us assume that for all i,j ∈Ω, f(i,r) = f(j,r). In this case f(r) ∝ N f(i,r). Moreover, the number of locations k satisfying r(i,k)=r corresponds to the pair distribution function, p(r), up to a constant, with this constant being the total number of locations. f(r) = ∑_i∈Ω∑_k| r(i,k)=r e^ U(i,k) /∑_ke^ U(i,k) Assuming a uniform value V(k) across all locations, we derive: f(r) = ∑_i∈Ω∑_k| r(i,k)=r e^ V - C(r(i,k)) /∑_ke^ U(i,k) ∼p(r) e^ V -C(r) with p(r) the pair distribution function evaluated at r. Given the human tendency to perceive distances logarithmically, especially for larger magnitudes, we posit that the cost function C(r) ∝log(r). This hypothesis is supported by the observation that humans might perceive numerical differences in a logarithmic fashion, such as perceiving the difference between 1 and 2 similarly to the difference between 10 and 20 or even 100 and 200, as demonstrated in <cit.>. Consequently, upon normalization by the pair distribution function, we derive the equation f(i,r)/p(i,r)∼ e^ V - log(r) = e^log(V_2) - log(r) = V_2/r where V = log(V_2). This leads to f(i,r)/p(i,r)∼1/r which is equivalent to equation (<ref>) of the intrinsic distance cost, given π(r)=1/r. §.§ Reconciling Opportunity and Distance-Based Mobility Paradigms As highlighted in <cit.>, in classical mobility studies, two divergent theories explain the movement of individuals. The first, inspired by Newton's law of gravity, suggests that mobility decreases as the physical distance between locations increases, often modeled as a power law of distance <cit.>. The second theory argues that mobility is not directly related to distance but to intervening opportunities. It suggests that individuals prioritize locations based on the availability of closer opportunities rather than distance itself, leading to movements driven more by the opportunity distribution than by travel constraints <cit.>. The gravitational model describes how the flow of individuals between two locations decreases as the distance between them increases, analogous to the decrease in gravitational pull with distance in Newton's theory. This decline in mobility is modeled using functions such as exponential or power laws <cit.>. In this framework, the population sizes of the start and end locations act like masses, attracting travel in direct proportion to their size and inversely to the square of the distance between them. Known for its simplicity and ease of computation, the gravity model is widely used in various fields such as migration, intercity communication, and traffic flow analysis. It can also be extended by integrating socio-economic characteristics of locations to improve its accuracy and applicability <cit.>. Conversely, the intervening opportunity model emphasizes that mobility is mainly driven by the availability of intervening opportunities rather than distance <cit.>. It suggests that individuals are influenced more by the availability of nearby opportunities than by the distance to distant opportunities, with the spatial distribution of these opportunities dictating destination choices, making distance a less critical factor. The radiation model <cit.>, an extension of this theory, streamlines the analysis by assuming that the chosen opportunity is the optimal one. This model correlates opportunity density with population density and provides a mathematical formula for predicting trip endpoints. 
The model has also been extended to a continuous spatial framework, providing a nuanced understanding of how social factors influence movement patterns <cit.>. Our framework provides a unifying approach to the intervening opportunity model and distance-based model In the equation <ref>, f(r) ∝π(r)p(r). The pair distribution function p(r) captures the available opportunities to move, when we divide the moving distance distribution by the pair distribution, we normalize the empirical mobility traces by their intervening opportunities. Consequently, the remaining quantity is found to be dependent on the distance, with the result that pi(r)=f(r)/p(r)≃ 1. This can be interpreted as a distance term analogous to the gravity model. Furthermore, our framework explain why intra-city movements are better fitted by a gravity model with an exponential decay, which is similar to the exponential decay of population density <cit.>. §.§ Local piece-wise power law In this section, we investigate whether the continuous gravity model still holds at the scale of cities. The observed moving distribution f(r) can be decomposed into the observed moving distribution of each city c_i in the set 𝒞 of cities in Denmark, f(r) = ∑_c_i ∈𝒞 f_c_i(r), where f_c_i(r) only counts moves originating from city c_i, i.e. f_c(r)=∫_x ∈ cdx ∫_0^∞dy f(x,y). For each city's observed moving distance the appropriate normalization is not the country-wide pair distribution but the relative pair distribution function, which counts pairs with at least one location in the city of interest. We define, p_c(r) = ∫_x ∈ cdx ∫_y ∈Ωϱ^d(x) ϱ^d(y) dxdy. We can now compute the relative intrinsic distance cost, π_c(r) = f_c(r)/p_c(r). In the main manuscript Figures.3f-g show that the relative intrinsic distance cost does not follow the continuous gravity law and leads to a piece-wise intrinsic distance cost, Figures <ref>-<ref> show the same process for many more cities. The piece-wise intrinsic distance cost is a piece-wise power law distribution of the form: π_c(r) = C_1 (R_c/r)^β, 0 ≤ r ≤ R_c C_2 (R_c/r)^α, r > R_c where R_c is the mobility inferred city radius, α>1, β∈ℝ, and x ∈ℝ^+. The three parameters α, β, r_c are estimated by maximum-likelihood and compared to a log-normal, Pareto, and exponential Pareto distribution <cit.>. The likelihood ratios and p-values for all cities are reported in figures <ref>-<ref>. We obtain that on average, β=0.60± 0.20, α=2 ± 0.21, R_c follow a heavy-tailed distribution between a power law and a log-normal distribution. §.§.§ The remarkable case of islands In the section, we highlight the remarkable point that the piece-wise two-step intrinsic distance cost for cities still holds remarkably well for islands (Fig.<ref>). In the case of islands, the inter-city part of the piece-wise process (when the exponent is on average 2), it mostly across the sea, and therefore the relative pair distance distribution is null. However, if the distance is large enough to reach the nearest landmass, the pair distance distribution is positive again and the 2 exponent can be interpolated between the island and the nearest landmass (see Fig.<ref>). §.§ Simulation of piece-wise intrinsic distance cost In this section, we try to recover the intrinsic distance cost, π(r)=1/r from the aggregation of the piece-wise process of equation <ref>. 
We can decompose the intrinsic distance cost as follows, π(r) = f(r)/p(r) = ∑_c ∈𝒞f_c/p(r) = ∑_c ∈𝒞π_c(r)p_c(r)/p(r) The only term that is unknown in this equation is p_c(r) the relative pair distance distribution function for each city. However, we have already shown that the pair distribution of cities follows a generalized gamma distribution within the city boundary. Beyond the city boundary, we have shown that the positions of cities follow an ideal gas (i.e. random position but no overlap), which implies that we can model the relative pair distribution function with a linear growth beyond the city radius. The relative pair distribution function of a city, c, is thus, p̂_c(r) = C_1 (r/R_c^2)exp(-(r/R_c)^m) , 0 ≤ r ≤ R_c C_2 r, r > R_c where C_1 and C_2 are two constants such that p̂_c is continuous. To obtain the global pair distribution we need to weight each relative pair distribution by its contribution, which is equal to the number of addresses. We assume that the number of addresses is proportional to the square of the radius and p_c= R_c^2 p̂_c. For simplicity, we also assume that the general pair distribution function p follows, p(r) = C_3 r^2/3 , 0 ≤ r ≤ R_max C_4 , r > R_max where R_max= max{R_c , c∈𝒞} the largest city radius. The form of the pair distribution function closely follows the empirical one of Figure 2a. Therefore, we obtain the intrinsic distance cost when aggregating over the city scale, π_c(r)p_c(r)/p(r) = C_5 (r^-1/6/R_c^2)exp(-(r/R_c)^m) , 0 ≤ r ≤ R_c C_6 r^-5/3 , R_c < r ≤ R_max C_7 r^-1 , r > R_max Remarkably, the inverse of distance scaling of the intrinsic distance cost for r≥ R_maxemerges as a sole consequence of the random distribution of city locations. The scaling r≤ R_max results from the aggregation over the scale of cities, akin to our patch model SI. <ref> or <cit.>. We obtain that for r≤ R_max, π(r) = ∫_0^r C_5 (r^-1/6/R_c^2)exp(-(r/R_c)^m) g(R) dR + ∫_r^∞ C_6 r^-5/3 g(R)dR where g is the distribution of the city radius. From the main text Fig.3, g can be modeled as a log-normal or a power-Pareto distribution. We solve the equation <ref> numerically by simulating over the city intrinsic distance cost over an artificial geography of Denmark (eq.<ref>,<ref>). We recover the intrinsic distance cost π(r)= 1/r (Fig.<ref>). § THE DEFINITION OF CITY The precise demarcation of a city — where a city ends and its boundaries — is a pressing issue in academic literature <cit.>. Depending on the definition adopted, different results can be obtained <cit.>. To ensure that our results are not just a byproduct of our chosen city definition and that they could potentially be applied to other definitions, we evaluated the robustness of the city boundaries. First, we adopted the city definition provided by Danmarks Statistik <cit.>. According to this definition, Denmark consists of 1,473 cities. To ensure more robust results, we compare the city definition to cities defined by density-based clustering techniques. These methods distinguish densely clustered data points that represent urban or urbanized areas from sparse or noisy regions that typically correspond to rural or sparsely populated areas. DBSCAN is particularly well-suited for this task because of its innate ability to delineate clusters of different geometries, which is critical for accommodating the non-uniform shapes of urban regions. This algorithm sorts data points into core points, boundary points, and noise based on surrounding data density. 
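For concreteness, a minimal clustering sketch is given below (our own illustration; the input file name and the parameter values eps and min_samples are hypothetical). It assigns projected address coordinates to dense clusters with DBSCAN; HDBSCAN, discussed next, removes the need for a fixed ϵ.

```python
# Density-based clustering of address coordinates into "cities" (illustrative only).
import numpy as np
from sklearn.cluster import DBSCAN

coords = np.load("addresses_projected.npy")     # hypothetical file of projected (x, y) in meters
labels = DBSCAN(eps=200.0, min_samples=50).fit_predict(coords)   # assumed parameter values
# label -1 marks noise, i.e. addresses not assigned to any dense urban cluster
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"found {n_clusters} dense clusters")
```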
However, one of the main challenges is to determine the optimal values for ϵ (neighborhood search radius) and minimum data points, as these parameters can determine the level of detail of the identified urban zones. <cit.> . For data covering cities with different population densities, HDBSCAN stands out. Building on the foundation laid by DBSCAN, HDBSCAN employs a hierarchical tactic that makes the fixed ϵ value redundant. By examining the density hierarchy, HDBSCAN can differentiate between densely populated cities and smaller towns within the same data collection. This granularity, coupled with the flexible clustering results, provides a detailed view of the boundaries of cities <cit.>. Copenhagen, however, presents a unique scenario. Its urban influence extends well beyond its administrative boundaries, as evidenced by phenomena such as commuting patterns. As a result, neighboring cities have been merged with Copenhagen to form the Copenhagen metropolitan area, or Hovedstadsområdet, as defined by DST. Notably, the clustering results from HDBSCAN reflected this merging, indicating the combined urban sprawl of the Copenhagen region. Normalized Mutual Information (NMI) and Adjusted Mutual Information (AMI) quantify the similarity between two clusterings. Both metrics measure the information shared between the true labels and the labels assigned by a clustering algorithm. Therefore, we can use them to evaluate the performance of clustering algorithms in the absence of ground truth labels. The NMI is defined as the mutual information between two clusterings divided by the geometric mean of their entropies <cit.>. However, a limitation of the NMI is that it ignores the random grouping of clusters, i.e. random cluster assignments can produce a non-zero NMI value. The AMI, on the other hand, corrects for this limitation by adjusting the score to account for chance, ensuring that random cluster assignments result in an AMI score close to zero <cit.>. Therefore, the AMI provides a more accurate representation of the similarity between two clusterings in our case where the number of clusters is not fixed. The table <ref> shows the value of the indices for the two algorithms. An NMI/AMI score of 0.9 indicates that the two clusterings share a significant amount of information. Such a high score typically indicates that the two clusterings are almost identical, with only a few data points potentially clustered differently. Therefore, the city definition from Danmarks Statistics is substantially similar to the one we obtain using density-based clustering techniques. Furthermore, table <ref> shows that merging Copenhagen into the Hovedstadsområdet is a better definition of a city according to the density-based clustering techniques. The parameters for HDSCAN were chosen to optimize the NMI between the clustering and the real city labels. Figure <ref> shows the values for different values of minimum samples and minimum cluster size, the rest of the parameters follow the default values of this implementation <cit.>. Figure <ref> shows all addresses colored by the cluster they belong to. The background color is the official city border. § MAXIMUM LIKELIHOOD ESTIMATION OF POWER LAWS Fitting the statistical power-law model, to examine the distance distribution of human movement, we fit a truncated power law with the form, p(x) ∝ x^-alpha e^-λ x. 
where α is a constant parameter of the distribution (known as the scaling parameter or exponent), x is the travel distance (x > x_min > 0), and λ is the parameter of the exponential cut-off. The threshold x_min represents the shortest distance above which the power-law scaling begins. The methods of <cit.> find x_min by generating a power-law fit starting from each unique value in the data set and then selecting the value that minimizes the Kolmogorov-Smirnov distance between the data and the fit; for any given value of x_min, the scaling parameter is estimated by maximum likelihood. The goodness of fit of the truncated power law was compared to the fit of other distributions (e.g., power law, exponential, and lognormal). Although some of the power laws present a more complicated picture, we used the methods developed in <cit.>, which carefully extend the procedures of <cit.> to a wider class of distributions.
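This fitting and model-comparison step can be carried out, for instance, with the Python powerlaw package of Alstott et al.; the sketch below is our own and the input file is hypothetical.

```python
# Truncated power-law fit and likelihood-ratio comparison with alternative distributions.
import numpy as np
import powerlaw

distances = np.loadtxt("moving_distances_m.txt")           # hypothetical input data
fit = powerlaw.Fit(distances)                              # x_min chosen by minimal KS distance
print("x_min =", fit.xmin, " alpha =", fit.truncated_power_law.alpha)

for alt in ["power_law", "lognormal", "exponential"]:
    R, p = fit.distribution_compare("truncated_power_law", alt)
    print(f"truncated_power_law vs {alt}: R = {R:.2f}, p = {p:.3f}")
```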
http://arxiv.org/abs/2405.09214v1
20240515094715
Hypergraph C*-algebras
[ "Mirjam Trieb", "Moritz Weber", "Dean Zenner" ]
math.OA
[ "math.OA" ]
Hypergraph C*-algebras
======================

We give a definition of hypergraph C^*-algebras. These generalize the well-known graph C^*-algebras as well as ultragraph C^*-algebras. In contrast to those objects, hypergraph C^*-algebras are not always nuclear. We provide a number of non-nuclear examples, we prove a Gauge-Invariant Uniqueness Theorem for a subclass of hypergraph C^*-algebras and we study moves on hypergraphs which generalize the moves in the theory of graph C^*-algebras.

§ INTRODUCTION

In 1980, Cuntz-Krieger algebras <cit.> were introduced, which were later extended to the more general class of graph C^*-algebras. Given a graph, the idea is to associate a universal C^*-algebra generated by projections corresponding to the vertices and partial isometries corresponding to the edges, satisfying certain relations coming from the graph, see below. Graph C^*-algebras form a huge and very important class of examples in the theory of C^*-algebras and they are very well understood. We refer to <cit.> for an overview on this topic. Recently, Eilers, Restorff, Ruiz and Sørensen <cit.> classified graph C^*-algebras by means of K-theory. Their result is a big breakthrough in the theory of graph C^*-algebras. There have been several attempts to generalize graph C^*-algebras. In particular, we want to mention the work by Tomforde <cit.> on ultragraph C^*-algebras. Furthermore, there is related work on higher rank graph C^*-algebras <cit.>, Exel-Laca algebras <cit.>, Cuntz-Pimsner algebras <cit.>, topological graph C^*-algebras <cit.>, edge separated graph algebras <cit.>, Quantum Cuntz-Krieger algebras <cit.>, just to name a few.

In this article, we propose a definition of hypergraph C^*-algebras (note that Fritz <cit.> also gave a definition of a hypergraph C^*-algebra which is completely different from ours). A continuation of the present article is the work by Schäfer and the second author <cit.>, on nuclearity aspects, and the work by Faroß <cit.> on quantum symmetries. Our objects are in their syntax relatively close to graph and ultragraph C^*-algebras. A (finite) hypergraph HΓ=(V,E,r,s) is given by a (finite) set of vertices V, a (finite) set of edges E and source and range maps s,r:E→𝒫(V)\{∅}, where 𝒫(V) is the powerset of V. In this framework, a graph is a hypergraph with the property | s(e)|=| r(e)|=1 for all edges e∈ E. In other words, when passing from graphs to hypergraphs, we replace the set V for s,r:E→ V by the powerset 𝒫(V)\{∅}. An ultragraph is a hypergraph with the property | s(e)| =1 for all edges e∈ E (and | r(e)|≥ 1). In this article, we mostly restrict to the case of finite hypergraphs.

Here is an overview of the definitions of graph C^*-algebras, ultragraph C^*-algebras (in our language) and our definition of hypergraph C^*-algebras: Given a finite hypergraph HΓ=(V,E,r,s), we consider the universal C^*-algebra generated by mutually orthogonal projections p_v, v∈ V and partial isometries s_e, e∈ E such that the following relations hold.
The comparison, with the three cases graph C^*-algebra, ultragraph C^*-algebra and hypergraph C^*-algebra, reads as follows.

Restrictions on r and s: graph: | s(e)|=| r(e)|=1; ultragraph: | s(e)|=1; hypergraph: none.

Relation (1), for all e,f∈ E: graph: s_e^*s_f=δ_ef p_r(e); ultragraph: s_e^*s_f=δ_ef∑_v∈ r(e)p_v; hypergraph: s_e^*s_f=δ_ef∑_v∈ r(e)p_v.

Relation (2a), for all e∈ E: graph: (contained in Relation (2b)); ultragraph: s_es_e^*≤ p_s(e); hypergraph: s_es_e^*≤∑_v∈ s(e)p_v.

Relation (2b), for all v∈ V with s^-1(v)≠∅: graph: p_v=∑_{e∈ E, s(e)=v} s_es_e^*; ultragraph: p_v≤∑_{e∈ E, v∈ s(e)} s_es_e^*; hypergraph: p_v≤∑_{e∈ E, v∈ s(e)} s_es_e^*.

A generalization of graph C^*-algebras is desirable for various reasons:
* Graph C^*-algebras behave nicely with respect to their combinatorial data – much of the structure of the C^*-algebra may be read from the underlying graph. Hence, extending the class by generalizing slightly the underlying combinatorial objects promises to produce a class with similar nice behaviour.
* The class of graph C^*-algebras does not satisfy the “2 out of 3” property: Given a short exact sequence of C^*-algebras, two of which are graph C^*-algebras, the third one might fail to be in that class.
* The class of graph C^*-algebras restricts to nuclear C^*-algebras.
* The class of graph C^*-algebras restricts to C^*-algebras whose K_1-groups are free.
* Since the class of graph C^*-algebras has been completely classified, it is time for the next, larger class.

The article is organized as follows. Section <ref> contains preliminaries on universal C^*-algebras and on graph C^*-algebras. In Section <ref>, we define hypergraph C^*-algebras (for finite hypergraphs, but as a side remark also for infinite hypergraphs), we show that they generalize graph C^*-algebras and ultragraph C^*-algebras, we show that they are always unital, we give a number of easy examples, we discuss the problem of defining paths in hypergraphs and we study the decomposition of ranges. In Section <ref>, we give a number of examples of non-nuclear hypergraph C^*-algebras such as C(S^1)*ℂ^n and 𝒪_m *ℂ^n, and we show how to construct a number of further such examples. Recall that all graph C^*-algebras and all ultragraph C^*-algebras are nuclear, so the class of hypergraph C^*-algebras is substantially larger. In Section <ref>, we investigate gauge uniqueness for hypergraph C^*-algebras: A Gauge-Invariant Uniqueness Theorem holds for all graph C^*-algebras and all ultragraph C^*-algebras, but it fails to be true in general for hypergraph C^*-algebras (we give a counterexample). However, for a certain subclass of hypergraphs, we may prove a Gauge-Invariant Uniqueness Theorem in Section <ref>. In Section <ref>, we study moves on hypergraphs. Moves on graphs play an important role in the classification of graph C^*-algebras as they preserve the C^*-algebra up to Morita equivalence. We define moves S, R, O and I on hypergraphs and we show that for a subclass of hypergraphs the first three moves yield some weakening of Morita equivalence, whereas the latter one yields isomorphic C^*-algebras. We end with a number of open questions in Section <ref>. In the appendices <ref> and <ref>, we list further examples of hypergraph C^*-algebras, nuclear and non-nuclear ones.

§ ACKNOWLEDGEMENTS

The definition of hypergraph C^*-algebras was a result from discussions between the second author and Simon Schmidt in 2018/2019. It was then conveyed to the third author within his Bachelor's thesis project <cit.> in 2021 and this article contains some of its main results (mainly Sections 3.1–3.4 and 4.1). Subsequently, the first author extended the theory in her Master's project <cit.> in 2022 (mainly Sections 3.5–3.7, 4.2, and 5–6).
Finally, Björn Schäfer wrote his Master's thesis on nuclearity of hypergraph C^*-algebras <cit.>; the main results can be found in <cit.>. All theses were written under the supervision of the second author, Moritz Weber. The PhD student Nicolas Faroß of Moritz Weber studied quantum symmetries of hypergraph C^*-algebras, see <cit.>. MW has been supported by the SFB-TRR 195, the Heisenberg program of the DFG and a joint OPUS-LAP grant with Adam Skalski. § PRELIMINARIES ON GRAPH C^*-ALGEBRAS §.§ Universal C*-algebras Before speaking about graph C^*-algebras we need to introduce some notations on universal C^*-algebras. See <cit.> for universal C^*-algebras and <cit.> for the general theory of C^*-algebras. Let E={x_i | i∈ I} be a set of elements indexed by some index set I. From these we can construct noncommutative monomials and noncommutative polynomials, by concatenation of letters from E. By adding another set E^*={x^*_i | i∈ I} which is disjoint from E and by defining an involution on E∪ E^* we obtain the free *-algebra P(E) on the generator set E. We can view relations as a subset of polynomials R⊂ P(E). By taking the two sided *-ideal J(R)⊂ P(E) generated by R we define the universal *-algebra A(E/R):=P(E)/J(R) as a quotient space. Recall that a C^*-seminorm on a *-algebra A is given by a map p:A→ [0,∞), such that * p(λ x)=|λ| p(x) and p(x+y)≤ p(x)+p(y) for all x,y∈ A and λ∈ℂ, * p(xy)≤ p(x)p(y) for all x,y∈ A, * and p(x^*x)=p(x)^2 for all x∈ A holds. We put ‖ x‖:=sup{p(x) | p is a C^*-seminorm on A(E| R) } . and if ‖ x‖<∞ for all x∈ A(E| R), then ‖·‖ is a C^*-seminorm. In that case, we define the universal C^*-algebra C^*(E| R) as the completion with respect to the norm ‖·x‖:=‖ x‖, where ·x∈ A(E| R)/{x∈ A(E| R) | ‖ x‖=0} is the equivalence class of x. Hence, we have C^*(E| R):=A(E| R)/{x∈ A(E| R) | ‖ x‖=0}^‖·‖. This is a C^*-algebra, see for instance <cit.>. By abuse of notation we write for the elements in C^*(E| R) just x∈ C^*(E| R). The following lemma provides a useful tool for proving that a universal C^*-algebra actually exists, see <cit.>. Let E={x_i | i∈ I} be a set of generators and R⊂ P(E) be relations. If a constant C exists such that p(x_i)<C holds for all i∈ I and all C^*-seminorms p on A(E| R), then ‖ x‖<∞ holds for all x∈ A(E| R). In that case, we say that the universal C^*-algebra exists. If a universal C^*-algebra is generated only by projections and partial isometries, it exists. If x is a projection, i.e. x=x^*=x^2, then p(x)^2=p(x^*x)=p(x) and hence p(x)≤ 1 for all C^*-seminorms p. If x is a partial isometry, i.e. x^*x is a projection, then p(x)^2=p(x^*x)≤ 1. Hence, we may choose C=1 in the previous lemma. Note that a universal C^*-algebra may exist, but it could still be the case that it is trivial (i.e. zero). Universal C^*-algebras satisfy the so called universal property; this is a tool for obtaining *-homomorphisms between C^*-algebras: Let E={x_i | i∈ I} be a set of generators and R⊂ P(E) be relations. We say that elements {y_i | i∈ I} in some *-algebra B satisfy the relations R, if every polynomial p∈ R vanishes, when we replace x_i with y_i. In that case, there exists a unique *-homomorphism ϕ:C^*(E| R)→ B, sending x_i to y_i for all i∈ I. §.§ Graph C*-algebras In this section we recall the definition of graph C^*-algebras. For more details see <cit.>. A directed graph Γ=(V,E,r,s) consists of two countable sets V, E and functions r:E→ V, s:E→ V. The elements of V are called vertices and the elements of E are called edges. 
The map r is named range map and the map s is named source map. We say that v∈ V is a sink iff the set s^-1(v) is empty and we call v a source iff r^-1(v) is empty. Throughout this paper we always restrict to finite directed graphs unless specified otherwise. Let Γ=(V,E,r,s) be a (finite) graph. The graph C^*-algebra C^*(Γ) of the graph Γ is the universal C^*-algebra generated by mutually orthogonal projections p_v for all v∈ V and partial isometries s_e for all e∈ E such that the following relations hold: (CK1) s_e^*s_f=δ_efp_r(e) for all e,f∈ E. (CK2) p_v=∑_e∈ E s(e)=vs_es_e^* for all v∈ V, in case that v is no sink. The relations are called Cuntz-Krieger relationsCuntz-Krieger relations. Elements {S_e, P_v | e ∈ E, v ∈ V} in a C^*-algebra A fulfilling the relations are called Cuntz-Krieger Γ-family. Graph C^*-algebras exist (as universal C^*-algebras) due to Lemma <ref>. From Relation (CK1) it immediately follows that s_e=s_ep_r(e) and s_e=p_s(e)s_e hold for all e∈ E. § DEFINITION AND EXAMPLES OF HYPERGRAPH C^*-ALGEBRAS In this section, we associate a C^*-algebra to a finite hypergraph; later in this section, we will also give a definition for an infinite hypergraph. We study some first properties and we give some first examples. We establish their relation to graph C^*-algebras and to ultragraph C^*-algebras. §.§ Definition of hypergraph C*-algebras The main idea of hypergraphs is to extend the source and range map to power sets, such that edges can connect sets of vertices instead of just single vertices. We define directed hypergraphs as follows, cf also <cit.>. A directed hypergraph HΓ=(V,E,r,s) consists of two countable sets V and E and two maps r,s:E→𝒫(V)\{∅}, where 𝒫(V) denotes the powerset of V. The set V contains vertices, while the set E contains hyperedges. We call a vertex v ∈ V a sourceSource!Hypergraph iff v ∉ r(e) for all e ∈ E and we call it a sinkSink!Hypergraph iff v ∉ s(e) for all e ∈ E. In this article, all hypergraphs are finite, i.e. the set of vertices and edges are both finite, unless explicitly stated otherwise. Recall that for two projections p,q the relation p≤ q holds if and only if pq=p=qp holds. The finite sum of projections is a projection if and only if the projections are mutually orthogonal. Moreover, if p_i≤ q for all i∈ I and q≤∑ p_i for a projection q and finitely many mutually orthogonal projections p_i, we infer q=∑ p_i. Let HΓ=(V,E,r,s) be a (finite) hypergraph. The hypergraph C^*-algebra C^*(HΓ) is the universal C^*-algebra generated by mutually orthogonal projections p_v for all v∈ V and partial isometries s_e for all e∈ E such that the following relations hold: (HR1) s_e^*s_f=δ_ef∑_v∈ r(e)p_v for all e,f∈ E. (HR2a) s_es_e^*≤∑_v∈ s(e)p_v for all e∈ E. (HR2b) p_v ≤∑_e∈ E v∈ s(e)s_es_e^* for all v ∈ V, where v is no sink. We call these relations the hypergraph relations. Elements {S_e, P_v} in a C^*-algebra A fulfilling the hypergraph relations are called Cuntz-Krieger HΓ-family. Hypergraph C^*-algebras exist (as universal C^*-algebra) by Lemma <ref>. §.§ Hypergraph C^*-algebras generalize graph C^*-algebras Note that every graph Γ=(V,E,r,s) is also a hypergraph HΓ=(V,E,r',s') by defining r':E→𝒫(V), e↦{r(e)} and s':E→𝒫(V),e↦{s(e)}. In other words, graphs are hypergraphs with the restriction | s(e)|=| r(e)|=1 for all edges e∈ E. We show that every graph C^*-algebra is also a hypergraph C^*-algebra. Consider a graph Γ=(V,E,r,s) and interpret it as a hypergraph HΓ=(V,E,r',s') in the sense as described before. 
For our graph C^*-algebra we write C^*(Γ)=C^*(s̃_e, e∈ E ; p̃_v, v∈ V | p̃_vp̃_w=0, v≠w; s̃_e^*s̃_f=δ_efp̃_r(e) ; ∑_e∈ E s(e)=ws̃_es̃_e^*=p̃_w) where s̃_e is a partial isometry for all e∈ E and p̃_v is a projection for all v∈ V. Then we have C^*(Γ)≅ C^*(HΓ). First, we check that the generators of C^*(Γ) fulfill the relations of C^*(HΓ). Since the only element in the set r'(e) is the vertex r(e) we have s̃_e^*s̃_f=δ_efp̃_r(e)=δ_ef∑_v∈ V v∈ r'(e)p̃_v. We see that Relation (HR1) is fulfilled. For the same reasons it follows for v∈ V with s^-1(v)≠∅ that p̃_v =∑_e∈ E v=s(e)s̃_es̃_e^* =∑_e∈ E v∈ s'(e)s̃_es̃_e^*. Hence, Relation (HR2b) is satisfied, while (HR2a) follows from Remark <ref>. Conversely, the generators of C^*(HΓ) satisfy the Relations (CK1) and (CK2) of C^*(Γ): Using the same argument as in the above direction, we have s_e^*s_f=δ_ef∑_v∈ V v∈ r'(e)p_v=δ_efp_r(e) . We see that Relation (CK1) is satisfied. To show that (CK2) is fulfilled we need Relations (HR2a) and (HR2b). Let v∈ V with s^-1(v)≠∅ and hence there exists at least one f∈ E with s(f)=v. With (HR2a) it follows s_es_e^*≤∑_v∈ V v∈ s'(e) p_v=p_s(e) and using Relation (HR2b) we have p_v≤∑_e∈ E v∈ s'(e) s_es_e^*= ∑_e∈ E v= s(e) s_es_e^*. By Remark <ref> we conclude p_v =∑_e∈ E v=s(e)s_es_e^*. By the universal properties of C^*(Γ) and C^*(HΓ) we find an isomorphism mapping s̃_e to s_e and p̃_v to p_v. §.§ Hypergraph C^*-algebras are unital The next statement is known for graph C^*-algebras (associated to finite graphs), but it is still true for the generalization to hypergraph C^*-algebras. Recall that all our hypergraphs are finite. For every hypergraph HΓ=(V,E,r,s) and hypergraph C*-algebra C^*(HΓ) we have that ∑_v∈ V p_v is the unit element in C^*(HΓ) and therefore, ∑_v∈ V p_v=1. Using Relation (HR1), we have s_e∑_v∈ Vp_v =s_es_e^*s_e∑_v∈ Vp_v =s_e∑_v∈ r(e)p_v∑_v∈ Vp_v =s_e∑_v∈ r(e)p_v =s_e . Using the same trick as in the argument before but with Relation (HR2a), we also have (∑_v∈ Vp_v)s_e=s_e. Notice that we have ∑_v∈ Vp_vp_w=p_w=p_w∑_v∈ Vp_v for all w∈ V and (∑_v∈ Vp_v)^2=∑_v∈ Vp_v=(∑_v∈ Vp_v)^*. We conclude that ∑_v∈ Vp_v is the unit element in C^*(HΓ). Next, we show that the analogous of Remark <ref> holds true for hypergraph C^*-algebras. Let HΓ=(V,E, r, s) be a hypergraph. For each Cuntz-Krieger HΓ-family {p_v, s_e } it holds (∑_v∈ s(e) p_v) s_e= s_e = s_e(∑_v∈ r(e) p_v). In particular, s_es_f^*=0, if r(e)∩ r(f)=∅. By (HR2a), we have (∑_v∈ s(e) p_v)s_e= (∑_v∈ s(e) p_v)s_es_e^*s_e=s_es_e^*s_e=s_e and by (HR1), we have s_e(∑_v∈ r(e) p_v)=s_es_e^*s_e=s_e. §.§ First examples In this section, we list a number of first examples. Further, more interesting ones may be found in Section 4. We can express the Toeplitz algebra as a hypergraph C^*-algebra. We can even consider two different hypergraphs as we see in the following example. Consider the hypergraphs HΓ_1 given by V_1={v,w}, E_1={e} and s(e)={w}, r(e)={v,w}, as well as HΓ_2 given by V_2={v,w}, E_2={e} and s(e)={v,w}, r(e)={w}, depicted in Fig. <ref>. Both associated C^*-algebras are isomorphic to the Toeplitz algebra 𝒯. For the first hypergraph, HΓ_1, we get by the hypergraph relations s_e^*s_e=p_v+p_w, s_es_e^*=p_w (as s_es_e^*≤ p_w≤ s_es_e^*). Since p_v+p_w=1 by Lemma <ref>, s_e is an isometry. By the universal property of the Toeplitz algebra, which we view as the universal C^*-algebra generated by an isometry u, we get a *-homomorphism ϕ:𝒯→ C^*(HΓ_1) defined by u ↦ s_e, 1 ↦ p_v+p_w=s_e^*s_e. 
On the other hand we can define a Cuntz-Krieger HΓ_1-family in 𝒯 as follows: S_e:=u, P_w:=uu^*, P_v:=1-uu^*. The universal property then yields a *-homomorphism ψ:C^*(HΓ_1)→𝒯 defined by s_e↦ S_e, p_w ↦ P_w, p_v ↦ P_v. We conclude that ϕ∘ψ=𝕀_C^*(HΓ_1) and ψ∘ϕ =𝕀_𝒯. Thus we get the required isomorphism. The proof for the second hypergraph follows similarly, using that s_e^* is an isometry as s_es_e^*=p_v+p_w=1. Another basic example of a hypergraph C^*-algebra is the Cuntz algebra 𝒪_n <cit.>. Let n∈ℕ and n≥ 2. Consider the hypergraph HΓ with vertices V={v_1,...,v_n}, edges E={e_1,...,e_n} and r(e_i)={v_1,...,v_n}, s(e_i)={v_i} for all i=1,...,n. Then C^*(HΓ)≅𝒪_n. By the relations of the hypergraph C^*-algebra C^*(HΓ) we have s_e_i^*s_e_j=δ_ij∑_k=1^n p_v_k=δ_ij for all i,j=1,...,n, using Lemma <ref>, and s_e_is_e_i^*≤ p_v_i≤ s_e_is_e_i^*. Hence, the isometries s_e_i satisfy ∑_i=1^ns_e_is_e_i^*=∑_i=1^n p_v_i=1 and we obtain ϕ:𝒪_n→ C^*(HΓ) by the universal property. We obtain an inverse *-homomorphism, since the generators S_e_i:=S_i∈𝒪_n and the projections P_v_i:=S_iS_i^* satisfy the relations of C^*(HΓ). §.§ Paths in hypergraph C^*-algebras Graph C^*-algebras are spanned by words in paths. This cannot be generalized directly, as paths in hypergraphs are of a different quality – it is a priori unclear how to define them and we give various definitions. Let μ=(μ_1...μ_n) be a tuple of edges in HΓ. Then we call μ * a perfect path, if s(μ_j+1)=r(μ_j) for all j∈{1,...,n-1}; * a quasi perfect path, if s(μ_j+1)⊆ r(μ_j) for all j∈{1,...,n-1}; * a partial path, if s(μ_j+1)∩ r(μ_j)≠∅ for all j∈{1,...,n-1}. We define s_μ:=s_μ_1...s_μ_n and s_v:=p_v. Restricting to a particularly nice class of paths, we may deduce a result in analogy to graph C^*-algebras. Let HΓ=(V, E, r, s) be a hypergraph with only perfect and quasi perfect paths and a Cuntz-Krieger HΓ-family {p_v, s_e}. Then C^*(HΓ) is the closed linear span of {s_μs_ν^* | μ=(μ_1...μ_k), ν=(ν_1...ν_m), μ_i,ν_j∈ V∪ E for all i,j, and r(μ)∩ r(ν)≠∅}. Let μ=(μ_1,μ_2) be a quasi perfect path. Then (∑_v∈ r(μ_1)p_v)≥(∑_v∈ s(μ_2)p_v) and we have (∑_v∈ r(μ_1)p_v)s_μ_2= (∑_v∈ r(μ_1)p_v)(∑_v∈ s(μ_2)p_v)s_μ_2=s_μ_2 by Proposition <ref>. Hence, if ν=(ν_1,ν_2) is another quasi perfect path, then s_ν^*s_μ=s_ν_2^*s_ν_1^*s_μ_1s_μ_2=δ_ν_1μ_1s_ν_2^*(∑_v∈ r(μ_1)p_v)s_μ_2=δ_ν_1μ_1s_ν_2^*s_μ_2=δ_ν_1μ_1δ_ν_2μ_2(∑_v∈ r(μ_2)p_v). An iteration yields for quasi perfect paths μ, ν, α, β that (s_μ s_ν^*)(s_α s_β^*) equals s_μα's_β^* if α = να', equals s_μ s_βν'^* if ν=αν', equals ∑_v∈ I s_μ p_v s_β^* if ν =α (for some index set I), and equals 0 else. For a (quasi) perfect path μ = (μ_1,…, μ_n) the element s_μ is always a partial isometry. This is not the case if we deal with partial paths: the projections onto vertices then no longer disappear in chains of partial isometries. §.§ (Infinite) hypergraph C^*-algebras generalize ultragraph C*-algebras Ultragraph C*-algebras, first defined by <cit.>, form a subclass of hypergraph C*-algebras, as we will show in this subsection. The main intention of Tomforde's construction was to find a unified approach to graph C^*-algebras and Exel-Laca algebras generalizing Cuntz-Krieger algebras to the infinite setting. We adapt the definition in order to define hypergraph C^*-algebras for infinite hypergraphs. A (directed) ultragraph UΓ=(V,E,r,s) is defined by a countable set of vertices V and a countable set E of edges together with a source map s:E→ V and a range map r:E→𝒫(V)\{∅}. Hence, an ultragraph is a (not necessarily finite) hypergraph with the condition | s(e)|=1 for all e∈ E.
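The distinction between graphs, ultragraphs and hypergraphs is purely combinatorial and can be read off from the maps s and r. The following Python sketch is only an illustration of ours (the class and method names are chosen ad hoc and carry no C^*-algebraic content): it encodes a finite hypergraph and classifies it according to the sizes |s(e)| and |r(e)|, and it determines sources and sinks in the sense defined above.

```python
from dataclasses import dataclass, field

@dataclass
class Hypergraph:
    vertices: set
    # edge name -> (source set, range set); both are nonempty subsets of the vertex set
    edges: dict = field(default_factory=dict)

    def add_edge(self, name, source, range_):
        src, rng = set(source), set(range_)
        assert src and rng and src <= self.vertices and rng <= self.vertices
        self.edges[name] = (src, rng)

    def sources(self):
        # a vertex v is a source iff v is not in r(e) for any edge e
        return {v for v in self.vertices
                if all(v not in rng for (_, rng) in self.edges.values())}

    def sinks(self):
        # a vertex v is a sink iff v is not in s(e) for any edge e
        return {v for v in self.vertices
                if all(v not in src for (src, _) in self.edges.values())}

    def kind(self):
        # graph: |s(e)| = |r(e)| = 1; ultragraph: |s(e)| = 1; otherwise proper hypergraph
        if all(len(src) == 1 == len(rng) for (src, rng) in self.edges.values()):
            return "graph"
        if all(len(src) == 1 for (src, _) in self.edges.values()):
            return "ultragraph"
        return "hypergraph"

# The Toeplitz hypergraph HGamma_1 from the example above: s(e) = {w}, r(e) = {v, w}
HGamma1 = Hypergraph(vertices={"v", "w"})
HGamma1.add_edge("e", source={"w"}, range_={"v", "w"})
print(HGamma1.kind())    # 'ultragraph' -- |s(e)| = 1
print(HGamma1.sinks())   # {'v'} -- v emits no edge

# One edge whose source and range are the whole vertex set (cf. the non-nuclearity section)
HGamma2 = Hypergraph(vertices={"v1", "v2"})
HGamma2.add_edge("e", source={"v1", "v2"}, range_={"v1", "v2"})
print(HGamma2.kind())    # 'hypergraph' -- |s(e)| = 2
```

In particular, the first hypergraph of the Toeplitz example is in fact an ultragraph, while the second example above is a proper hypergraph.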
To clarify the differences between graphs, ultragraphs and hypergraphs, we illustrate the possible edges in graphs, ultragraphs and hypergraphs in Figure <ref>. Let HΓ=(V,E,r,s) be a hypergraph. Let 𝒱_0:={{x}| x∈ V}∪{s(e), r(e) | e ∈ E}⊂𝒫(V) and let 𝒱 be the smallest subcollection of 𝒫(V) containing 𝒱_0 which is closed under finite unions and finite intersections. We call the sets A ∈𝒱 generalized verticesGeneralized vertices. With this notation we can give a slightly different definition of a Cuntz-Krieger family involving the generalized vertices. This definition is adapted from <cit.>. Let HΓ=(V, E, r, s) be a (not necessarily finite) hypergraph. A generalized Cuntz-Krieger HΓ-familyCuntz-Krieger family!Generalized is a collection of partial isometries {s_e | e ∈ E} and orthogonal projections {p_A | A ∈𝒱} such that (GR0) p_∅=0, p_Ap_B=p_A ∩ B and p_A ∪ B=p_A+p_B-p_A ∩ B for all A,B ∈𝒱; (GR1) s_e^*s_f=δ_efp_r(e) for all e,f ∈ E; (GR2a) s_es_e^*≤ p_s(e) for all e ∈ E; (GR2b) p_{v}≤∑_e ∈ E, v ∈ s(e)s_es_e^* for all v ∈ V which emit at least one and at most finitely many edges (i.e. ∅≠ s(e) and | s(e)|<∞). The universal C^*-algebra C^*(HΓ) generated by these generators and relations is called a hypergraph C^*-algebra. For Definition <ref>, the clue is that this definition also makes sense for infinite hypergraphs. In the infinite case, we have to ensure that all sums of projections are well defined. Thus, all infinite sums must be avoided. Relation (GR2b) is restricted accordingly. But for edges with infinitely many vertices in their source or range, we have to adjust the other hypergraph relations as well, due to the possibly infinite sums ∑_v ∈ r(e)p_v and ∑_v ∈ s(e)p_v. The approach involving generalized vertices forces the existence of the required projections in a natural way. And it turns out that in the finite case, both definitions still coincide. Let HΓ be a finite hypergraph. Then the Cuntz-Krieger families in Definition <ref> and Definition <ref> generate the same C^*-algebra. Given a Cuntz-Krieger family as in Def. <ref> of projections p_v, v∈ V and partial isometries s_e, e∈ E, the sums p_A:=∑_v ∈ Ap_v are projections for all generalized vertices A ∈𝒱 and we obtain a generalized Cuntz-Krieger HΓ-family as in Def. <ref>. Conversely, in a generalized Cuntz-Krieger HΓ-family as in Def. <ref>, the projections p_v:=p_{v}, v∈ V and the partial isometries s_e, e∈ E form a Cuntz-Krieger HΓ-family as in Def. <ref>. If HΓ is an ultragraph, then C^*(HΓ) is an ultragraph C^*-algebra in Tomforde's sense <cit.> and vice versa. This class is nuclear, see <cit.>. Any ultragraph C^*-algebra is Morita equivalent to a graph C^*-algebra. In particular, all ultragraph C^*-algebras are nuclear. In Lemma <ref> we showed, that hypergraph C^*-algebras are always unital, if the underlying hypergraph is finite. This fails to be true for an infinite number of vertices, as an infinite sum of projections cannot converge in norm. However, as in <cit.>, one can show that C^*(HΓ) is unital if and only if V ⊂{⋃_i=1^n(⋂_e∈ X_ir(e)) ∪⋃_i=1^m(⋂_e∈ Y_is(e))∪ F | X_i, Y_i ⊆ E finite, F⊆ V finite}. The proof of this follows in exactly the same way as in <cit.>, except for the construction of the approximate unit. §.§ Decomposition of ranges Multiple hypergraphs can have a similar corresponding C^*-algebra. By decomposing the range of an edge, we give a concrete way to construct new hypergraphs while leaving the corresponding C^*-algebra invariant. 
Also, this gives us some information about the relation between graph and ultragraph C^*-algebras and it shows the crucial differences to hypergraphs. The idea of the proof is based on <cit.>, where it is shown that each (infinite) ultragraph C^*-algebra is Morita equivalent to a graph algebra. Decomposition of ranges Let HΓ=(V,E,r,s) be a finite hypergraph. Define the hypergraph H̃Γ̃=(Ṽ,Ẽ,r̃,s̃) as Ṽ:=V, Ẽ:={(e,v) | e ∈ E, v ∈ r(e)}, r̃((e,v)):=v, s̃((e,v)):=s(e). The corresponding hypergraph C^*-algebras are isomorphic, i.e. C^*(HΓ)≅ C^*(H̃Γ̃). In particular it holds that r̃:Ẽ→Ṽ (rather than r̃:Ẽ→𝒫(Ṽ)\{∅}), i.e. we have |r̃(ẽ)| =1 for all ẽ∈Ẽ. Let {q_v | v ∈Ṽ}, {t_α | α∈Ẽ} be the universal Cuntz-Krieger H̃Γ̃-family. We define P_v :=q_v ∀ v ∈ V, S_e :=∑_v ∈ r(e)t_(e,v)∀ e ∈ E. The elements {P_v | v ∈ V} are mutually orthogonal projections and a quick calculation shows that {S_e | e ∈ E} are partial isometries with mutually orthogonal ranges. Together they form a Cuntz-Krieger HΓ-family in C^*(H̃Γ̃), as we see in the following. For (HR1) of C^*(HΓ), check that (HR1) of C^*(H̃Γ̃) implies S_e^*S_f=∑_v ∈ r(e)∑_w ∈ r(f)t_(e,v)^*t_(f,w)=δ_ef∑_v ∈ r(e)q_v=δ_ef∑_v ∈ r(e)P_v. For (HR2a) of C^*(HΓ), using that the ranges of (e,w) and (e,z) for distinct vertices w, z are disjoint we get using Proposition <ref> that t_(e,w)t_(e,z)^*=0 for w ≠ z. Since Relation (HR2a) of C^*(H̃Γ̃) implies that t_(e,w)t_(e,w)^*≤∑_v ∈ s(e)q_v for all w ∈ r(e) we get S_eS_e^*=∑_w ∈ r(e)t_(e,w)∑_z ∈ r(e)t_(e,z)^* =∑_w ∈ r(e)t_(e,w)t_(e,w)^* ≤∑_v ∈ s(e)q_v =∑_v ∈ s(e)P_v. For (HR2b) of C^*(HΓ), using (HR2b) of C^*(H̃Γ̃) and the orthogonality of the ranges of (e,w) and (e,z) for distinct vertices w and z, we get P_v =q_v ≤∑_α∈Ẽ, v ∈ s(α)t_α t_α^* =∑_e ∈ E, v ∈ s(e) ∑_w ∈ r(e)t_(e,w) t_(e,w)^* =∑_e ∈ E, v ∈ s(e) ∑_w ∈ r(e)t_(e,w)∑_z ∈ r(e)t_(e,z)^* =∑_e ∈ E, v ∈ s(e)S_eS_e^*. Hence all hypergraph relations are fulfilled and we thus get a *-homomorphism ϕ:C^*(HΓ)→ C^*(H̃Γ̃) which maps the canonical generators s_e ↦ S_e and p_v ↦ P_v. To construct the inverse map we define the elements Q_v :=p_v ∀ v ∈Ṽ, T_(e,v) :=s_ep_v ∀ (e,v) ∈Ẽ. Clearly, the elements Q_v are mutually orthogonal projections and T_(e,v) is a partial isometry for each (e,v) ∈Ẽ, since s_ep_vs_e^* is a projection, using (HR1). We check that these elements form a Cuntz-Krieger H̃Γ̃-family in C^*(HΓ). For (HR1) of C^*(H̃Γ̃), relation (HR1) of C^*(HΓ) yields T_(e,v)^*T_(f,w) =p_vs_e^*s_fp_w =δ_ef p_v(∑_z∈ r(e)p_z)p_w =δ_efδ_vwp_v =δ_(e,v),(f,w)Q_v. For (HR2a) of C^*(H̃Γ̃), using the definition of partial isometries and the order relation of projections we get by applying (HR2a) of C^*(HΓ): T_(e,v)T_(e,v)^* =s_ep_vp_vs_e^* ≤ s_es_e^* ≤∑_w ∈ s(e)p_w =∑_w ∈ s((e,v))Q_w. For (HR2b) of C^*(H̃Γ̃), we get by (HR2b) and (HR1) of C^*(HΓ): Q_v =p_v ≤∑_e ∈ E, v ∈ s(e)s_es_e^* ≤∑_e ∈ E, v ∈ s(e)∑_v ∈ r(e)s_ep_vs_e^* = ∑_e ∈ E, v ∈ s(e)∑_v ∈ r(e)T_(e,v)T_(e,v)^* =∑_α∈Ẽ, v ∈ s(α)T_α T_α^*. The universal property then yields a *-homomorphism ψ:C^*(H̃Γ̃)→ C^*(HΓ) which is inverse to ϕ. Instead of a complete decomposition of the range into its single vertices we could have also disassembled it into a disjoint union of nonempty sets, i.e. r(e)=ℰ_1∪…∪ℰ_n and associate to each set ℰ_j the edge (e,ℰ_j). Let HΓ=(V,E,r,s) be a finite hypergraph. For each e ∈ E, let r(e)=ℰ_1∪…∪ℰ_n_e for nonempty disjoint sets ℰ_j and n_e ∈. Define the hypergraph H̃Γ̃=(Ṽ,Ẽ,r̃,s̃) as Ṽ:=V, Ẽ:={(e,ℰ_j) | e ∈ E, j=1,…, n_e}, r̃((e,ℰ_j)):=ℰ_j, s̃((e,ℰ_j))):=s(e). 
The corresponding hypergraph algebras are isomorphic, i.e. C^*(HΓ)≅ C^*(H̃Γ̃). Similar to the proof of Proposition <ref>. We only get the decomposition for ranges – the same approach for sources is not possible. For example the element p_vs_e for a vertex v ∈ s(e) is in general no partial isometry. Also, if it was possible to decompose ranges and sources, we would obtain a graph Γ̃ such that C^*(HΓ)≅ C^*(Γ̃) given any hypergraph Γ contradicting the existence of non-nuclear hypergraph C^*-algebras (see the next section). The C^*-algebra of a finite ultragraph is isomorphic to a graph C^*-algebra. Use the decomposition of ranges in order to obtain a graph Γ̃ with C^*(HΓ)≅ C^*(Γ̃). We can reverse the above construction and merge edges with similar sources and disjoint ranges. We state the corollary in case of two edges, but it can directly be generalized to any finite number of edges by iteration. Let HΓ=(V,E,r,s) be a finite hypergraph. Consider e,f ∈ E with s(e)=s(f) and r(e)∩ r(f)=∅. The hypergraph H̃Γ̃ given by Ṽ:=V, Ẽ:=(E∖{e,f})∪{g}, s̃(h):=s(h) ∀ h ∈ E∖{e,f}, s̃(g):=s(e), r̃(h):=r(h) ∀ h ∈ E∖{e,f}, r̃(g):=r(e)∪ r(f) generates an isomorphic hypergraph C^*-algebra. We apply Proposition <ref> to HΓ and H̃Γ̃. Both yield the same hypergraph, which gives the required isomorphism. § NON-NUCLEAR HYPERGRAPH C^*-ALGEBRAS In this section, we show that the class of hypergraph C^*-algebras is strictly larger than the class of graph C^*-algebras. All graph C^*-algebras are nuclear and even all ultragraph C^*-algebras are nuclear as each ultragraph C^*-algebra is Morita equivalent to a graph C^*-algebra, see Prop. <ref>. For hypergraph C^*-algebras this is not the case as we will see in the following. In analogy to the group case, we introduce the following notion. We call a hypergraph amenable, if the corresponding hypergraph C^*-algebra is nuclear. §.§ Basic examples of non-amenable hypergraphs In the following, we view the algebra C(S^1) of continuous functions on the circle as the universal C^*-algebra C(S^1)≅ C^*(u,1| uu^*=1=u^*u) and ℂ^n≅ C^*(p_i, i=1,...,n | p_i^*=p_i=p_i^2 ; ∑_i=1^np_i=1). The unital (full) free product of these two C^*-algebras is then given by the universal C^*-algebra generated by a unitary u and n projections p_1, …,p_n summing up to one; the units are identified. Note that we have C(S^1)*ℂ^n≅ C^*(ℤ)*C^*(ℤ/nℤ)≅ C^*(ℤ * (ℤ/nℤ)), and thus, this C^*-algebra is not nuclear (<cit.>) as long as n≥ 2, since ℤ * (ℤ/nℤ) is not amenable; the latter group contains the free group on two generators, see <cit.>, which is non-amenable, and we use that closed subgroups of amenable, compact groups are amenable, <cit.>. Let n∈ℕ and consider the hypergraph H̃Γ̃_̃ñ with vertices {v_1,…,v_n}, edges {e_1} and r(e_1)={v_1,…,v_n}, s(e_1)={v_1,…,v_n}. We have C^*(H̃Γ̃_̃ñ)≅ C(S^1)*_ℂℂ^n. In particular, C^*(H̃Γ̃_̃ñ) is not nuclear and H̃Γ̃_n is non-amenable as soon as n≥ 2. For the hypergraph C^*-algebra C^*(H̃Γ̃_̃ñ) we obtain s_e_1^*s_e_1=∑_i=1^n p_v_i=1 by Relation (HR1) and Prop. <ref>. Further, by Relations (HR2a) and (HR2b) we have s_e_1s_e_1^*≤∑_i=1^n p_v_i=1 and s_e_1s_e_1^*≥ p_v_i for all i=1,...,n. This implies s_e_1s_e_1^*=∑_i=1^np_v_i=1. Hence, s_e_1 is a unitary and we obtain a *-homomorphism ϕ: C(S^1)*_ℂℂ^n→ C^*(H̃Γ̃_̃ñ), sending u to s_e_1 and p_i to p_v_i for all i=1,…,n. It is an isomorphism, since conversely u, p_1, …, p_n∈ C(S^1)*ℂ^n satisfy the relations of C^*(H̃Γ̃_̃ñ). We now extend the above example to a hypergraph with m≥ 2 edges. 
Recall the definition of the Cuntz algebra 𝒪_m=C^*(S_1,…,S_m | S_i^*S_i=1 for all i=1,…,m; ∑_i=1^m S_iS_i^*=1), see <cit.>. Let n,m∈ℕ with m≥ 2. Consider the hypergraph HΓ with vertices {v_1,...,v_n}, edges {e_1,...,e_m} and r(e_i)={v_1,...,v_n}, s(e_i)={v_1,...,v_n} for all i=1,...,m. We have C^*(HΓ)≅𝒪_m*_ℂℂ^n and hence HΓ is non-amenable, if n≥ 2. Using the relations of the associated hypergraph C^*-algebra C^*(HΓ), we obtain s_e_i^*s_e_j=δ_ij∑_k=1^n p_v_k=δ_ij for all i,j=1,...,m s_e_is_e_i^*≤∑_j=1^n p_v_j=1 for all i=1,...,m p_v_i≤∑_j=1^ms_e_js_e_j^* for all i=1,...,n. Hence, the elements s_e_i are isometries with 1=∑ p_v_i≤∑_j s_e_js_e_j^*≤ 1 and we obtain a *-homomorphism ϕ:𝒪_m*_ℂℂ^n→ C^*(HΓ) that sends S_i to s_e_i for all i=1,...,m and p_i to p_v_i for all i=1,...,n. Conversely, the Relations (HR1)-(HR2b) are satisfied in 𝒪_m*_ℂℂ^n, and we obtain a *-homomorphism that is inverse to ϕ. §.§ Construction of non-amenable hypergraphs We can now use the above non-amenable hypergraph H̃Γ̃_n from Prop. <ref> to construct further non-amenable hypergraphs. This can be achieved by extending the hypergraph H̃Γ̃_n appropriately. The canonical generators of C^*(H̃Γ̃_n) are in the following denoted by {t_f}∪{q_v_1,…, q_v_n}. The crucial idea is then to use that nuclearity transfers to quotients <cit.>. Let n≥ 3. Let HΓ be the hypergraph defined by V={w,v_1,…,v_n} and E={e,f} with s(e) ={w}, r(e)={v_n}, s(f) ={v_1,…, v_n}, r(f)={v_1,…, v_n}. Then HΓ is non-amenable. A Cuntz-Krieger HΓ-family in C^*(H̃Γ̃_n-1) is defined by P_w :=0, P_v_n:=0, P_v_i:= q_v_i for i≤ n-1, S_e:=0, S_f:=t_f, which follows by straightforward calculations. This leads to a surjective *-homomorphism ϕ:C^*(HΓ) → C^*(H̃Γ̃_n-1) By Proposition <ref> we know that the C^*-algebra C^*(H̃Γ̃_n-1) is non nuclear. Thus, C^*(HΓ) has a non-nuclear quotient and is thus also non-nuclear. There are multiple ways to create further non-amenable hypergraphs with the above technique. We could add multiple vertices to the source/range of e, attach more edges to the hypergraph, and so on. The main idea is always to set the partial isometry corresponding to the new edge equal to zero and use the hypergraph relations to determine which projections must be zero. Then we consider the hypergraph H̃Γ̃_m with edge f, were we delete all vertices, whose projection is zero, from the range and source. The resulting Cuntz-Krieger family leads to a surjective *-homomorphism onto a non-nuclear C^*-algebra C^*(H̃Γ̃_m). We add further such examples in Appendix <ref>. Nevertheless, the above illustration is somewhat misleading. The examples originate from manipulations of the non-amenable hypergraph. Thus, at first glance, the non-amenable subhypergraph H̃Γ̃_n seems to be crucial. But in fact the quotient given by C^*(H̃Γ̃_m) for some m <n is crucial, as highlighted in the following figure: To get a better graphical understanding we express the above technique by concrete requirements on the hypergraph. The idea is to extract a non-amenable part of the hypergraph with slight modifications on the edges by deleting vertices from its source and range. This gives an easy way to check non-amenability for a given hypergraph without using the corresponding C^*-algebra. Let HΓ=(V, E, r,s) be a finite hypergraph. Assume there exist Ṽ⊆ V and Ẽ⊆ E such that Ṽ∩ r(e)≠∅ and Ṽ∩ s(e)≠∅ holds if and only if e∈Ẽ. Let H̃Γ̃=(Ṽ, Ẽ, r̃, s̃) be the hypergraph with r̃(e):=r(e)∩Ṽ, s̃(e):=s(e)∩Ṽ. Then C^*(H̃Γ̃) is a quotient of C^*(HΓ). 
In particular, if H̃Γ̃ is non-amenable, then HΓ is non-amenable, since nuclearity passes to quotients. We can define a Cuntz-Krieger HΓ-family in C^*(H̃Γ̃) by P_v := q_v for v ∈Ṽ and P_v := 0 for v ∈ V∖Ṽ, as well as S_e := t_e for e ∈Ẽ and S_e := 0 for e ∈ E∖Ẽ. Since all edges whose sources and ranges intersect with Ṽ lie in Ẽ, the hypergraph relations of the Cuntz-Krieger family follow directly from the hypergraph relations in C^*(H̃Γ̃). Indeed, as the projections and partial isometries corresponding to vertices and edges not in H̃Γ̃ are 0, they can be added without changing any relations. By the universal property we obtain a *-homomorphism from C^*(HΓ) onto C^*(H̃Γ̃) which is surjective as all generators of C^*(H̃Γ̃) lie in its range. The previous constructions aim at checking whether a given hypergraph is non-amenable by deleting and manipulating edges and vertices. The question emerging now is how to attach a non-amenable hypergraph to an arbitrary hypergraph in order to obtain a non-amenable hypergraph. The technique below defines some kind of product between two hypergraph C^*-algebras. Let HΓ=(V_Γ,E_Γ,r_Γ,s_Γ) and HΔ=(V_Δ,E_Δ,r_Δ,s_Δ) be finite hypergraphs. For fixed f ∈ E_Γ and w ∈ V_Δ let the hypergraph HΘ be given by V_Θ=V_Γ∪ V_Δ, E_Θ=E_Γ∪ E_Δ, r_Θ(e)= r_Γ(e) for e ∈ E_Γ and r_Θ(e)= r_Δ(e) for e ∈ E_Δ, as well as s_Θ(e):= s_Γ(e) for e ∈ E_Γ∖{f}, s_Θ(f):= s_Γ(f)∪{w} and s_Θ(e):= s_Δ(e) for e ∈ E_Δ. Then C^*(HΓ) is a quotient of C^*(HΘ). In particular, if HΓ is non-amenable, then HΘ is non-amenable. We define a Cuntz-Krieger HΘ-family in C^*(HΓ) by letting all projections and partial isometries corresponding to vertices and edges in HΔ be zero and by identifying the elements corresponding to vertices and edges in HΓ with the generators of C^*(HΓ). The point is that by letting all elements corresponding to C^*(HΔ) be zero, we "delete" the new vertex in the source and obtain HΓ. We denote the elements in the constructed Cuntz-Krieger HΘ-family by T_e and Q_v. This is indeed a Cuntz-Krieger HΘ-family. The crucial part is the linking edge f and the vertex w. If we consider the hypergraph relations for these we get: T_f^*T_f =s_f^*s_f=p_r_Γ(f)=p_r_Θ(f)=Q_r_Θ(f), T_fT_f^* =s_fs_f^*≤ p_s_Γ(f)=p_s_Γ(f)+0=Q_s_Γ(f)+Q_w=Q_s_Θ(f), Q_w =0≤∑_e ∈ E_Θ, w ∈ s_Θ(e)T_eT_e^*. The *-homomorphism given by the universal property is clearly surjective, as all generators of C^*(HΓ) are in the range. In the previous proposition, we added the vertex w to the source of the edge f. Similarly we could have also added the vertex w to the range of f. In either case the quotient deletes the additional vertex in the source/range. Furthermore, we need not restrict ourselves to a single connection. Using the same idea of the proof we could extend to multiple linking edges and multiple new vertices in their sources/ranges. § GAUGE-INVARIANT UNIQUENESS FOR HYPERGRAPH C^*-ALGEBRAS We now take a look at the Gauge-Invariant Uniqueness Theorem, which – for graph C^*-algebras – yields faithful representations. For this class, it is one of the most important theorems and it holds for all graph C^*-algebras <cit.>[Thm. 2.1] and all ultragraph C^*-algebras <cit.>[Prop. 5.5]. However, this is not the case for hypergraph C^*-algebras, as we will see in this section. Nevertheless, under specific assumptions on the hypergraph, we can extend it, borrowing the Gauge-Invariant Uniqueness Theorem from graph C^*-algebras. Let us first introduce gauge actions. Let HΓ=(V, E, r,s) be a finite hypergraph with universal Cuntz-Krieger HΓ-family {s_e,p_v}.
Then there exists a continuous action γ of the circle group 𝕋 on C^*(HΓ) such that γ_z(s_e)=zs_e ∀ e ∈ E, γ_z(p_v)=p_v ∀ v∈ V. The action is called the gauge action. The proof is analogous to the one for graphs <cit.>: One immediately sees that the elements zs_e and p_v satisfy the Cuntz-Krieger relations for hypergraphs, which by the universal property yields the *-isomorphisms γ_z:C^*(HΓ)→ C^*(HΓ) with γ_w∘γ_z=γ_wz. One is left to check continuity as in <cit.>. Now, for graph C^*-algebras, the Gauge-Invariant Uniqueness Theorem can be formulated as follows. Let Γ=(V,E,r,s) be a finite graph and let π:C^*(Γ)→ B be a *-homomorphism, where B is some C^*-algebra. If there exists an action β:𝕋→ Aut(B) such that β_z∘π=π∘γ_z for all z∈𝕋, and if π(p_v)≠ 0 for all v∈ V, then π is injective. This theorem does not apply to hypergraphs in general, as we can see in the next example. Consider the hypergraph H̃Γ̃_n from Prop. <ref>. Its corresponding hypergraph C^*-algebra is given by C^*(H̃Γ̃_n)≅ C(S^1)*ℂ^n. We thus obtain a surjective but non-injective *-homomorphism π: C^*(H̃Γ̃_n)≅ C(S^1)*ℂ^n → C(S^1)⊗ℂ^n. The tensor product C(S^1)⊗ℂ^n has a gauge action β with the property that β∘π=π∘γ, where γ is the gauge action on C^*(H̃Γ̃_n). Thus, the Gauge-Invariant Uniqueness Theorem does not hold for this hypergraph C^*-algebra. We will now show that under specific conditions, a hypergraph C^*-algebra is isomorphic to the C^*-algebra of its dual graph. For graphs, this is true in general, see <cit.>: The C^*-algebra of a graph is isomorphic to the C^*-algebra of its dual graph, if the graph has no sinks. Let HΓ=(V, E, r, s) be a finite hypergraph. The dual graph Γ̃ is defined as Ṽ:={e | e ∈ E}, Ẽ:={(e,f) | e,f∈ E, s(f)∩ r(e)≠∅}, s̃((e,f)):=e, r̃((e,f)):=f. The dual graph is actually a graph, not just a hypergraph. Let HΓ=(V, E, r,s) be a finite hypergraph with only quasi perfect paths and no sinks. Then it holds for all e ∈ E that p_r(e)=∑_f∈ E, s(f)⊆ r(e)s_fs_f^*. By definition of quasi perfect paths, s(f)∩ r(e)≠∅ implies s(f) ⊆ r(e) for all e,f ∈ E. The statement follows when combining (HR2a) and (HR2b). Let HΓ=(V, E, r ,s) be a finite hypergraph with only quasi perfect paths and no sinks, and let Γ̃ be its dual graph. Then C^*(Γ̃) is isomorphic to the C^*-subalgebra of C^*(HΓ) generated by {s_e | e∈ E}. We define a Cuntz-Krieger Γ̃-family in C^*(HΓ) by Q_e :=s_es_e^*∀ e ∈Ṽ, T_(e,f) :=s_eQ_f ∀ (e,f) ∈Ẽ. To check that this is a Cuntz-Krieger Γ̃-family, a short calculation using the hypergraph relations of C^*(HΓ) shows that the elements Q_e are mutually orthogonal projections and the elements T_(e,f) are partial isometries. It remains to check the Cuntz-Krieger relations for graphs, since the dual graph is a graph. Since the hypergraph has only quasi perfect paths, all paths ef fulfill s(f)⊆ r(e), which implies by Lemma <ref> and the definition of Q_f that Q_fp_r(e)=Q_f. With this we get the first Cuntz-Krieger relation: T_(e,f)^*T_(g,h) =(s_eQ_f)^*(s_gQ_h) =Q_fs_e^*s_gQ_h =δ_e,g Q_fp_r(e)Q_h =δ_e,g Q_fQ_h =δ_e,gδ_f,hQ_f =δ_(e,f),(g,h) Q_r̃((e,f)). For the second Cuntz-Krieger relation we need that for quasi perfect paths with no sinks, Lemma <ref> applies, and we get: Q_e =s_es_e^* =s_ep_r(e)s_e^* =s_e ( ∑_f∈ E, s(f)⊆ r(e)s_fs_f^*)s_e^* =s_e ( ∑_f∈ E, s(f)⊆ r(e)Q_f)s_e^* =∑_f∈ E, s(f)⊆ r(e)s_eQ_fs_e^* =∑_f∈ E, s(f)⊆ r(e)T_(e,f)T_(e,f)^* =∑_x∈Ẽ, s̃(x)=eT_xT_x^*. Thus, by the universal property, we get a canonical *-homomorphism π:C^*(Γ̃) → C^*(HΓ) defined by q_e ↦ Q_e and t_(e,f)↦ T_(e,f).
Since the dual graph is really a graph, we can use the Gauge-Invariant Uniqueness Theorem for graph C^*-algebras, Thm. <ref>. Let γ and γ̃ be the gauge actions on C^*(HΓ) and C^*(Γ̃) respectively. Then a short calculation shows that π∘γ̃_z=γ_z ∘π for all z ∈𝕋. Thus the requirements for the Gauge-Invariant Uniqueness Theorem are given (note that all projections are non-zero) and the *-homomorphism π is injective. By definition of the Cuntz-Krieger family {T_(e,f), Q_f} we know that Im(π)⊆ C^*(s_e | e∈ E). Using again Lemma <ref> we get s_e=s_ep_r(e)=s_e∑_f∈ E, s(f)⊆ r(e)s_fs_f^*=∑_f∈ E, s(f)⊆ r(e)s_eQ_f=∑_f∈ E, s(f)⊆ r(e)T_(e,f). Hence the Cuntz-Krieger family {T_(e,f), Q_f} generates {s_e | e∈ E}. Thus, Im(π)=C^*(s_e | e∈ E) and π is an isomorphism between C^*(Γ̃) and C^*(s_e | e∈ E). Let HΓ=(V, E, r, s) be a finite hypergraph with only quasi perfect paths and no sinks such that C^*(HΓ) is generated by {s_e | e∈ E}, and let Γ̃ be its dual graph. Then C^*(Γ̃)≅ C^*(HΓ). For specific hypergraphs we thus get an isomorphism between the hypergraph C^*-algebra and the graph C^*-algebra of its dual graph. This cannot be true in general – this would not be in line with the non-nuclear examples of hypergraph C^*-algebras. Also, there cannot be a non-amenable hypergraph with only quasi perfect paths and no sinks such that C^*(HΓ) is generated by {s_e | e∈ E}. As an immediate consequence of the above corollary, we may formulate a Gauge-Invariant Uniqueness Theorem for hypergraph C^*-algebras under the above specific assumptions on the hypergraph. Let HΓ=(V, E, r, s) be a finite hypergraph with only quasi perfect paths and no sinks such that C^*(HΓ) is generated by {s_e | e∈ E}. Let {P_v,S_e} be a Cuntz-Krieger HΓ-family in a C^*-algebra B with each P_v≠ 0. If there is a continuous action β:𝕋→ Aut(B) such that the gauge action γ is intertwined with β by the canonical *-homomorphism π:C^*(HΓ) → B, i.e. π∘γ_z=β_z ∘π for all z ∈𝕋, then π is injective. We use Cor. <ref> and the Gauge-Invariant Uniqueness Theorem for graphs. The above version of a Gauge-Invariant Uniqueness Theorem for hypergraph C^*-algebras is not very deep, as it directly relies on the Gauge-Invariant Uniqueness Theorem for graph C^*-algebras. However, it can be taken as a starting point for proving more general Gauge-Invariant Uniqueness Theorems for hypergraph C^*-algebras – or rather for specifying the class of hypergraphs for which it holds. It seems that for the class of hypergraph C^*-algebras, it is more appropriate to speak of a Gauge-Invariant Uniqueness Property and to determine the class of hypergraphs possessing it. Finally, let us derive from Prop. <ref> a proof of the Gauge-Invariant Uniqueness Theorem for ultragraphs. This fact has been known before, see <cit.>[Prop. 5.5]. [Gauge-Invariant Uniqueness Theorem for ultragraph C^*-algebras] Let 𝒢 be an ultragraph without sinks. Then the Gauge-Invariant Uniqueness Theorem is valid. We check that ultragraphs fulfill the assumptions of Theorem <ref>. Since the source of each edge in an ultragraph is given by one vertex, combining the second and third hypergraph relations yields p_v=∑_e ∈ E, v =s(e)s_es_e^*. Since all vertices emit at least one edge, this equality is valid for all vertices and hence, the C^*-algebra is generated by the partial isometries. Furthermore, each path is automatically quasi perfect, as r(e)∩ s(f)≠∅ implies that r(e)∩ s(f)=s(f), as s(f) consists of just one vertex.
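The hypotheses "only quasi perfect paths" and "no sinks" entering the dual graph results above are again purely combinatorial. The following Python sketch is an illustration of ours (the function names are ad hoc and nothing C^*-algebraic is computed): it builds the dual graph Γ̃ from the data of a finite hypergraph and tests the two conditions.

```python
# Combinatorial side of the hypotheses in the dual graph results above.
# An edge is stored as name -> (source set, range set).

def dual_graph(edges):
    """Dual vertices are the hyperedges; there is a dual edge (e, f) with
    source e and range f whenever s(f) meets r(e)."""
    dual_vertices = set(edges)
    dual_edges = {(e, f)
                  for e, (_, r_e) in edges.items()
                  for f, (s_f, _) in edges.items()
                  if r_e & s_f}
    return dual_vertices, dual_edges

def only_quasi_perfect_paths(edges):
    # every composable pair is quasi perfect: s(f) ∩ r(e) ≠ ∅ implies s(f) ⊆ r(e)
    return all(not (r_e & s_f) or s_f <= r_e
               for _, (_, r_e) in edges.items()
               for _, (s_f, _) in edges.items())

def has_no_sinks(vertices, edges):
    # every vertex emits at least one hyperedge
    return all(any(v in s_e for (s_e, _) in edges.values()) for v in vertices)

# Example: two edges with s(e_i) = r(e_i) = V, as in the non-nuclearity section
V = {"v1", "v2"}
E = {"e1": (set(V), set(V)), "e2": (set(V), set(V))}
print(dual_graph(E))                # dual vertices {'e1','e2'} and all four dual edges
print(only_quasi_perfect_paths(E))  # True: s(f) = V ⊆ V = r(e)
print(has_no_sinks(V, E))           # True
```

This example has only quasi perfect paths and no sinks, but it is non-amenable by the results of Section 4; by the remark above, its C^*-algebra can therefore not be generated by the partial isometries s_e alone.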
As seen in Theorem <ref>, the Gauge-Invariant Uniqueness Theorem holds for finite hypergraphs with only quasi perfect paths, no sinks and whose C*-algebra is generated by the partial isometries s_e. Do hypergraphs with these properties exist that are not ultragraphs – or are ultragraphs without sinks exactly the hypergraphs with quasi perfect paths whose C^*-algebra is generated by its generating partial isometries? This could be interesting for further research. § MOVES ON HYPERGRAPHS In this section we discuss basic moves to manipulate hypergraphs. These moves play an important role in the classification of graph C^*-algebras up to stable isomorphism, see <cit.>. We introduce four of these moves, adapt them to the hypergraph setting and investigate the corresponding C^*-algebras. The construction of the moves is motivated by graph C^*-algebras, see for instance <cit.>. For readers familiar with the theory of symbolic dynamics, note that the moves we consider are closely related to flow equivalence of shift spaces <cit.>. The main result of this section is that, as soon as these moves are performed at vertices which locally behave like graphs (or ultragraphs), we observe a behaviour similar to the moves for graphs. A hypergraph is called locally ultra at vertex w, if for all e∈ E, the assertion w∈ s(e) implies s(e)={w}. Recall from Section 3.7 that | r(e)|≥ 2 is a feature we can circumvent by decomposition of ranges. So it is really the property | s(e)|≥ 2 which separates hypergraphs from ultragraphs (| s(e)|=1) and graphs (| s(e)|=| r(e)|=1). In other words, if w∈ s(e) implies | s(e)|=1, then the hypergraph is indeed "like an ultragraph/graph" at the vertex w. The following theorem summarizes the main contents of this section; it will be proven step by step in the sequel. Let HΓ be a hypergraph, and let HΓ be locally ultra at some vertex w (possibly satisfying some further mild assumptions on w). The moves S, R, I and O produce hypergraphs HΓ' and *-homomorphisms π:C^*(HΓ')→ C^*(HΓ) respectively, such that Im(π) is a full corner in C^*(HΓ). In the case of Move O, π is even a *-isomorphism. The definition of the moves, the precise statements of the theorem for the respective moves, and the proof of this theorem will be given in the subsequent subsections. Note that unlike in the graph case, the *-homomorphisms π for moves S, R and I are not necessarily injective. Hence, we only have a weaker form of Morita equivalence in the hypergraph case. In the graph and ultragraph cases, the Gauge-Invariant Uniqueness Theorem can be applied in order to obtain injectivity of π, so again, the Gauge-Invariant Uniqueness Property for hypergraphs plays a role here when speaking about moves. §.§ Definition of the moves Let HΓ=(V, E, r,s) be a finite hypergraph. Let w ∈ V be a source. The hypergraph HΓ_S obtained by application of move S (removing a source) is defined as V_S:=V∖{w}, E_S:=E∖{e | w ∈ s(e)}, s_S:=s_|E_S, r_S:=r_|E_S. We call HΓ_S the hypergraph obtained by removing the source w from the hypergraph HΓ. Let HΓ=(V, E, r,s) be a finite hypergraph. Let w ∈ V be a vertex that emits exactly one edge f and only one vertex x≠ w emits to w. The hypergraph HΓ_R obtained by application of move R (reduction at a non-sink) is defined as V_R:=V∖{w}, E_R:=E∖(r^-1({w}) ∪{f}) ∪{e_f | e ∈ E, r(e)={w}}, s_R(e)=s(e), s_R(e_f)=s(e), r_R(e)=r(e), r_R(e_f)=(r(e)∖{w})∪ r(f). Let HΓ=(V, E, r,s) be a finite hypergraph and w be a vertex that is not a sink.
We partition the set of outgoing edges in finitely many nonempty sets: {e ∈ E | w ∈ s(e)}=ℰ_1 ∪…∪ℰ_n. The hypergraph HΓ_O obtained by performing move OMove! O - Outsplitting on HΓ is defined by V_O :=V∖{w}∪{w^1,…,w^n}, E_O :={e^1 | e ∈ E, w ∉ r(e)}∪{e^1, … , e^n | e ∈ E, w ∈ r(e)}, r_O(e^i) := r(e) if i=1 and w ∉ r(e) (r(e)∖{w}) ∪{w^1} if i=1 and w ∈ r(e) w^i if i>1 and w ∈ r(e), s_O(e^i) := s(e) if w ∉ s(e) (s(e)∖{w}) ∪{w^j} if w ∈ s(e) and e ∈ℰ_j. We call HΓ_O the hypergraph obtained by outsplittingOutsplitting HΓ at w. Let HΓ=(V, E, r,s) be a finite hypergraph and w be a vertex that is not a source. We partition the set of incoming edges in finitely many nonempty sets: {e ∈ E | w ∈ r(e)}=ℰ_1 ∪…∪ℰ_n. The hypergraph HΓ_I obtained by performing move IMove! I - Insplitting on HΓ is defined by V_I :=V∖{w}∪{w^1,…,w^n}, E_I :={e^1 | e ∈ E, w ∉ s(e)}∪{e^1, … , e^n | e ∈ E, w ∈ s(e)}, r_I(e^i) := r(e) if i=1 and w ∉ r(e) (r(e)∖{w}) ∪{w^j} if e^i ∈ℰ_j, s_I(e^i) := s(e) if w ∉ s(e) (s(e)∖{w}) ∪{w^1} if i=1 and w ∈ s(e) {w^i} i=2,…, n. We call HΓ_I the hypergraph obtained by insplittingInsplitting HΓ at w. §.§ Move S – removing a source We now begin with proving Thm. <ref> step by step, or rather move by move. For graphs move S leads to Morita equivalent C^*-algebras <cit.>, as one can show that C^*(HΓ_S) is isomorphic to a full corner of C^*(HΓ). For hypergraphs we only get the following proving the statement for move S in Thm. <ref>. Let HΓ=(V, E, r, s) be a finite hypergraph with source w and assume that HΓ is locally ultra in w. Then there is a *-homomorphism π: C^*(HΓ_S)→ C^*(HΓ) such that the *-subalgebra Im(π) is a full corner in C^*(HΓ). We show that {p_v, s_e | v ∈ V_S, e∈ E_S} is a Cuntz-Krieger HΓ_S-family in C^*(HΓ). The first two hypergraph relations hold in general, even without the restriction on the source w since the move does not change anything at the corresponding edges and vertices. For the third hypergraph relation we note that by the given restriction for each v ≠ w it follows that e ∈ E_S for each edge e∈ E with v∈ s(e), since w ∉ s(e). Thus we get by the third hypergraph relation of C^*(HΓ) p_v ≤∑_e ∈ E, v ∈ s(e)s_es_e^*=∑_e ∈ E_S, v ∈ s(e) s_es_e^* ∀ v ∈ V_S. Hence the universal property yields the canonical *-homomorphism π: C^*(HΓ_S)→ C^*(HΓ) sending the canonical generators q_v ↦ p_v for all v ∈ V_S and t_e ↦ s_e for all e ∈ E_S. We define the projection p:=∑_v∈ V_Sp_v and claim first that Im(π)=pC^*(HΓ)p. Since p_v =pp_vp ∀ v ∈ V_S, s_e =(∑_v∈ s(e)p_v)s_e(∑_v ∈ r(e)p_v)=p(∑_v∈ s(e)p_v)s_e(∑_v ∈ r(e)p_v)p ∀ e ∈ E_S, the image of the canonical generators is contained in pC^*(HΓ)p. Hence, Im(π)⊆ pC^*(HΓ)p. On the other hand it holds for all paths μ in HΓ by Proposition <ref> * ps_μ= 0 if s(μ)=w s_μ∈ C^*(p_v,s_e) else, * ps_μ^*=s_μ^*, * s_μp=s_μ, * s_μ^*p= 0 if s(μ)=w s_μ^*∈ C^*(p_v,s_e) else. We consider a general element s_μ_1^ϵ_1...s_μ_n^ϵ_n≠ 0 with paths μ_j in HΓ, ϵ_j∈{1,*} and ϵ_j≠ϵ_j+1. Since w is a source, we get that only the first and last isometries in s_μ_1^ϵ_1...s_μ_n^ϵ_n can correspond to edges with source w. Thus we can use the relations above and get that ps_μ_1^ϵ_1...s_μ_n^ϵ_np ∈ span{p_v, s_e|v∈ V_S, e∈ E_S}⊆ Im(π). Hence pC^*(HΓ)p⊆ Im(π). Combining both parts we get the claimed equality. To show that the corner pC^*(HΓ)p is full, let I be a closed two-sided ideal containing the corner. Thus I contains {p_v, s_e | v ∈ V_S, e ∈ E_S} by definition of p and Proposition <ref>. 
Then we note, that for all e ∈ E with s(e)=w we have p_r(e)∈ I and hence s_e=s_ep_r(e)∈ I. Given our special case, we get by combining the second and third hypergraph relation, that p_w=∑_e ∈ E, s(e)=ws_es_e^*∈ I, as a linear combination of elements in the ideal. Hence, I contains all generators of C^*(HΓ) and must thus be equal to it. §.§ Move R – reduction at a non-sink For graphs the definition of the edge e_f in move R just yields the path ef. To simplify the upcoming proof, we restrict ourselves to moves R at particular vertices w such that e_f=ef. This is not really a restriction, since we can transform any finite hypergraph into a hypergraph with this condition using the decomposition of ranges in Theorem <ref>. Let HΓ=(V, E, r, s) be a finite hypergraph with vertex w ∈ V that emits exactly one edge f and only one vertex x emits to w. Then there is a *-homomorphism π: C^*(HΓ_R)→ C^*(HΓ) such that the *-subalgebra Im(π) is a full corner in C^*(HΓ). The elements {Q_v | v∈ V_R} and {T_y | y ∈ E_R} defined as Q_v:=p_v, T_y:= s_e if y=e∈ E∖(r^-1({w}) ∪{f}) s_ef if y=e_f ∈{e_f | e ∈ E, r(e)={w}} form a Cuntz-Krieger HΓ_R-family in C^*(HΓ). Indeed, the elements Q_v are clearly mutually orthogonal projections and the elements T_y are partial isometries since ef are a quasi perfect paths. The first hypergraph relation for e∈ E_R follows directly from the first hypergraph relation for HΓ, since w∉ r(e) implies that the range is completely contained in V_R. For e_f∈ E_R we have T_e_f^*T_e_f=s_ef^*s_ef=s_f^*s_e^*s_es_f=s_f^*p_r(e)s_f=s_f^*p_wp_r(e)s_f=s_f^*p_ws_f=s_f^*s_f=p_r(f)=Q_r_R(e_f). The condition T_y^*T_z=0 for y≠ z follows directly from s_e^*s_g=0 for e≠ g. The second hypergraph relation is again clear for e ∈ E_R. For e_f∈ E_R we get T_e_fT_e_f^* = s_efs_ef^*=s_es_fs_f^*s_e^*≤ s_ep_s(f)s_e^*=s_ep_wp_s(f)s_e^*=s_es_e^*≤ p_s(e)=Q_s_R(e_f) while we used in the last step that w ∉ s(e) since e≠ f. It remains to check the third hypergraph relation. We have for all vertices v ∈ V_R Q_v =p_v ≤∑_e ∈ E, v ∈ s(e)s_es_e^* =∑_e ∈ E, v ∈ s(e), w ∉ r(e)s_es_e^*+∑_e ∈ E, v ∈ s(e), w ∈ r(e)s_es_e^*. At this stage we need the restriction that s(f)=w to get s_fs_f^*=p_w and thus by Lemma <ref> that s_es_e^*=s_es_fs_f^*s_f. With this we get =∑_e ∈ E, v ∈ s(e), w ∉ r(e)T_eT_e^*+∑_e ∈ E, v ∈ s(e), w ∈ r(e)T_e_fT_e_f^* =∑_y ∈ E_R, v ∈ s_R(y)T_yT_y^*. Thus, we get by the universal property the *-homomorphism π: C^*(HΓ_R)→ C^*(HΓ) which maps the generators q_v ↦ Q_v for all v ∈ V_R and t_y ↦ T_y for all y ∈ E_R. We define the projection p:=∑_v ∈ V_Rp_v. Then Q_v =p_v=pp_vp, T_e =s_e=p_s(e)s_ep_r(e)=pp_s(e)s_ep_r(e)p, T_ef =s_es_f=p_s(e)s_es_fp_r(f)=pp_s(e)s_es_fp_r(f)p, where we used that w ∉ s(e) for e ≠ f, w ∉ r(e) for e ∈ E∖(r^-1({w}) ∪{f}) and that w ∉ r(f) as s(f)=w ≠ x. Thus Im(π) is a subset of the corner pC^*(HΓ)p. To show that the corner is contained in Im(π) we first consider some properties of the crucial edges in r^-1({w}) ∪{f} and the interaction of the corresponding partial isometries. Let μ=μ_1…μ_n be a path in HΓ. We then have: (i) If μ_1=f, it holds by Lemma <ref> that ps_μ=pp_ws_μ=0 and similarly s_μ^*p=0. The first hypergraph relation gives that s_f^*s_e=s_e^*s_f=δ_e,fp_r(f)=Q_r(f) for all e ∈ E. (ii) If μ_j=f, for j=2,… n, by the definition of paths we must have μ_j-1∈ r^-1({w}) and hence s_μ_j-1s_μ_j=T_μ_j-1_f. (iii) If μ_j ∈ r^-1({w}), for j=1,…, n-1, we get again by the definition of paths that μ_j+1=f and hence s_μ_js_μ_j+1=T_μ_j_f. 
(iv) If μ_n ∈ r^-1({w}), we get by Lemma <ref> that s_μ p=s_μ p_w p=0 and similarly p s_μ^*=0. Furthermore, by definition of paths and the fact that only one vertex emits to w, s_μ_n s_e^*≠ 0 if and only if e ∈ r^-1({w}). Since in our special case s_fs_f^*=p_w and r(μ_n)={w} we get for e ∈ r^-1({w}) again by Lemma <ref> that s_μ_n s_e^*=s_μ_np_w s_e^*=s_μ_n s_fs_f^*s_e^*=T_μ_n_fT_e_f^*. Similarly we can show that s_es_f^*=T_e_fT_μ_n_f^*. Combining these properties, we get pSp ∈ Im(π), for a general element S:=s_μ_1^ϵ_1… s_μ_n^ϵ_n∈ C^*(HΓ) where μ_1,…, μ_n are paths in HΓ and ϵ_j∈{1,*}, ϵ_j≠ϵ_j+1. Prop. <ref> then yields the claim. It remains to show that the corner is full. Let I be a closed two-sided ideal containing the corner pC^*(HΓ)p. Then I contains all projections corresponding to the vertices in V_R. Consider e∈ r^-1({w}). Then e ≠ f and w ∉ s(e) and hence p_s(e)∈ I. Thus s_e=p_s(e)s_e ∈ I by properties of the ideal. The first Cuntz-Krieger relation then gives p_w=s_e^*s_e∈ I. Hence by Proposition <ref>, the ideal contains the unit and hence it must be all of C^*(HΓ). Thus the corner is not contained in a proper closed two sided ideal and is thus full. §.§ Move O – outsplitting Let HΓ=(V, E, r,s) be a finite hypergraph and w be a vertex that is not a sink and let HΓ be locally ultra at w. Let HΓ_O be the hypergraph obtained by outsplitting HΓ at w. Then C^*(HΓ) ≅ C^*(HΓ_O). Let {q_v | v ∈ V_O}, {t_e | e ∈ E_O} be the universal Cuntz-Krieger HΓ_O-family. We define a Cuntz-Krieger HΓ-family in HΓ_O by P_v := q_v if v ≠ w ∑_i=1^n q_w^i if v = w, S_e := t_e if w ∉ r(e) ∑_i=1^n t_e^i if w ∈ r(e). Indeed, the elements P_v are mutually orthogonal projections and the elements S_e are clearly partial isometries if w ∉ r(e). For the other case we note that the ranges of e^1,… , e^n are disjoint, to get the required result. For the first hypergraph relation the case w ∉ r(e) is straightforward. For w ∈ r(e) we get: S_e^*S_e =∑_i=1^nt_e^i^*t_e^i =∑_i=1^nq_r_O(e^i)=q_r(e)∖ w+ ∑_i=1^nq_w^i=P_r(e)∖ w+P_w =P_r(e). By the hypergraph relations of HΓ_O we know that t_e^i^*t_f^j=0 for e ≠ f or i ≠ j, which implies S_e^*S_f=0 for e ≠ f. For the second hypergraph relation we get for w ∉ r(e): S_eS_e^* =t_e^1t_e^1^*≤ q_s_O(e^1) = P_s(e) e ∉ℰ_j q_s(e)∖ w + q_w^j≤ q_s(e)∖ w +∑_j=1^nq_w^j =P_s(e) e ∈ℰ_j. For w ∈ r(e) it follows using the first equation: S_eS_e^* =∑_i=1^nt_e^it_e^i^* ≤∑_i=1^nq_s_O(e^i). Similar as in the equation above we get q_s_O(e^i)≤ P_s(e) for all i=1,… n. Using the definition of the order it follows that ∑_i=1^nq_s_O(e^i)≤ P_s(e) which yields the required result. We finally tackle the last hypergraph relation. For v ≠ w we have P_v =q_v ≤∑_x ∈ V_O, v ∈ s_O(x)t_xt_x^* = ∑_e ∈ E, v ∈ s(e), w ∉ r(e)t_e^1t_e^1^*+∑_e ∈ E, v ∈ s(e), w∈ r(e)∑_i=1^nt_e^it_e^i^* =∑_e ∈ E, v ∈ s(e), w ∉ r(e)t_e^1t_e^1^*+∑_e ∈ E, v ∈ s(e), w∈ r(e)(∑_i=1^nt_e^i) (∑_i=1^nt_e^i)^* =∑_e ∈ E, v ∈ s(e), w ∉ r(e)S_eS_e^*+∑_e ∈ E, v ∈ s(e), w∈ r(e)S_eS_e^* =∑_e ∈ E, v ∈ s(e)S_eS_e^*. For v=w we can duplicate the above calculation to get for each i=1,…,n that q_w^i≤∑_e ∈ℰ_iS_eS_e^*. Using that {e ∈ E | w ∈ s(e)}=ℰ_1∪…∪ℰ_n and S_eS_e^* are mutually orthogonal projections we get P_w=∑_i=1^nq_w^i≤∑_e ∈ E, w ∈ s(e)S_eS_e^*, which completes the proof of the Cuntz-Krieger family. To obtain a Cuntz-Krieger HΓ_O-family in HΓ we define Q_v := p_v if v ≠ w^j ∑_e ∈ℰ_j s_es_e^* if v = w^j, T_e^i := s_e if w^j ∉ r_O(e^i) s_eQ_r_O(e^i) if w^j ∈ r_O(e^i). 
Since we assumed that for all e ∈ E with w ∈ s(e) it follows w=s(e), it follows for all vertices v≠ w that (s_es_e^*)p_v=0=p_v(s_es_e^*). Hence, since the sets ℰ_j are disjoint and the projections p_v are mutually orthogonal, we get using the first hypergraph relation of C^*(HΓ) that the projections Q_v are mutually orthogonal. Furthermore we get that Q_w^ip_w=Q_w^i, which will be useful later on. The first relation for e^1 ∈ E_O with w∉ r(e) is obvious. In case that w ∈ r(e) we get for e^1 T_e^1^*T_e^1 =Q_r_O(e^1)s_e^*s_eQ_r_O(e^1) =Q_r_O(e^1)p_r(e)Q_r_O(e^1) =(Q_w^1+p_r(e)∖{w})p_r(e)(Q_w^1+p_r(e)∖{w}) = Q_w^1+p_r(e)∖{w} =Q_r_O(e^1), using that Q_w^jp_v=δ_v,wQ_w^j. The case for i=2,…,n follows similarly using Q_r_O(e^i)=Q_w^i. By the first hypergraph relation for C^*(HΓ) and the orthogonality of the projections we get T_e^i^*T_f^j=0 for e^i≠ f^j. Furthermore, using these results it is straightforward to see that the elements T_e^i are partial isometries. For the second hypergraph relation we again consider the case i=1 and w ∈ r(e) first. We have T_e^1T_e^1^*=s_es_e^*≤ p_s(e)=Q_s_O(e^1) if s(e)≠ w ∑_f ∈ℰ_js_fs_f^*=Q_w^j=Q_s_O(e^1) if s(e)= w, where we used the assumption that either w ∉ s(e) or w=s(e) for all e ∈ E and that the elements s_fs_f^* are mutually orthogonal projections. For w =s(e) we have using Q_r_O(e^i)≤ 1 T_e^iT_e^1^*=s_eQ_r_O(e^i)s_e^*≤ s_es_e^* which can be estimated similar to the previous case. Finally we check the third hypergraph relation. We note, that the assumption that w ∈ s(e) implies w =s(e) leads to p_w=∑_j=1^nQ_w^j. For v ∈ V∖{w} we then have using the third hypergraph relation for C^*(HΓ) Q_v =p_v ≤∑_e ∈ E, v ∈ s(e)s_es_e^* =∑_e ∈ E, v∈ s(e), w∉ r(e)T_e^1T_e^1^* + ∑_e ∈ E, v∈ s(e), w∈ r(e)s_ep_r(e)s_e^* =∑_e ∈ E, v∈ s(e), w∉ r(e)T_e^1T_e^1^* + ∑_e ∈ E, v∈ s(e), w∈ r(e)s_e(∑_j=1^nQ_w^j+p_r(e)∖{w})s_e^* =∑_e ∈ E, v∈ s(e), w∉ r(e)T_e^1T_e^1^* + ∑_e ∈ E, v∈ s(e), w∈ r(e)(s_e(Q_w^1+p_r(e)∖{w})s_e^*+∑_j=2^ns_eQ_w^js_e^* ) =∑_e ∈ E, v∈ s(e), w∉ r(e)T_e^1T_e^1^* + ∑_e ∈ E, v∈ s(e), w∈ r(e)∑_j=1^nT_e^jT_e^j^* =∑_e^i ∈ E_O, v ∈ s_O(e^i)T_e^iT_e^i^*. For v = w^j we have by definition that Q_w^j=∑_e ∈ E, w^j ∈ s(e)s_es_e^*. Hence the same calculation as above yields the required result. Using the above Cuntz-Krieger families we get the canonical *-homomorphism π : C^*(HΓ) → C^*(HΓ_O), p_v ↦ P_v, s_e ↦ S_e, π̃ : C^*(HΓ_O) → C^*(HΓ), q_v ↦ Q_v, t_e ↦ T_e. Straightforward calculations show that both *-homomorphisms are inverse to each other on the generators. Thus they are inverse on the whole C^*-algebras and we get the required isomorphism. §.§ Move I – insplitting Before taking care of move I, we have a look at the indelay which introduces vertices to delay the arrival of an edge on its range. One can define this even in a more general setting with a so called Drinen range vector as done in <cit.>. We only consider the special case needed for the connection to move I, which we consider afterwards. The constructions and proofs in the following section are adapted from <cit.> and extended to the hypergraph setting. The upcoming proofs are again quite technical and deal with similar case distinctions as seen for move O. We will thus only highlight the critical steps. Let HΓ=(V, E, r,s) be a finite hypergraph and w be a vertex that is not a source. We partition the set of incoming edges in finitely many nonempty sets: {e ∈ E | w ∈ r(e)}=ℰ_1 ∪…∪ℰ_n. 
The hypergraph HΓ_D obtained by an indelay of HΓ at w is defined by V_D:=V∖{w}∪{w^1,…,w^n}, E_D:=E ∪{f_1,…, f_n-1}, r_D(e):= r(e) if w ∉ r(e) and r_D(e):= (r(e)∖{w}) ∪{w^j} if e ∈ℰ_j, r_D(f_j):=w^j, s_D(e):= s(e) if w ∉ s(e) and s_D(e):= (s(e)∖{w}) ∪{w^1} if w ∈ s(e), s_D(f_j):=w^j+1. The next proposition uses some kind of locally ultra property at the range rather than at the source. Let HΓ=(V, E, r,s) be a finite hypergraph, let w be a vertex that is not a source such that w ∈ r(e) implies r(e)={w}, and let HΓ_D be the hypergraph obtained by an indelay of HΓ at w. Then there is a surjective *-homomorphism from C^*(HΓ_D) onto a full corner of C^*(HΓ). Let {q_v, t_e } be the universal Cuntz-Krieger HΓ_D-family. Then {P_v | v ∈ V}, {S_e | e ∈ E} defined as P_v := q_v if v ≠ w, P_w := q_w^1, S_e := t_e if w ∉ r(e) and S_e := t_et_f_j-1… t_f_1 if e ∈ℰ_j forms a Cuntz-Krieger HΓ-family in C^*(HΓ_D). The hypergraph relations follow by mostly straightforward calculations. We briefly mention the critical steps. For the first hypergraph relation note that f_j… f_1 are perfect paths for all j ∈{1,…, n-1}. Thus, for all e ∈ℰ_j it follows that S_e^*S_e= q_r_D(f_1)=q_w^1=P_w. This result also explains why we have to add the assumption that w ∈ r(e) implies r(e)={w}. The assumption implies furthermore that ef_j-1… f_1 is a perfect path, and since q_w^j=t_f_j-1t_f_j-1^* by the construction of the edges f_j ∈ E_D, this leads to S_eS_e^*=t_et_e^* for all e ∈ E, which is crucial for the second hypergraph relation. For the third hypergraph relation we use that v ∈ s_D(e) implies v ∈ s(e) and w^1 ∈ s_D(e) implies w ∈ s(e). Combining this with the hypergraph relations of HΓ_D shows that the given elements form a Cuntz-Krieger HΓ-family. By the universal property we get the canonical *-homomorphism π:C^*(HΓ)→ C^*(HΓ_D), p_v ↦ P_v, s_e↦ S_e. Let F:=V∖{w}∪{w^1}⊆ V_D and p:=q_F. By definition of P_v we get P_v=pP_vp. For all e∈ E we have s_D(e)⊆ F. For e ∈ E with w ∉ r(e) we have r_D(e)⊆ V∖{w}⊆ F and for e ∈ℰ_j we have r_D(ef_j-1… f_1)={w^1}⊆ F. Hence by applying Proposition <ref> we get S_e=pS_ep. Thus, the image of the generators of C^*(HΓ) is contained in pC^*(HΓ_D)p and Im(π)⊆ pC^*(HΓ_D)p. To see the converse we consider a general element S:=t_μ_1^ϵ_1… t_μ_m^ϵ_m for paths μ_1, …, μ_m in HΓ_D and ϵ_1, …, ϵ_m ∈{1,*}. We show that pSp∈ Im(π). We first have a deeper look at a path μ=e_1… e_k in HΓ_D. (i) If e_j ∈ E for j=1, …, k, we have s_D(μ)⊆ F. Hence pt_μ=t_μ=S_μ∈ Im(π). (ii) If e_j ∈ E for j=1, …, k and w^i ∉ r_D(μ) for all i, we have r_D(μ)⊆ V∖{w}. Hence t_μ p=t_μ=S_μ∈ Im(π). On the other hand, if w^i∈ r_D(μ) we have r_D(μ)={w^i} by assumption and hence t_μ p=0∈ Im(π). (iii) If e_j=f_l for some j∈{ 1, …, k} and l∈{1,…,n-1}, either the whole path f_l… f_n-1 is contained in μ or e_k=f_l+k-j. If e_1≠ f_l and e_k≠ f_l+k-j, the path ν:=e_j-1f_l… f_n-1 is contained in μ and hence t_μ=s_e_1… S_ν… s_e_k∈ Im(π). Hence it remains to consider the cases when the path starts or ends with an element in {f_1,…, f_n-1}. (iii.a) If e_1=f_l we have s_D(μ)=w^l∉ F and hence pt_μ=0∈ Im(π). On the other hand, if we have a second path α such that t_α^*t_μ≠ 0 or t_μ^*t_α≠ 0, we must have α_j=f_l+j-1 since we have perfect paths. Hence using properties of perfect paths the elements vanish and we are left with paths μ', α' which do not contain the elements {f_1, …, f_n-1}. Hence t_μ'=s_μ'∈ Im(π). (iii.b) If e_k=f_l+k-j we have t_μ p=0∈ Im(π). A similar argument as in the last step shows that the interaction with another path α cancels the elements t_f_j and we are left with t_μ'=s_μ'∈ Im(π).
Combining these arguments it follows that pSp∈ Im(π) and hence pSp = Im(π). It remains to show that the corner is full. Let I⊆ C^*(HΓ_D) be a closed two-sided ideal containing pC^*(HΓ_D)p. Then I contains the projections q_v=pq_vp for all v ∈ V∖{w}∪{w_1}. Since s_D(e)⊆ V∖{w}∪{w^1} for all e ∈ E, we get t_e=pt_e∈ I for all e ∈ E. Since ℰ_j≠∅, for each vertex w_j there exists an edge e ∈ E such that w_j=r_D(e). Hence since p_w_j=t_e^*t_e∈ I. Thus all canonical projections are contained in the ideal, and thus by Proposition <ref> the unit is contained in the ideal. This shows that the ideal must be all of C^*(HΓ_D) and the corner is full. Now we can connect the indelay with move I and receive isomorphic C^*-algebras. Let HΓ=(V, E, r,s) be a finite hypergraph and w be a vertex that is not a source and let HΓ be ultra locally at w. The incoming edges of w be partitioned into disjoint sets ℰ_1 ∪…∪ℰ_n. Let HΓ_D and HΓ_I be the corresponding hypergraphs formed by an indelay and an insplitting respectively. Then C^*(HΓ_D)≅ C^*(HΓ_I). Let {p_v, s_e} and {q_v, t_e} be the canonical generators of C^*(HΓ_I) and C^*(HΓ_D) respectively. We define a Cuntz-Krieger HΓ_I-family in C^*(HΓ_D) by P_v:=q_v, S_e^i:=t_e if i=1 t_f_i-1… t_f_1t_e if i=2,…, n. Indeed, for the first hypergraph relation, we note that by definition, r_D(e)=r_I(e^i) for all e ∈ E. Thus we get for i=1 that S_e^1^*S_e^1=P_r_I(e^1). For i=2,…,n we get using the fact that the f_j build perfect paths S_e^i^*S_e^i= t_e^*t_f_1^*… t_f_i-1^*t_f_i-1… t_f_1t_e=t_e^*t_e=q_r_D(e)=P_r_I(e^i). It follows directly that for e^i≠ g^j we have S_e^i^*S_g^j=0. The second hypergraph relation for i=1 we follows using that s_D(e)=s_I(e^1) for all e∈ E S_e^1S_e^1^*=t_et_e^*≤ q_s_D(e)=P_s_I(e^1). For i=2,…,n we get, using again that we deal with perfect paths S_e^iS_e^i^* =t_f_i-1… t_f_1t_et_e^*t_f_1^*… t_f_i-1^* ≤ t_f_i-1^*t_f_i-1 = q_w^i =P_w^i =P_s_I(e^i). To check the third hypergraph relation, consider first v ≠ w^i for i>1. Then we have that v ∈ s_D(e) if and only if v ∈ s_I(e^1). Recalling that S_e^1=t_e we get P_v=q_v ≤∑_e ∈ E_D, v ∈ s_D(e)t_et_e^* ≤∑_e ∈ E_I, v ∈ s_I(e)S_eS_e^*. In case of v = w^i for i>1 we get the result using the perfect paths P_w^i =q_w^i =t_f_i-1t_f_i-1^* =t_f_i-1… t_f_1t_f_1^*… t_f_i-1^* =t_f_i-1… t_f_1q_w_1t_f_1^*… t_f_i-1^* ≤ t_f_i-1… t_f_1(∑_e ∈ E_D, v ∈ s_D(e)t_et_e^*)t_f_1^*… t_f_i-1^* =∑_e ∈ E_D, v ∈ s_D(e)t_f_i-1… t_f_1t_et_e^*t_f_1^*… t_f_i-1^* =∑_e^i ∈ E_D, w^i ∈ s_I(e^i)S_e^iS_e^i^*. On the other hand we can define a Cuntz-Krieger HΓ_D-family in C^*(HΓ_I) by Q_v :=p_v T_e :=s_e^1 T_f_j :=∑_e ∈ E, w ∈ s(e)s_e^j+1s_e^j^*. For e ∈ E, the first hypergraph relation follows using r_I(e^1)=r_D(e). For the remaining edges {f_1,…, f_n-1} it holds using that r_I(e^j+1)=r_I(e^j) and using that s_I(e^i)=w^i for i=2,…,n T_f_j^*T_f_j =(∑_w ∈ s(e)s_e^js_e^j+1^*)(∑_w ∈ s(e)s_e^j+1s_e^j^*) =∑_w ∈ s(e)s_e^js_e^j+1^*s_e^j+1s_e^j^* =∑_w ∈ s(e)s_e^js_e^j^* =p_w^j =Q_w^j =Q_r_D(f_j). For e ∈ E the second hypergraph relation follows directly since s_I(e^1)=s_D(e). Similar as for the first hypergraph relation, we get for the remaining edges {f_1,…, f_n-1} using that s_I(e^i)=w^i for i=2,…,n T_f_jT_f_j^* =(∑_w ∈ s(e)s_e^j+1s_e^j^*)(∑_w ∈ s(e)s_e^js_e^j+1^*) =∑_w ∈ s(e)s_e^j+1s_e^j^*s_e^js_e^j+1^* =∑_w ∈ s(e)s_e^j+1s_e^j+1^* =p_w^j+1 =Q_w^j+1 =Q_s_D(f_j). The third hypergraph relation for v ≠ w^j follows again since s_I(e^1)=s_D(e) Q_v=p_v≤∑_e^i ∈ E_I, v ∈ s_I(e^i)s_e^is_e^i^*=∑_e ∈ E_D, v ∈ s_D(e)T_eT_e^*. 
The case v = w^j for j=2,…, n follows directly from the calculation in (HR2a). Applying the universal property twice we get the *-homomorphisms π :C^*(HΓ_I)→ C^*(HΓ_D), p_v ↦ P_v, s_e^i↦ S_e^i π̃ :C^*(HΓ_D)→ C^*(HΓ_I), q_v ↦ Q_v, t_e ↦ T_e, which are inverse to each other. To see this, we show that both are inverse to each other on the generators. This is clear for all projections and edges except of e^i ∈ C^*(HΓ_I) with i=2,…,n and f_j∈ C^*(HΓ_D). For these we get π̃∘π (s_e^i) =π̃(t_f_i-1… t_f_1t_e) =(∑_g ∈ Es_g^is_g^i-1^*)…(∑_g ∈ Es_g^2s_g^1^*)s_e^1 =s_e^is_e^i-1^*… s_e^2s_e^1^*s_e^1 =s_e^i. Since w ∈ s(e) implies w=s(e) we get that q_w^1=∑_e ∈ E, w∈ s(e)t_et_e^*. Using this it follows π∘π̃(t_f_i) =π(∑_e ∈ Es_e^i+1s_e^i^*) =∑_e ∈ E, w∈ s(e)t_f_i… t_f_1t_et_e^*t_f_1^*… t_f_i-1^* =t_f_i… t_f_1(∑_e ∈ E, w∈ s(e)t_et_e^*)t_f_1^*… t_f_i-1^* =t_f_i… t_f_1q_w^1t_f_1^*… t_f_i-1^* =t_f^i, where we used the properties of perfect paths. Combining Proposition <ref> and Proposition <ref> we get the desired result under the locally ultra assumption. [Move I] Let HΓ=(V, E, r, s) be a finite hypergraph and w be a vertex that is not a source and let HΓ be locally ultra at w. Let HΓ_I be the hypergraph obtained by insplitting HΓ at w. Then there is a surjective *-homomorphism from C^*(HΓ_I) onto a full corner of C^*(HΓ). § CONCLUDING REMARKS This is the first article on hypergraph C^*-algebras and we have to leave many questions open. See also <cit.> for very recent follow up articles on hypergraph C^*-algebras. Here is a selection of topics for future research. (1) Recall that C^*(HΓ) is isomorphic to its dual graph C^*-algebra C^*(Γ̃) under certain conditions, see Cor. <ref>. Given two hypergraphs with similar dual graphs – do they share any properties? Are all nuclear hypergraph C^*-algebras isomorphic to the C^*-algebra of its dual graph? (2) Based on our results for finite hypergraphs a next step could be to investigate infinite hypergraphs. We already shed some light on critical steps in the definition, see Definition <ref>. Another indication can be the results on infinite ultragraphs in <cit.>. (3) Building on the specific characteristics of hypergraph C^*-algebras, one can investigate further implications of the path structure, see Section 3.5. (4) The topic of non-nuclearity offers a broad field of research questions. One can investigate concrete conditions for nuclearity and try to describe them via properties of the hypergraph, see also <cit.>. Within this context one could examine the ideal structure of hypergraph C^*-algebras and their relation to saturated and hereditary subgraphs. This is especially interesting as hereditary subalgebras of nuclear C^*-algebras are nuclear. Thus, this could be used to further enlarge the number of examples of non-nuclear hypergraph C^*-algebras. (5) Our counterexample in Example <ref> has shown that a direct generalization of the Gauge-Invariant Uniqueness Theorem is not possible. We only have it for a subclass of hypergraph C^*-algebras, see Thm. <ref>. This raises several new research questions: Can the theorem be generalized with another action? Do the restrictions under which the Gauge-Invariant Uniqueness Theorem holds already describe the ultragraph C^*-algebras (see Remark <ref>)? Are there other ways to prove injectivity of representations? In the realm of the Gauge-Invariant Uniqueness Theorem we touched the dual graph of a hypergraph. 
We saw that it looses information, especially, its C^*-algebra is not isomorphic, not even Morita equivalent to the initial hypergraph C^*-algebra. Multiple questions are interesting in this regard: Which information is lost? What have hypergraphs with the same dual graphs in common? Are there other constructions/generalizations of the dual graph which give more insights? (6) The manipulation of hypergraphs by moves and the corresponding changes in the associated graphs remain an exciting area of research. Based on our results for the moves S, R, I, O, further investigations can be made. In particular, by constructing counterexamples. Of particular interest is also the observation that the hypergraphs must locally look like ultragraphs in order to apply the moves. (7) For graph C^*-algebras there is already an explicit way to compute the K-groups <cit.>. This result can be applied to ultragraph C^*-algebras by Morita equivalence. How about the K-theory of hypergraph C^*-algebras? (8) In the Bachelor's thesis of the third author, we have two examples of hypergraph C^*-algebras we don't understand, <cit.>. The first one is given by V={v_1,v_2,v_3,v_4}, E={e} and r(e)={v_1,v_2}, s(e)={v_3,v_4}. It is easy to see that s_e+s_e^* is a unitary, but we have no further description of this C^*-algebra. The second example is given by V={v_1,v_2,v_3,v_4}, E={e_1,e_2} and r(e_1)={v_3,v_4}, s(e_1)={v_1,v_2}, r(e_2)={v_1,v_2}, s(e_2)={v_3,v_4}. Here, u:=s_e_1+s_e_2 is a unitary satisfying with the projection p:=s_e_1^*s_e_1 the relation up=(1-p)u, such that we obtain a (non-surjective) *-homomorphism from C^*(ℤ/ℤ_2)⋊_αℤ/ℤ_2 to C^*(HΓ), where α is the automorphism flipping the projections p and 1-p in C^*(ℤ/ℤ_2)≅ C^*(p,1). It would be interesting to understand C^*(HΓ) better for this example. There are further unclear examples in <cit.>. (9) The representation theory of hypergraph C^*-algebras is not understood. See <cit.> for some very first steps. See also the theses <cit.> for more work on hypergraph C^*-algebras. alpha § APPENDIX A — LIST OF GRAPH AND HYPERGRAPH C^*-ALGEBRAS We list some examples of graph and hypergraph C^*-algebras as overviews. We visualize them and use colored edges of different thickness to mark single edges if it simplifies the picture. [h]p0.15|p0.35|p0.4 C^*-Algebra Definition Hypergraph V={v}, E=∅ 7cm (v0) at (0,0) ; (v0) circle[radius=2pt]; M_n() V={v_1,…, v_n} E={e_1,…, e_n-1} s(e_j)=v_j+1, r(e)=v_j 7cm [ > = stealth, auto, thick] (v0) at (0,0) v_1; (v1) at (2,0) v_2; (v2) at (4,0) …; (v3) at (6,0) v_n; [->] (v1) edge (v0); (e0) at (1,0.25) e_1; [->] (v2) edge (v1); (e1) at (3,0.25) e_2; [->] (v3) edge (v2); (e2) at (5,0.25) e_n-1; (e3) at (1,0.75) ; V={v,w}E={e_1,…, e_n-1} s(e_j)=v, r(e_j)=w 7cm [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->, bend right=40] (v0) edge (v1); (e0) at (1,0.7) e_1; [->, bend left=40] (v0) edge (v1); (e1) at (1,-0.75) e_n-1; (e2) at (1,0.05) ⋮; V={v,w_1,…, w_n-1} E={e_1,…, e_n-1} s(e_j)=v, r(e_j)=w_j 7cm [ > = stealth, auto, thick] (v2) at (5,0) v; (v3) at (7.2,1) w_1; (v4) at (7,0) ⋮; (v5) at (7.4,-1) w_n-1; [->] (v2) edge (v3); (e0) at (5.8,0.7) e_1; [->, dotted] (v2) edge (v4); [->] (v2) edge (v5); (e0) at (5.8,-0.8) e_n-1; 𝒪_n Cuntz algebra, see also Prop. 
<ref> V={v} E={e_1,…, e_n} s(e_j)=v, r(e_j)=v 7cm [ > = stealth, auto, thick] (-0.5,-1.5) rectangle (3,1.5); (v0) at (0,0) v; [->,in=-50,out=50,loop,scale=2] (v0) edge (v0); (e0) at (1.25,0) e_1; (e0) at (1.85,0) …; [->,in=-50,out=50,loop,scale=9] (v0) edge (v0); (e0) at (2.7,0) e_n; V={v_1,…, v_n} E={e_1,…, e_n} s(e_j)=v_j, r(e_j)={v_1, …, v_n} 7cm [ > = stealth, auto, thick] (-1.5,-4) rectangle (5,3); (v0) at (0,0) v_1; (v1) at (2,2) v_2; (v2) at (4,0) …; (v3) at (2,-2) v_n; [->,out=210,in=150,loop,scale=1, blue, line width=1.7pt] (v0) edge (v0); (e0) at (0.4,1.3) e_1; [->,out=120,in=60,loop,scale=1, green, line width=1.3pt] (v1) edge (v1); (e0) at (3.5,1.35) e_2; [->,out=300,in=240,loop,scale=1, orange, line width=0.4pt] (v3) edge (v3); (e0) at (0.4,-1.25) e_n; [->, bend left=20, blue, line width=1.7pt] (v0) edge (v1); [->, bend left=20, green, line width=1.1pt] (v1) edge (v0); [->, bend left=20, blue, line width=1.7pt] (v0) edge (v2); [->, bend left=20, dotted] (v2) edge (v0); [->, bend left=20, blue, line width=1.7pt] (v0) edge (v3); [->, bend left=20, orange, line width=0.4pt] (v3) edge (v0); [->, bend left=20, green, line width=1.1pt] (v1) edge (v2); [->, bend left=20, dotted] (v2) edge (v1); [->, bend left=20, green, line width=1.1pt] (v1) edge (v3); [->, bend left=20, orange, line width=0.4pt] (v3) edge (v1); [->, bend left=20, dotted] (v2) edge (v3); [->, bend left=20, orange, line width=0.4pt] (v3) edge (v2); 𝒯 Toeplitz algebra, see also Prop. <ref> V={v,w} E={e,f} s(e)=w, r(e)=v s(f)=w, r(f)=w 7cm [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->] (v1) edge (v0); (e0) at (1,0.25) e; [->,in=-50,out=50,loop,scale=3] (v1) edge (v1); (e0) at (3.1,0) f; V={v,w} E={e,f} s(e)={w}, r(e)={v,w} 7cm [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->] (v1) edge (v0); (e0) at (1,0.25) e; [->,in=-50,out=50,loop,scale=3] (v1) edge (v1); (e0) at (3.1,0) e; V={v,w} E={e,f} s(e)={v,w}, r(e)={w} 7cm [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->] (v0) edge (v1); (e0) at (1,0.25) e; [->,in=-50,out=50,loop,scale=3] (v1) edge (v1); (e0) at (3.1,0) e; M_2(C()) V={v,w} E={e,f} s(e)=v, r(e)=w s(f)=w, r(f)=v 7cm [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->, bend left=30] (v0) edge (v1); (e0) at (1,0.6) e; [->, bend left=30] (v1) edge (v0); (e0) at (1,-0.6) f; C(S^1)*^n see also Prop. <ref> V={v_1,…, v_n} E={e} s(e)={v_1,…,v_n}r(e)={v_1,…,v_n} 7cm [ > = stealth, auto, thick] (v1) at (1.5,1.5) v_1; (v2) at (3,1) v_2; (v3) at (3,-1) …; (v4) at (1.5,-1.5) v_n-1; (v5) at (0,0) v_n; [-] (v1) edge (v2); [-] (v1) edge (v3); [-] (v1) edge (v4); [-] (v1) edge (v5); [-] (v2) edge (v3); [-] (v2) edge (v4); [-] (v2) edge (v5); [-] (v3) edge (v4); [-] (v3) edge (v5); [-] (v4) edge (v5); 𝒪_2*^n see also Prop. 
<ref> V={v_1,…, v_n} E={e_1, …, e_m} s(e_j)={v_1,…,v_n}r(e_j)={v_1,…,v_n} 7cm [ > = stealth, auto, thick] (v1) at (1.5,1.5) v_1; (v2) at (3,1) v_2; (v3) at (3,-1) …; (v4) at (1.5,-1.5) v_n-1; (v5) at (0,0) v_n; [-, bend left=10] (v1) edge (v2); [-, bend right=10] (v1) edge (v2); [-, bend left=10] (v1) edge (v3); [-, bend right=10] (v1) edge (v3); [-, bend left=10] (v1) edge (v4); [-, bend right=10] (v1) edge (v4); [-, bend left=10] (v1) edge (v5); [-, bend right=10] (v1) edge (v5); [-, bend left=10] (v2) edge (v3); [-, bend right=10] (v2) edge (v3); [-, bend left=10] (v2) edge (v4); [-, bend right=10] (v2) edge (v4); [-, bend left=10] (v2) edge (v5); [-, bend right=10] (v2) edge (v5); [-, bend left=10] (v3) edge (v4); [-, bend right=10] (v3) edge (v4); [-, bend left=10] (v3) edge (v5); [-, bend right=10] (v3) edge (v5); [-, bend left=10] (v4) edge (v5); [-, bend right=10] (v4) edge (v5); h]c|c|c Hypergraph Definition C^*-Algebra (v0) at (0,0) ; (v0) circle[radius=3pt]; [ > = stealth, auto, thick] (v0) at (0,0) ; (v0) circle[radius=3pt]; [->,in=-50,out=50,loop,scale=3] (v0) edge (v0); (e0) at (1.1,0) e; (6cm,2.5cm)s(e)=v, r(e)=v (3cm,2.5cm)C(𝕋) [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->] (v0) edge (v1); (e0) at (1,0.25) e; (6cm,0.5cm)s(e)=v, r(e)=w (3cm,0.5cm)M_2() [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->, bend right=30] (v0) edge (v1); (e0) at (1,0.6) e; [->, bend left=30] (v0) edge (v1); (e0) at (1,-0.6) f; (6cm,1.5cm)4cms(e)=v, r(e)=w s(f)=v, r(f)=w (3cm,1.5cm)M_2() [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->] (v1) edge (v0); (e0) at (1,0.25) e; [->,in=-50,out=50,loop,scale=3] (v1) edge (v1); (e0) at (3.1,0) f; (6cm,3cm)4cms(e)=w, r(e)=v s(f)=w, r(f)=w (3cm,3cm)τ [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->] (v1) edge (v0); (e0) at (1,0.25) e; [->,in=-50,out=50,loop,scale=3] (v1) edge (v1); (e0) at (3.1,0) e; (6cm,3cm)s(e)={w}, r(e)={v,w} (3cm,3cm)τ [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->] (v0) edge (v1); (e0) at (1,0.25) e; [->,in=-50,out=50,loop,scale=3] (v1) edge (v1); (e0) at (3.1,0) e; (6cm,3cm)s(e)={v,w}, r(e)={w} (3cm,3cm)τ [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->, bend left=30] (v0) edge (v1); (e0) at (1,0.6) e; [->, bend left=30] (v1) edge (v0); (e0) at (1,-0.6) f; (6cm,1.5cm)4cms(e)=v, r(e)=w s(f)=w, r(f)=v (7cm,1.5cm)M_2(C())≅ C^*(u,p | up=(1-p)u) [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->, bend left=30] (v0) edge (v1); (e0) at (1,0.6) e; [->, bend left=30] (v1) edge (v0); (e0) at (1,-0.6) e; (6cm,1.5cm)s(e)={v,w}, r(e)={v,w} (3cm,1.5cm)C()*^2 [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->, bend left=30] (v0) edge (v1); (e0) at (1,0.6) e; [->, bend left=30] (v1) edge (v0); (e0) at (1,-0.6) f; [->,in=-50,out=50,loop,scale=3] (v1) edge (v1); (e0) at (3.1,0) e; [->,in=120,out=-120,loop,scale=3] (v0) edge (v0); (e1) at (-1.1,0) f; (6cm,3cm)5cms(e)={v,w}, r(e)={w} s(f)={v,w}, r(f)={v} (3cm,3cm)C()*^2 h]c|c|c [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [->, bend left=30] (v0) edge (v1); (e0) at (1,0.6) f; [->, bend left=30] (v1) edge (v0); (e0) at (1,-0.6) e; [->,in=-50,out=50,loop,scale=3] (v1) edge (v1); (e0) at (3.1,0) e; [->,in=120,out=-120,loop,scale=3] (v0) edge (v0); (e1) at (-1.1,0) f; (6cm,3cm)5cms(e)={w}, r(e)={v,w} s(f)={v}, r(f)={v,w} (3cm,3cm)O_2 [ > = stealth, auto, thick] (v0) at (0,0) v; (v1) at (2,0) w; [<->, bend left=30] (v0) edge (v1); (e0) at (1,0.6) e; [<->, bend 
left=30] (v1) edge (v0); (e0) at (1,-0.6) f; (6cm,2cm)6cms(e)={v,w}, r(e)={v,w} s(f)={v,w}, r(f)={v,w} (3cm,2cm)O_2*^2 § APPENDIX B — LIST OF NON-AMENABLE HYPERGRAPHS In the following we list a bunch of non-amenable hypergraphs. Since we have to ensure that the remaining quotient is non-nuclear, n must be chosen sufficiently large. The crucial non-nuclear part of the hypergraph is colored blue. [h]p0.35p0.4 (B.1)V:={v_1,… v_n}E:={f} s(f):={v_1,…, v_n}r(f):={v_1,…, v_n} 8cm [ > = stealth, auto, thick, scale=0.8] (v1) at (1.5,1.5) v_1; (v2) at (3,1) v_2; (v3) at (3,-1) …; (v4) at (1.5,-1.5) v_n-1; (v5) at (0,0) v_n; [-] (v1) edge (v2); [-] (v1) edge (v3); [-] (v1) edge (v4); [-] (v1) edge (v5); [-] (v2) edge (v3); [-] (v2) edge (v4); [-] (v2) edge (v5); [-] (v3) edge (v4); [-] (v3) edge (v5); [-] (v4) edge (v5); (B.2)V:={w, v_1,… v_n}E:={e, f}s(e):={w}r(e):={v_n}s(f):={v_1,…, v_n}r(f):={v_1,…, v_n} 8cm [ > = stealth, auto, thick, scale=0.8] (v0) at (0,0) w; (v1) at (3.5,1.5) v_1; (v2) at (5,1) v_2; (v3) at (5,-1) …; (v4) at (3.5,-1.5) v_n-1; (v5) at (2,0) v_n; [->] (v0) edge (v5); (e0) at (1,0.25) e; [-, blue] (v1) edge (v2); [-, blue] (v1) edge (v3); [-, blue] (v1) edge (v4); [-] (v1) edge (v5); [-, blue] (v2) edge (v3); [-, blue] (v2) edge (v4); [-] (v2) edge (v5); [-, blue] (v3) edge (v4); [-] (v3) edge (v5); [-] (v4) edge (v5); (B.3)V:={w_1, w_2, v_1,… v_n}E:={e, f}s(e):={w_1, w_2}r(e):={v_n}s(f):={v_1,…, v_n}r(f):={v_1,…, v_n} 8cm [ > = stealth, auto, thick, scale=0.8] (w1) at (0,1) w_1; (w2) at (0,-1) w_2; (v1) at (3.5,1.5) v_1; (v2) at (5,1) v_2; (v3) at (5,-1) …; (v4) at (3.5,-1.5) v_n-1; (v5) at (2,0) v_n; [->] (w1) edge (v5); (e0) at (1,0.7) e; [->] (w2) edge (v5); (e0) at (1,-0.7) e; [-, blue] (v1) edge (v2); [-, blue] (v1) edge (v3); [-, blue] (v1) edge (v4); [-] (v1) edge (v5); [-, blue] (v2) edge (v3); [-, blue] (v2) edge (v4); [-] (v2) edge (v5); [-, blue] (v3) edge (v4); [-] (v3) edge (v5); [-] (v4) edge (v5); (B.4)V:={w, v_1,… v_n}E:={e, f}s(e):={w}r(e):={v_n-1, v_n}s(f):={v_1,…, v_n}r(f):={v_1,…, v_n} 8cm [ > = stealth, auto, thick, scale=0.8] (w1) at (1,-1.5) w; (v1) at (3.5,1.5) v_1; (v2) at (5,1) v_2; (v3) at (5,-1) …; (v4) at (3.5,-1.5) v_n-1; (v5) at (2,0) v_n; [->] (w1) edge (v5); (e0) at (1.2,-0.6) e; [->] (w1) edge (v4); (e0) at (2,-1.75) e; [-, blue] (v1) edge (v2); [-, blue] (v1) edge (v3); [-] (v1) edge (v4); [-] (v1) edge (v5); [-, blue] (v2) edge (v3); [-] (v2) edge (v4); [-] (v2) edge (v5); [-] (v3) edge (v4); [-] (v3) edge (v5); [-] (v4) edge (v5); (B.5)V:={w_1, w_2, v_1,… v_n}E:={e_1, e_2, f}s(e_1):={w, v_n-1}r(e_1):={v_n-1, v_n}s(e_2):={w, v_n-1}r(e_2):={v_n-1, v_n}s(f):={v_1,…, v_n}r(f):={v_1,…, v_n} 8cm [ > = stealth, auto, thick, scale=0.8] (w1) at (0,0) w_1; (w2) at (3.5,-3.5) w_2; (v1) at (3.5,1.5) v_1; (v2) at (5,1) v_2; (v3) at (5,-1) …; (v4) at (3.5,-1.5) v_n-1; (v5) at (2,0) v_n; [->] (w1) edge (v5); (e0) at (1,0.25) e_1; [->] (w2) edge (v4); (e0) at (3.8,-2.5) e_2; [-, blue] (v1) edge (v2); [-, blue] (v1) edge (v3); [-] (v1) edge (v4); [-] (v1) edge (v5); [-, blue] (v2) edge (v3); [-] (v2) edge (v4); [-] (v2) edge (v5); [-] (v3) edge (v4); [-] (v3) edge (v5); [-] (v4) edge (v5); (B.6)V:={v_1,… v_n}E:={e, f}s(e):={v_n-1}r(e):={v_n}s(f):={v_1,…, v_n}r(f):={v_1,…, v_n} 8cm [ > = stealth, auto, thick, scale=0.8] (v1) at (3.5,1.5) v_1; (v2) at (5,1) v_2; (v3) at (5,-1) …; (v4) at (3.5,-1.5) v_n-1; (v5) at (2,0) v_n; [->, bend left=30] (v4) edge (v5); (e0) at (2,-1) e; [-, blue] (v1) edge (v2); [-, blue] (v1) edge (v3); [-] (v1) edge (v4); [-] (v1) edge 
(v5); [-, blue] (v2) edge (v3); [-] (v2) edge (v4); [-] (v2) edge (v5); [-] (v3) edge (v4); [-] (v3) edge (v5); [-] (v4) edge (v5); (B.7)V:={w, v_1,… v_n}E:={e, f}s(e):={w, v_n-1}r(e):={v_n}s(f):={v_1,…, v_n}r(f):={v_1,…, v_n} 8cm [ > = stealth, auto, thick, scale=0.8] (w1) at (0,0) w; (v1) at (3.5,1.5) v_1; (v2) at (5,1) v_2; (v3) at (5,-1) …; (v4) at (3.5,-1.5) v_n-1; (v5) at (2,0) v_n; [->] (w1) edge (v5); (e0) at (1,0.25) e; [->, bend left=30] (v4) edge (v5); (e0) at (1.9,-1) e; [-, blue] (v1) edge (v2); [-, blue] (v1) edge (v3); [-] (v1) edge (v4); [-] (v1) edge (v5); [-, blue] (v2) edge (v3); [-] (v2) edge (v4); [-] (v2) edge (v5); [-] (v3) edge (v4); [-] (v3) edge (v5); [-] (v4) edge (v5); (B.8)V:={w, v_1,… v_n}E:={e, f}s(e):={w, v_n-1}r(e):={v_n-1, v_n}s(f):={v_1,…, v_n}r(f):={v_1,…, v_n} 8cm [ > = stealth, auto, thick, scale=0.8] (w1) at (0,0) w; (v1) at (3.5,1.5) v_1; (v2) at (5,1) v_2; (v3) at (5,-1) …; (v4) at (3.5,-1.5) v_n-1; (v5) at (2,0) v_n; [->] (w1) edge (v5); (e0) at (1,0.25) e; [->] (w1) edge (3.1,-1.4); (e0) at (1.5,-0.9) e; [->, bend left=20] (v4) edge (v5); (e0) at (2.1,-0.7) e; [->,in=230,out=310,loop,scale=1] (v4) edge (v4); (e0) at (3.5,-3) e; [-, blue] (v1) edge (v2); [-, blue] (v1) edge (v3); [-] (v1) edge (v4); [-] (v1) edge (v5); [-, blue] (v2) edge (v3); [-] (v2) edge (v4); [-] (v2) edge (v5); [-] (v3) edge (v4); [-] (v3) edge (v5); [-] (v4) edge (v5); (B.9)V:={w, v_1,… v_n}E:={e, f}s(e):={w}r(e):={w}s(f):={w, v_1,…, v_n}r(f):={v_1,…, v_n} 8cm [ > = stealth, auto, thick, scale=0.8] (v0) at (1.5,0) w; [->,in=150,out=210,loop,scale=3] (v0) edge (v0); (e0) at (0,0) e; (v1) at (5.5,1.5) v_1; (v2) at (7,1) v_2; (v3) at (7,-1) …; (v4) at (5.5,-1.5) v_n-1; (v5) at (4,0) v_n; [-, blue] (v1) edge (v2); [-, blue] (v1) edge (v3); [-, blue] (v1) edge (v4); [-, blue] (v1) edge (v5); [-, blue] (v2) edge (v3); [-, blue] (v2) edge (v4); [-, blue] (v2) edge (v5); [-, blue] (v3) edge (v4); [-, blue] (v3) edge (v5); [-, blue] (v4) edge (v5); [->] (v0) edge (v1); [->] (v0) edge (v2); [->] (v0) edge (v3); [->] (v0) edge (v4); [->] (v0) edge (v5); (B.10)V:={w, v_1,… v_n}E:={e, f}s(e):={w, v_n}r(e_1):={w}s(f):={w, v_1,…, v_n}r(f):={v_1,…, v_n} 8cm [ > = stealth, auto, thick, scale=0.8] (v0) at (1.5,0) w; [->,in=150,out=210,loop,scale=3] (v0) edge (v0); (e0) at (0,0) e; (v1) at (5.5,1.5) v_1; (v2) at (7,1) v_2; (v3) at (7,-1) …; (v4) at (5.5,-1.5) v_n-1; (v5) at (4,0) v_n; [-, blue] (v1) edge (v2); [-, blue] (v1) edge (v3); [-, blue] (v1) edge (v4); [-] (v1) edge (v5); [-, blue] (v2) edge (v3); [-, blue] (v2) edge (v4); [-] (v2) edge (v5); [-, blue] (v3) edge (v4); [-] (v3) edge (v5); [-] (v4) edge (v5); [->] (v5) edge (v0); (B.11)V:={w, v_1,… v_n}E:={e_1, e_2, e_3, f}s(e_1):={w}r(e_1):={w}s(e_2):={w}r(e_2):={v_n}s(e_3):={v_n}r(e_3):={w}s(f):={w, v_1,…, v_n}r(f):={v_1,…, v_n} 8cm [ > = stealth, auto, thick, scale=0.8] (v0) at (1.5,0) w; [->,in=150,out=210,loop,scale=3] (v0) edge (v0); (e0) at (-0.2,0) e_1; (v1) at (5.5,1.5) v_1; (v2) at (7,1) v_2; (v3) at (7,-1) …; (v4) at (5.5,-1.5) v_n-1; (v5) at (4,0) v_n; [-, blue] (v1) edge (v2); [-, blue] (v1) edge (v3); [-, blue] (v1) edge (v4); [-] (v1) edge (v5); [-, blue] (v2) edge (v3); [-, blue] (v2) edge (v4); [-] (v2) edge (v5); [-, blue] (v3) edge (v4); [-] (v3) edge (v5); [-] (v4) edge (v5); [->, bend left=20] (v5) edge (v0); (e0) at (2.7,0.5) e_2; [->, bend left=20] (v0) edge (v5); (e0) at (2.7,-0.5) e_3;
http://arxiv.org/abs/2405.10029v2
20240516121159
AsCL: An Asymmetry-sensitive Contrastive Learning Method for Image-Text Retrieval with Cross-Modal Fusion
[ "Ziyu Gong", "Chengcheng Mai", "Yihua Huang" ]
cs.MM
[ "cs.MM" ]
AsCL: An Asymmetry-sensitive Contrastive Learning Method for Image-Text Retrieval with Cross-Modal Fusion 1st Ziyu Gong State Key Laboratory for Novel Software Technology Nanjing University Nanjing, China ziyugong@smail.nju.edu.cn 2nd Chengcheng Mai School of Computer Science and Electronic Information/ School of Artifical Intelligence Nanjing Normal University Nanjing, China maicc@njnu.edu.cn 3rd Yihua Huang* State Key Laboratory for Novel Software Technology Nanjing University Nanjing, China yhuang@nju.edu.cn May 20, 2024 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================== The image-text retrieval task aims to retrieve relevant information from a given image or text. The main challenge is to unify multimodal representation and distinguish fine-grained differences across modalities, thereby finding similar contents and filtering irrelevant contents. However, existing methods mainly focus on unified semantic representation and concept alignment for multi-modalities, while the fine-grained differences across modalities have rarely been studied before, making it difficult to solve the information asymmetry problem. In this paper, we propose a novel asymmetry-sensitive contrastive learning method. By generating corresponding positive and negative samples for different asymmetry types, our method can simultaneously ensure fine-grained semantic differentiation and unified semantic representation between multi-modalities. Additionally, a hierarchical cross-modal fusion method is proposed, which integrates global and local-level features through a multimodal attention mechanism to achieve concept alignment. Extensive experiments performed on MSCOCO and Flickr30K, demonstrate the effectiveness and superiority of our proposed method. Image-text Retrieval, Information Asymmetry, Contrastive Learning, Cross-modal Fusion § INTRODUCTION Image-text retrieval aims to search for relevant text based on visual queries and vice versa. Early efforts contribute significantly to the learning of unified multimodal representation and visual-textual concept alignment, but it still faces the following challenge: the existing work rarely considers the problem of cross-modal information asymmetry, which neglects the fine-grained differences between modalities. The information asymmetry problem in the multimodal retrieval task refers to the unequal information capacity between different modalities, i.e., when describing the same scene, one modality may contain more or less information than the other modality. Image modality objectively describes the scene based on pixels, while text modality describes the scene based on characters or words. It should be noted that different modalities have different emphases on the description of a certain scene. For a certain image, the paired text may be partially similar to the image and partially different from the image. The information asymmetry problem aggravates the difficulty of the image-text retrieval task. As shown in Figure <ref>, for a given paired image-text pair, (img, txt), we subdivided information asymmetry into three types. 
(1) Asymmetry-1: The text contains redundant information that does not exist in the image. For variant txt_1, although txt_1 contains most of the objects corresponding to image img, such as “people”, “table”, and “cup”, txt_1 mistakenly contains “pizza” that is not mentioned in image. Therefore, txt_1 should be considered a negative of img. However, existing models would mistakenly retrieve txt_1 as a positive because it contains most of the information in the image. (2) Asymmetry-2: Compared to the original paired text, the variant text contains more relevant information belonging to the corresponding image. For variant txt_2, it is a more informative illustration of img, which contains more details about clothing and location without irrelevant content, and should be regarded as a positive of img. (3) Asymmetry-3: Compared to the original paired text, the variant text discards partial information contained in the given image, but still conforms to the description of the image. For variant txt_3, although the word “drink” has been deleted, it should still be considered a positive of img as it fits the description of img. However, existing models may mistakenly identify it as a negative or lower its retrieval ranking because the proportion of overlapping concepts between txt_3 and img has decreased. To address the above issues, a novel asymmetry-sensitive contrastive learning method was presented. For each fine-grained information asymmetry type, we generated corresponding positive or negative textual samples with both partially similar parts and subtly different parts, which can be leveraged in the optimization of contrastive learning and acquire more discriminative multimodal semantic representations. In particular, for Asymmetry-1, we generated negative samples by adding noise information into the embedding layer to fully enhance the diversity of the generated negatives. For Asymmetry-2 and Asymmetry-3, we generated positive samples by concatenating and truncating keywords on the original input statements, respectively, in order to enrich the diversity of semantic descriptions for the same scenario. Our method enhanced the sensitivity of the contrastive learning algorithm to the fine-grained information asymmetry across modalities by increasing the diversity of positive and negative samples, and provided semantic representation support for the subsequent multimodal retrieval task based on semantic similarity. In addition, in order to capture more sophisticated correlations across modalities, a hierarchical cross-modal fusion has also been proposed, for achieving multimodal concept alignment at both image-text and region-word levels, through a cross-modal attention mechanism. The major contributions can be concluded as follows: (1) An asymmetry-sensitive contrastive learning method was proposed to solve the fine-grained information asymmetry problem between images and texts, where corresponding positives and negatives for each asymmetry type are generated to achieve unified semantic representation for better cross-modality retrieval based on semantic similarity. (2) We also presented an image-text feature fusion and semantic alignment method based on a cross-modal attention mechanism for high-quality modality interaction, from both global and local perspectives. (3) Extensive experiments verified that our method outperformed the state-of-the-art baselines. 
§ OUR ASCL METHODOLOGY §.§ Multi-modal Feature Representation As shown in Figure <ref>, for each input image, we extracted K region features with the pretrained Faster-RCNN model and then projected them into a D-dimensional space, denoted as V = [v_1,v_2,...,v_K ] ∈ℝ^K*D. Meanwhile, we acquired the global feature with the pretrained ResNet152 model and then transformed it into a D-dimensional vector, i.e., G∈ℝ^D. For each input text, we utilized BERT to obtain word embeddings, W = [ w_1, w_2,...,w_L ] ∈ℝ^L*D, where L is the number of words and D is the dimension of word features. §.§ Information Asymmetry and Sample Generation Information asymmetry quantifies the differences between the sets of image and text information. According to fine-grained information asymmetries, we generated different samples. Asymmetry-1: The text contains more redundant information that does not belong to the corresponding image. We used four methods to generate diverse negative samples by adding noise information to the textual embedding layers: (1) Gaussian Noise: Gaussian noise is random variation that follows a Gaussian (normal) distribution. We directly attached it to W. (2) Token Shuffling: The order of tokens in W was randomly altered. (3) Token Cutoff & Feature Cutoff: We assigned zero to a row (token) or a column (feature) of W. (4) Dropout: We randomly discarded elements in W based on a specific probability. Asymmetry-2: The text contains more relevant information that belongs to the corresponding image. For each image in the original dataset, there are five corresponding captions sharing similar concepts. We generated a long positive sentence by randomly selecting two related sentences and concatenating them into the long sentence. Asymmetry-3: The text discards partial relevant information that belongs to the corresponding image. We truncated the original text to generate a short but positive sentence, which increases the diversity of positives. §.§ Cross-modal Fusion Local Region-Word Fusion. Due to the bidirectional requirement of image-text retrieval, we defined X = [x_1,x_2,...,x_M] ∈ℝ^M*D and Y = [y_1,y_2,...,y_N] ∈ℝ^N*D to denote the given query (image or text) and the retrieved results (text or image), respectively, where x_i∈ℝ^D and y_i∈ℝ^D are represented as region-level or word-level features. In this work, we implemented two single symmetrical versions of formula. Hence, we set X := V(M:=K) and Y := W(N:=L) for the image-to-text (I2T) version, and X := W(M:=L) and Y := V(N:=K) for the text-to-image (T2I) version. “:=” is an assignment operator. V∈ℝ^K*D, W∈ℝ^L*D are defined in Section 2.1, and will be assigned to X or Y in different retrieval versions. Instead of treating all region-word pairs equally, a multi-head cross-modal attention mechanism was adopted to allocate weights to regions and words based on their contribution. Y^* =MultiHead(Query:=X,Key:=Y,Value:=Y) =Concat(head_1,...,head_i,...,head_H)Z^O where Y^*∈ℝ^M*D, Z^O ∈ℝ^D*D, H is the number of attention heads, and Concat(·) represents the concatenation operation across the feature dimension D. In this work, head_i = Att(Query_i:=XZ_i^X, Key_i:= YZ_i^Y,Value_i:=YZ_i^Y), where Att refers to the scaled dot-product attention, and Z_i^X,Z_i^Y∈ℝ^D*D/H. For I2T, the output Y^* denotes the region-attended text representations W^* = [w^*_1,w^*_2,...,w^*_K] ∈ℝ^K*D for each region, while for T2I, it refers to the word-attended image representations V^* =[v^*_1,v^*_2,...,v^*_L] ∈ℝ^L*D for each word. 
Here, w^*_i ∈ℝ^D represents the region-attended text representation for the i-th region, and v^*_i ∈ℝ^D represents the word-attended image representation for the i-th word. Global Image-Text Fusion. To further capture complex cross-modal correlation, we jointly mapped holistic images and sentences into a common space for global semantic consistence and heterogeneity minimization. To be specific, for the region-attended text representations W^* obtained in the process of local fusion, we first computed its average vector(denoted as W^* = ∑_i=0^Kw_i^*/K), and then projected it into a common space. The output result is W_g=X_w^T W^*, which serves as the final global representation of text, and X_w is a learnable embedding matrix. Similarly, we separately projected the average vector of V^* (represented as V^*= ∑_i=0^Lv_i^*/L) and the feature vector of the entire image (represented as G) into the common embedding space as V_g_1 = X_v^T V^* and V_g_2 = X_g^T G. X_v and X_g are all learnable embedding matrixes. The final global representation of image can be fused as V_g = Fusion(V_g_1,V_g_2). §.§ AsCL for I-T Matching The matching score between image I and text T consists of two components: local similarity score and global similarity score. According to the region-attended text representations W^* and the word-attended image representations V^*, the local matching score is formulated as, S_local(I,T) = ∑_i=1^k R( v_i, w_i^*)/2K + ∑_j=1^LR( w_j, v_j^*)/2L , where R( x, y ) = x^Ty/||x|| · ||y||. Meanwhile, the global matching score between image I and text T is represented as: S_global(I,T) = R(V_g,W_g). Based on the local and global matching scores, the similarity score between image I and text T can be defined as: S(I,T)=u_1 · S_local(I,T) + (1-u_1) · S_global(I,T) where u_1 is a hyper-parameter. On this basis, we proposed a novel asymmetry-sensitive contrastive learning method by exploiting the generated positives and negatives according to our defined asymmetry types. I-T Matching for Asymmetry-1. During training, there are N image-text pairs in a batch. According to our defined Asymmetry-1, we generated N negative sentences by adding noise. For each positive pair (I, T), we retrieved N-1 in-batch negative images {I_n}_n=1^N-1, N-1 in-batch negative sentences {T_n}_n=1^N-1, and especially N generated negative sentences {T^-_n}_n=1^N for Asymmetry-1. Therefore, the objective function for Asymmetry-1 can be formulated as follows: L_1(I,T) = e^S(I,T)/τ/e^S(I,T)/τ+∑_n=1^N-1e^S(I_n,T)/τ + e^S(I,T)/τ/e^S(I,T)/τ+∑_n=1^N-1e^S(I,T_n)/τ+ ∑_n=1^Nα_n· e^S(I,T^-_n)/τ where α_n is set to zero if the similarity score of the image-text pair exceeds the positive pair, otherwise we set α_n to 1. I-T Matching for Asymmetry-2&3. For each positive image-text pair (I, T), we generated the content-variant long sentence or short sentence as another positive sample, T^+, according to our defined Asymmetry-2 and Asymmetry-3. Here, (I,T^+) is a newly generated positive pair, while T^+ is also considered as a negative sample for other images in the same batch. Likewise, we added noise information to textual representations of these generated positives as additional negative samples. Consequently, there exist N-1 in-batch negative images {I_n}_n=1^N-1, N-1 in-batch negative sentences {T^+_n}_n=1^N-1, and N generated negative sentences {T^-_n}_n=1^N towards every positive pair (I,T^+). For Asymmetry-2 & 3, T^+ plays a similar role in Equation <ref> as T in Equation <ref>. 
L_2&3(I,T^+) = e^S(I,T^+)/τ/e^S(I,T^+)/τ+∑_n=1^N-1e^S(I_n,T^+)/τ +e^S(I,T^+)/τ/e^S(I,T^+)/τ+∑_n=1^N-1e^S(I,T^+_n)/τ+∑_n=1^Nα_n· e^S(I,T^-_n)/τ Overall Training. The overall training of AsCL for all types of information asymmetry is shown as follows: L_AsCL(I,T) = 1/2 L_1(I,T) + 1/2L_2&3(I,T^+) § EXPERIMENTS AND ANALYSES §.§ Experimental Settings Dataset. We conducted experiments on two benchmark datasets: MSCOCO <cit.> and Flickr30K <cit.>. (1) MSCOCO: It consists of a train split of 113,287 images, a validation split of 5,000 images, and a test split of 5,000 images. Each image is annotated with 5 sentences. We adopted the evaluation setting MSCOCO (5K), i.e., directly testing on the full 5K images. (2) Flickr30K: It contains 31,783 images, each with 5 corresponding sentences. It was split into 29,783 training images, 1,000 validation images and 1,000 testing images. Implementation Details. We adopted R@K(K=1,5,10) to evaluate performance, which measures the percentage of ground truth being retrieved among top-K results. Higher R@K indicates better performance. We optimized AsCL on one NVIDIA Tesla A100 using PyTorch library. The Adam optimizer was employed with a batch size 64 and 20 epochs. We set the dimension of joint embedding space D to 1024, the number of regions K to 36. For MSCOCO, the learning rate was set to 5e-4 with decaying 10% of every 10 epochs, τ was set to 0.05 and u_1 was set to 0.8. For Flickr30K, the learning rate was set to 2e-4 at first and declined by ten times every 10 epochs, τ was set to 0.01 and u_1 was set to 0.6. §.§ Image-Text Retrieval Results As shown in Table <ref>, our method outperformed existing state-of-art baselines across two benchmark datasets for the image-text retrieval task. For MSCOCO (5K), compared with LexLIP <cit.>, our model achieved a huge improvement of 24.6% (94.8% vs. 70.2%) in terms of I2T/R@1 and 13.5% (66.7% vs. 53.2%) in terms of T2I/R@1. For Flickr30K (1K), our model outperformed LexLIP by 7.7% (99.1% vs. 91.4%) on I2T/R@1 and 4.6% (83.0% vs. 78.4%) on T2I/R@1. Overall, according to fine-grained information asymmetry types, AsCL exhibits great effectiveness and superiority for the image-text retrieval task by leveraging corresponding generated samples. §.§ Ablation Experiments and Results Different Samples. We compared three variant models with different generated samples according to different asymmetry types: (1) “-w/o Pos.”: We only generated negative sentences for Asymmetry-1. (2) “-w/o Neg.”: We only generated positive sentences for Asymmetry-2 and Asymmetry-3. (3) “-w/o P&N”: We did not generate additional samples. As shown in Table <ref>, after removing generated positives, generated negatives and all generated samples, the results on Flickr30K (1K) decreased gradually, from 99.1% to 98.8% on I2T/R@1, from 99.8% to 99.4% on I2T/R@5, from 83.0% to 76.4% on T2I/R@1 and from 95.4% to 91.5% on T2I/R@5. Similar results can be observed on the MSCOCO (5K). The performance degradation verified the necessity of generated samples instantiated from different asymmetry types. We delved into more fine-grained sample generation strategies for each asymmetry type. The generation of negatives employs five noise addition strategies for Asymmetry-1, including “Shuffle”, “Dropout”, “Gaussian”, “Cutoff” and “Mixture”. The generation of positives involves two strategies, concatenation for Asymmetry-2 and truncation for Asymmetry-3. 
As illustrated in Figure <ref>, we found that: (1) The short text for Asymmetry-3 outperformed the generated long text for Asymmetry-2. We argued that although the overlapping concepts between the short text and the given image have been reduced, the short text for Asymmetry-3 should still be considered a positive sample, which helps provide more diverse perspectives for learning robust semantic representations. (2) The Rsum values of “Shuffle”, “Dropout”, “Gaussian”, and “Cutoff” were all lower than that of “Mixture”. We supposed that “Mixture” enriches the diversity of negative textual descriptions for Asymmetry-1, thereby improving retrieval performance. Rsum is the sum of R@1+R@5+R@10 in both I2T and T2I; higher Rsum means better performance. Cross-modal Fusion. After removing cross-modal fusion (denoted as “-w/o MF” in Table <ref>), all metrics on both datasets decreased dramatically, indicating that the hierarchical modal fusion via the multi-head attention mechanism helps discover latent alignment from both global and local perspectives. Objective Function. Compared to AsCL, the performance of the triplet loss (denoted as “-w triplet loss” in Table <ref>) declined, with I2T/R@1 dropping from 94.8 to 89.4 and T2I/R@1 from 66.7 to 58.6 on MSCOCO (5K), which verified the superiority of our proposed asymmetry-sensitive contrastive learning. §.§ Alignment and Uniformity Experiments and Results Alignment and uniformity are two key properties related to the quality of contrastive learning, which play an important role in the task of image-text retrieval. Alignment Evaluation. Alignment prefers a closer distance between positive pairs. Positive pairs consist of two parts: inter-modal image-text pairs and intra-modal text-text pairs. We separately calculated the average Euclidean distance between positive pairs in MSCOCO (5K) under our model AsCL and the ablated model AsCL_-w/o P&N. As shown in Figure <ref>, compared with representations learned from model AsCL_-w/o P&N, both the average distance between positive image-text (I-T) pairs and the average distance between positive text-text (T-T) pairs learned from AsCL decreased. These results indicated that our method acquired high-quality semantic representations, thereby achieving better inter-modality and intra-modality semantic alignment. Uniformity Evaluation. Uniformity favors a uniform distribution of feature embeddings on a hypersphere. We calculated the average Euclidean distance among all images and among all texts with different semantics in MSCOCO (5K), respectively. In Figure <ref>, compared with model AsCL_-w/o P&N, the average spacing among different texts under AsCL enlarged (1.393 vs. 1.391), along with a drop in variance (0.032 vs. 0.039). The average distance and variance between different images exhibited a similar pattern. This indicated that representations learned from AsCL are more widely and evenly distributed, which verified better uniformity. §.§ Impact on Text Queries with Different Lengths We further conducted the text-to-image (T2I) task with textual queries of different lengths. We sampled 1,000 sentences from MSCOCO and Flickr30K, respectively. As shown in Figure <ref>, for short textual queries (especially those with fewer than 10 words), AsCL yielded better image retrieval results. One possible explanation is that we generated short sentences as positive samples based on Asymmetry-3 during training, which makes AsCL more sensitive and effective for short sentences.
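For reference, the alignment and uniformity diagnostics used in this section reduce to average Euclidean distances between embeddings. The following minimal NumPy sketch illustrates one way to compute them; the array names, shapes, and synthetic inputs are illustrative assumptions on our part, not the evaluation code behind the reported numbers.

```python
import numpy as np

def alignment_score(img_emb, txt_emb):
    """Average Euclidean distance between matched (positive) image-text pairs.

    img_emb, txt_emb: arrays of shape (num_pairs, dim), where row i of each
    array belongs to the same positive pair. Lower values indicate better
    cross-modal alignment.
    """
    return np.linalg.norm(img_emb - txt_emb, axis=1).mean()

def uniformity_stats(emb):
    """Mean and variance of pairwise Euclidean distances among embeddings.

    A larger mean with a smaller variance suggests representations that are
    spread more widely and evenly over the embedding space.
    """
    sq = (emb ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * emb @ emb.T   # squared distances
    d = np.sqrt(np.clip(d2, 0.0, None))                  # guard tiny negatives
    iu = np.triu_indices(len(emb), k=1)                  # unique pairs only
    return d[iu].mean(), d[iu].var()

# Toy example with random embeddings standing in for learned features.
rng = np.random.default_rng(0)
img = rng.normal(size=(1000, 1024))
txt = img + 0.1 * rng.normal(size=(1000, 1024))          # loosely "aligned" texts
print("alignment (I-T):", alignment_score(img, txt))
print("text uniformity (mean, var):", uniformity_stats(txt))
```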
§ RELATED WORK Previous work on image-text retrieval has mainly focused on unified representation and cross-modal interaction. These methods can be categorized into two types: (1) dual encoder <cit.>, where images and texts are separately encoded into a common embedding space by two individual encoders; (2) cross encoder <cit.>, where images and texts are jointly encoded by a cross encoder architecture, incorporating two heterogeneous modalities into unified forms. For the image-text retrieval task, most recent approaches <cit.> employ contrastive loss with hard negatives, i.e., samples that are more difficult to distinguish. The optimization process of contrastive loss with hard negatives increases the similarity score of positive image-text pairs while decreasing that of negative pairs, thereby achieving better retrieval performance. Hence, selection strategies for informative samples have been extensively explored. Early works <cit.> randomly chose negatives from the original dataset for training. Subsequently, researchers have used generation methods to incorporate more difficult negatives into contrastive learning. In the UNITER+DG <cit.> framework, hard negative sentences were sampled based on structure relevance by using a denotation graph. Fan et al. <cit.> proposed TAGS-DC to generate synthetic sentences automatically as negative samples. Radenovic et al. <cit.> presented an importance-sampling approach that reweighted negative samples within a batch, aiming to upsample harder negatives in proportion to their difficulty. Based on fine-grained information asymmetry types, our paper proposed an asymmetry-sensitive contrastive learning method with generated positives and negatives for the image-text retrieval task. § CONCLUSION In this paper, we presented a novel asymmetry-sensitive contrastive learning method. Concretely, in order to address fine-grained information asymmetry issues, we generated corresponding positives and negatives for each asymmetry type, which are fully utilized in the optimization of contrastive learning. Our approach enhances sensitivity to subtle differences between two heterogeneous modalities and achieves more discriminative multimodal semantic representations for the subsequent image-text retrieval task. Moreover, from both local and global perspectives, a hierarchical cross-modal fusion module was proposed to capture sophisticated correspondence between visual and semantic modalities through the multimodal attention mechanism. Experimental results on two widely used datasets, i.e., MSCOCO and Flickr30K, have verified that our method outperforms previous state-of-the-art baselines. § ACKNOWLEDGMENT This work is supported by the National Natural Science Foundation of China (No. U1811461, 61572250), the Jiangsu Province Science & Technology Research Grant (BE2021729), the Key R&D Program Project of Nanjing Jiangbei New Area (ZDYF20200130), and the Collaborative Innovation Center of Novel Software Technology and Industrialization, Jiangsu, China. IEEEbib
http://arxiv.org/abs/2405.10114v1
20240516140547
Isospinning ${\mathbb C}P^2$ solitons
[ "Yuki Amari", "Sergei Antsipovich", "Muneto Nitta", "Yakov Shnir" ]
hep-th
[ "hep-th" ]
http://arxiv.org/abs/2405.09843v1
20240516064356
Organizational Selection of Innovation
[ "Lucas Böttcher", "Ronald Klingebiel" ]
econ.TH
[ "econ.TH", "cs.MA", "physics.soc-ph", "stat.AP" ]
Böttcher and Klingebiel Organizational Selection of Innovation Organizational Selection of Innovation Lucas Böttcher Dept. of Computational Science and Philosophy, Frankfurt School of Finance and Management, 60322 Frankfurt, Germany, and Dept. of Medicine, University of Florida, Gainesville 32610, FL, USA, l.boettcher@fs.de Ronald Klingebiel Frankfurt School of Finance and Management, 60322 Frankfurt, Germany Budgetary constraints force organizations to pursue only a subset of possible innovation projects. Identifying which subset is most promising is an error-prone exercise, and involving multiple decision makers may be prudent. This raises the question of how to most effectively aggregate their collective nous. Our model of organizational portfolio selection provides some first answers. We show that portfolio performance can vary widely. Delegating evaluation makes sense when organizations employ the relevant experts and can assign projects to them. In most other settings, aggregating the impressions of multiple agents leads to better performance than delegation. In particular, letting agents rank projects often outperforms alternative aggregation rules — including averaging agents' project scores as well as counting their approval votes — especially when organizations have tight budgets and can select only a few project alternatives out of many. organizational choice; decision aggregation; resource allocation; innovation portfolios Rethinking Multi-User Semantic Communications with Deep Generative Models Eleonora Grassucci, Jinho Choi, Fellow, IEEE, Jihong Park, Senior, IEEE, Riccardo F. Gramaccioni, Student, IEEE, Giordano Cicchetti, Student, IEEE, Danilo Comminiello, Senior, IEEEE. Grassucci, R. F. Gramaccioni, G. Cicchetti, and D. Comminiello are with the Dept. of Information Engineering, Electronics, and Telecommunications of Sapienza University of Rome, Italy. Emails: {eleonora.grassucci, riccardofosco.gramaccioni, giordano.cicchetti, danilo.comminiello}@uniroma1.it. J. Choi and J. Park are with are with the School of Information Technology, Deakin University, Australia. Emails: {jinho.choi, jihong.park}@deakin.edu.au. This work was partially supported by the European Union under the Italian National Recovery and Resilience Plan (PNRR) of NextGenerationEU, partnership on “Telecommunications of the Future” (PE00000001 - program RESTART). Received – April 2024 / Accepted — ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION We examine rules for aggregating individual project evaluations so as to make organizational portfolio selection decisions. Such resource-allocation decisions have specific characteristics <cit.>. 
They are made intermittently, with organizations considering multiple resource-allocation alternatives at a time. They are subject to budget constraints, limiting the number of alternatives an organization can pursue. And they are made under uncertainty, exposing organizations to allocation errors. Consider how organizations select from a slate of innovation ideas <cit.>. A group of executives with different backgrounds meet periodically to review funding proposals. The resources they are permitted to allocate suffice for only a fraction of the proposals before them. Many of the proposals are outside of executives' prior experience, leading to noisy assessments. They may thus consider delegating the portfolio-selection decisions to the relatively most qualified person, or to combine their limited expertise in various ways to arrive at better decisions. Which approach to arriving at an organizational portfolio will yield the best results in expectation? The study of organizational aggregation processes is a resurgent scholarly pursuit <cit.>. Of particular relevance for our study is the finding that aggregating project approval decisions through majority voting usually leads to better outcomes than attainable through averaging scores <cit.>. Delegation is effective when relevant executive expertise is available and evident, or when organizations struggle to afford coordinating among a wider set of decision makers <cit.>. We advance such insights into aggregation by studying portfolio selection. Choosing a subset of available project alternatives is different from project approval in two ways. First, portfolio selection requires decision makers to observe a budget constraint. Second, to identify the best subset of projects to be funded, portfolio selection involves discrimination. One implication of these unique features is a different performance dimension. What matters for portfolio selection is maximizing the expected performance of the projects chosen for funding, not the performance of every project approvable in isolation. Many of the latter may in fact not make the cut. Another implication is that portfolio selection, unlike isolated project approval, involves prioritization. Rank-order approaches discussed in the social-choice literature on multi-winner voting <cit.> thus become relevant. The mathematical model of portfolio selection we develop contains heterogeneously informed, non-strategic agents who are given the task of selecting from a list of independent project proposals. Different rules for aggregating agents' selection decisions produce variation in performance. As our model is intended as a first step for studying organizational decision making at the portfolio level, we leave potentially interesting richness such as project interdependence, decision-maker interaction, and strategic behavior to future research. We find that relying on individuals is almost always inferior to aggregating multiple opinions. Majority voting performs poorly when resource constraints require organizations to be highly selective. Averaging performs better but is often outdone by a simple process of having agents produce a ranked preference list. Totaling such ranks is the most performative method of aggregation in many scenarios, inferior to delegation only when firms know that they have the right experts for evaluation. 
The dominance of Ranking — based on an aggregation process known as Borda count <cit.> — is due to its robustness against project-quality misclassification that degrades the performance of other selection methods like Averaging more substantially. Organizational selection of innovation thus benefits from a relatively crude ordering process that differs from the voting procedures prior work would recommend for the isolated approval of individual projects. Our work provides insights into how organizations can harness collective decision-making processes to effectively allocate resources. In the attempt to understand the management of innovation portfolios — of patents, drug candidates, or technology ventures <cit.>, for example — empirical work honed in on group-decision biases such as those concerning novelty <cit.> or commitment <cit.>. Gauging the meaningfulness of selection biases requires outlining the performance that can be achieved in the absence of bias. Our work provides such expectations for multiple selection procedures. It offers a structured answer to organizations searching for and experimenting with different aggregation methods <cit.>. Our work thus opens up avenues for future research on the performance of decision-making structures, particularly as regards rules for aggregating selection under uncertainty. § INNOVATION PORTFOLIO SELECTION The starting point of our work is the model of <cit.>. They compare the performance of collective decisions — voting and averaging — with that of individuals — anyone and experts. For detail on the research preceding the Csaszar and Eggers model, we refer to the authors' comprehensive review of the field. Since then much work, mostly non-organizational, has focused on how weighted algorithms can improve crowd wisdom <cit.>. Csaszar and Eggers' work remains the most relevant baseline for our purposes, because it considers the organizational context of projects with uncertain payoffs[<cit.> model the isolated approval of uncertain projects. They examine the effect of preference diversity on the performance of majority voting. In our model, we account for more aggregation rules, but exclude strategic behavior.] and variously informed decision makers, central features of organizational reality and part of what motivates our research. We extend the Csaszar and Eggers model to the organizational selection of multiple project candidates, subject to resource constraints. Concurrent consideration is common when organizations or investors review a list of innovative proposals and select only those alternatives they deem most worthy of receiving resources from limited budgets. They rarely approve proposals without considering funding limits and opportunity costs <cit.>, since each dollar spent is one not spent elsewhere. Organizations instead aim to make the most of the few resources at their disposal.[Firms may want to maximize returns at some level of risk. For example, financial portfolios often contain assets with potentially sub-optimal return expectations to diversify sources of risk. Our present work, however, does not require the additional consideration of hedging goals. The payoffs from projects in our model are independent of each other and none is structurally more at risk than others. Relaxing these constraints would require arbitration among multiple goals <cit.>, a phenomenon worthy of further empirical research on preferences.] 
Therefore, for projects to be selected into the portfolio, they need to not only clear a quality threshold such as an expected rate of return, but also be of higher quality than concurrently reviewed alternatives. Discrimination among projects not only complicates the application of the decision rules discussed in prior work. It also gives rise to an additional class of rules involving relative preferences that Csaszar and Eggers did not have to consider. This departure likely affects which rule helps organizations perform best. Relative preferences and the general problem of selecting a subset of alternatives feature in the social-choice literatures on multi-winner voting systems <cit.> and participatory budgeting <cit.>. A primary subject of inquiry in such social-choice research is how closely collective decisions reflect individual preferences, but a notable sub-stream additionally examines how collectives reach correct decisions <cit.>. Identifying the single best option in a set of more than two uncertain alternatives is a task in which majority voting still performs well with rules for tie-breaking <cit.>. We extend this insight by asking which aggregation method should be used if organizations want to select multiple projects —  the best subset of opportunities that the organizational budget allows them to afford. Here, the multi-winner literature already foreshadows the usefulness of ranking methods <cit.>. Large sets of homogeneous voters identify the correct order of noisily perceived choices more often when ranking, rather than approving, the choices <cit.>. Generating similar insights for portfolio decision rules matters. Organizations have few decision makers, and with heterogeneous expertise. Aggregating their impressions is a problem that receives attention: Firms have been found to engage in costly trial-and-error search for innovation-selection processes <cit.>. Some venture capitalists deliberately adopt minority voting <cit.>, for example, in the attempt to improve performance by reducing errors of omissions in environments where success follows a power-law distribution. Broadly speaking, however, empirical work in this area suggests that firms are not particularly effective in making selection decisions <cit.>. Our work thus aims to establish conditions under which one can expect different forms of aggregation to improve the performance of organizational portfolio selection. § MODELING PORTFOLIO SELECTION Portfolio selection occurs whenever multiple candidates vie for limited resources. While one can easily imagine a court case to be judged in isolation, with culpability determined irrespective of the outcomes from other concurrent cases, it is harder to imagine companies to decide funding for an innovation project irrespective of superior alternatives. Organizations will want to spend scarce funds on innovation projects only if they perceive future payoffs to be in excess of those of other projects. Even when organizations proceed with a single project only, the decision likely resulted from a process of selection, rather than an isolated instance of project assessment <cit.>. Therefore, we introduce selection into the organizational decision framework of <cit.> by adding a budget constraint of m≥ 1 projects, chosen from n≥ m alternatives. Agents' evaluations of projects inform an organization's selection of a subset (see Figure <ref>). We consider both m and n exogenous. 
In established organizations, top management sets aside a portion of organizational resources for innovation. Top management determines this budget m by gauging the need for rejuvenation and considering rival demands for the same resources <cit.>. Innovation executives, who are to be our model agents, then decide on how to spend the given budget. In real-world organizations, innovation executives might influence the budget-setting process and occasionally request increases alongside emerging opportunities <cit.>. We leave the examination of such exceptions to future research. Likewise, we treat n as independent from our agents. The project candidates reviewed at an innovation-board meeting are typically generated by personnel other than the decision makers <cit.>. The possibility that innovation executives are partial to the generation and evaluation of some but not other opportunities <cit.>, or that they may revisit initial decisions at later points <cit.>, are extension areas for future research. Following <cit.>, we characterize project candidates with two stochastic variables: t_i∼ψ represents the type, and q_i∼ϕ the quality of project i∈{1,…,n}. The distributions ψ and ϕ have supports [t,t] and [q,q], respectively (see Table <ref>). Project type t_i can be viewed as a variable describing knowledge domains. Incorporating such domains means that agents cannot assess all projects equally well, a departure from social-choice models of multi-winner elections <cit.>. Agents j∈{1,…,N} are characterized by expertise values e_j, which are distributed according to a distribution function χ with support [e,e]. Accordingly, q_i j'=q_i+η_ij denotes the quality of project i as perceived by agent j, where η_ij is distributed according to a normal distribution 𝒩(0,|t_i-e_j|) with zero mean and standard deviation |t_i-e_j|. The inclusion of a noise term accounts for uncertainty in project evaluation. Noise thus varies with domain expertise; the quantity |t_i-e_j| captures the degree to which project type matches agent expertise. This operationalization of variation in judgement quality is a plausible approximation of the organizational reality in portfolio decision making under uncertainty <cit.>. It endows agents with equally imperfect capabilities but recognizes that they may come from different backgrounds. For example, the innovation boards at pharmaceutical companies encompass experts from different therapeutic classes <cit.>. Those experts' judgements are more accurate for proposals in their own class. Domain-specific expertise has been documented to similarly influence innovation decision quality at device manufacturers <cit.> and service firms <cit.>. We thus follow <cit.> in recognizing this feature of evaluative precision in our model.[Our design choice of domain-specific expertise mirrors Hotelling models, in which actors have different distances to a focal point <cit.>. We refer the reader to <cit.> for a corresponding review. Alternatives to the Hotelling approach include belief-updating models, in which decision makers share identical priors but receive different signals <cit.> that together result in project assessments. This approach produces judgment-specific, rather than expert-specific, variation in precision <cit.>. Alternatively, one could conceive of expertise as a vector <cit.>. For instance, one dimension of expertise may pertain to environmental sustainability aspects and another to mechanical design aspects of project value. 
Multi-dimensional representations might reflect the micro-foundations of the decision-making challenge in more detail — yet we do not expect the replacement of the Hotelling expedience with greater knowledge dimensionality to materially affect aggregation rules' efficacy in dealing with judgment imprecision. We would welcome future research on this topic.] Building on previous work on multi-winner electoral systems <cit.>, we represent the quantities used to produce aggregated preference lists by an ordered triplet M=(Q,T,E), where Q={q_1,…,q_n}, T={t_1,…,t_n}, and E={e_1,…,e_N} denote one realization of the sets of project qualities, project types, and expertise values, respectively. Each agent sorts n projects in descending order of perceived quality. For example, in the case of n=2 available projects, agent j strictly prefers the first over the second project if her perceptions of the project qualities satisfy q_1j'> q_2j'. In general, the relation i ≻_j k means that agent j strictly prefers project i over k, which is the case if and only if q_ij'>q_kj'. To denote the position of project i in the preference order of agent j, we use the notation pos_j(i). An aggregation rule ℛ(M,m) is a function that maps a realization of M=(Q,T,E) to a corresponding subset of m≤ n selected projects. Ties occur if the selection rule produces multiple outcomes of the same cardinality. If ties occur in our simulations, we uniformly at random select one outcome. We use f_q_(i)^(ℛ) to denote the PDF of q_(i)^(ℛ), the ordered project qualities under selection rule ℛ. The support of f_q_(i)^(ℛ) is [q,q]. The expected portfolio performance associated with q_(i)^(ℛ) under selection rule ℛ thus is 𝔼^(ℛ)[q;m,n] = ∑_i=n+1-m^n ∫_q^q q f_q_(i)^(ℛ)(q) dq . Dividing this by m would yield the corresponding expected quality per selected project. Deriving analytic expressions of f_q_(i)^(ℛ) and evaluating the integral in Eq. (<ref>) analytically are not tractable for general selection rules ℛ. Hence, our main approach involves running Monte Carlo simulations of independent and identically distributed (i.i.d.) realizations of various selection rules. Appendix <ref> derives the bounds within which we can expect portfolio performance to vary in the simulations. For a uniform quality distribution ϕ∼𝒰(q,q), the theoretical maximum of the expected portfolio performance is 𝔼^*[q;m,n] = m [q + (q-q)(2n+1-m)/(2n+2)] . It constitutes an upper limit against which we can evaluate different selection rules ℛ. § AGGREGATION RULES The aggregation rules ℛ that we adapt and examine in our Monte Carlo simulations encompass classics that are simple and distinctive, a subset of potentially endless method variants <cit.>. They include the voting and scoring rules considered in <cit.> plus a simple ranking rule known in the social-choice literature as Borda count <cit.>. All our rules preserve the balance of type I and type II errors in expectation <cit.> and so disregard methods involving consensus, sequencing, or hierarchies that would skew the balance <cit.>. We also assume non-strategic agents (in contrast to <cit.> or <cit.>, for example) as well as independent projects, disregarding potential benefits from composing portfolios with projects of varying novelty or knapsack efficiency <cit.>. Our decision makers neither communicate nor learn from one another or across projects <cit.>. Relaxing some of these constraints would be a natural next step for considering additional aggregation rules in future research.
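Before turning to the individual rules, the following minimal sketch illustrates one realization of the model quantities M=(Q,T,E) defined above, the resulting perceived qualities, and the closed-form performance maximum from the preceding section. It is our own Python illustration rather than the authors' released simulation code, and the concrete parameter values and the three expertise values are hypothetical placeholders consistent with the distributions introduced above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (hypothetical; chosen to be consistent with the
# distributions introduced above).
n, m, N = 100, 10, 3                  # candidate projects, budget, agents
t = rng.uniform(0.0, 10.0, size=n)    # project types t_i ~ psi
q = rng.uniform(-5.0, 5.0, size=n)    # project qualities q_i ~ phi
e = np.array([2.5, 5.0, 7.5])         # agent expertise values e_j (placeholders)

# Perceived qualities q'_ij = q_i + eta_ij with eta_ij ~ N(0, |t_i - e_j|).
sigma = np.abs(t[:, None] - e[None, :])
q_perceived = q[:, None] + rng.normal(0.0, 1.0, size=(n, N)) * sigma

# Closed-form maximum for a uniform quality distribution U(q_lo, q_hi):
# E*[q; m, n] = m * (q_lo + (q_hi - q_lo) * (2n + 1 - m) / (2n + 2)).
q_lo, q_hi = -5.0, 5.0
E_star = m * (q_lo + (q_hi - q_lo) * (2 * n + 1 - m) / (2 * n + 2))
print(round(E_star, 2))   # 44.55 for m=10, n=100
```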
To transport classic aggregation rules to a portfolio-selection context, we modify them such that they impose a funding constraint at the organizational level. Selection criteria, therefore, are not based on thresholds, such as a positive average evaluation or a majority of yes votes, that one would find in the context of isolated project approvals. Instead, organizations select into the portfolio m projects with the relatively highest scores.[Our selection rules could additionally impose a project-quality threshold. For example, few executives would suggest committing to projects that they expect to yield negative payoffs. Because the parameterization of our main model effectively prevents such projects from being among the top m (see Section <ref> and Appendix <ref>), we chose to minimize rule complexity. Future work may adopt hurdle rates as required.] The subsequent definitions thus incorporate organizational discrimination. Individual. All projects are evaluated by a single agent with expertise value e_M, which is the mean of the expertise distribution. The organization then ranks projects based on the agent's quality perceptions and selects the top m∈{1,…,n} projects. This selection rule implements the Individual rule of <cit.> in a portfolio context. Delegation. Each project is evaluated by the agent whose expertise is most closely aligned with the project's type. These are the agents whose expertise value e_j minimizes the uncertainty |t_i-e_j|. The organization then ranks projects based on the experts' quality perceptions and selects the top m∈{1,…,n} projects. This selection rule implements the Delegation rule of <cit.> in a portfolio context.[Instead of delegating projects to different experts, organizations may consider assigning the responsibility to a single portfolio expert. In this scenario, all projects would be assessed by the agent whose expertise minimizes the overall uncertainty ∑_i=1^n |t_i-e_j|. The organization then ranks projects based on the expert's quality perceptions and selects the top m∈{1,…,n} projects. The expertise that minimizes the overall uncertainty is equal to the mean type. The approach is thus equivalent in expectation to the Individual rule we state above.] Voting. All projects are evaluated by all agents. Agents allocate a vote to each project for which they have a positive perception of quality. The organization then ranks projects based on the number of agent votes and selects the top m∈{1,…,n} projects. This selection rule implements in a portfolio context the Voting rule used by <cit.>[Note that the model of <cit.> is not multi-candidate voting, since decision makers never consider projects concurrently. They rather (dis)approve each project in isolation. The model of <cit.>, with crowds voting for one of two projects, is closer to a multi-candidate setting.] and others <cit.>. Averaging. All projects are evaluated by all agents. The organization then ranks projects based on agents' mean quality perceptions — scores, effectively — and selects the top m∈{1,…,n} projects. This selection rule implements the Averaging rule of <cit.> in a portfolio context. Ranking. All projects are evaluated by all agents. Each agent j places the projects in a descending order of perceived quality. Each project i thus receives a position pos_j(i). The organization then ranks projects based on the sum of agents' reversed project positions, n- pos_j(i), and selects the top m∈{1,…,n} projects.
This selection rule implements the Borda rule of the social-choice literature <cit.> in a portfolio context.[Although not in the realm of uncertain innovation projects, a public example application of the Ranking rule is the Aggregate Ranking of Top Universities (<https://research.unsw.edu.au/artu/methodology>). In it, the University of New South Wales aggregates the preference lists of three agents: Times Higher Education, Quacquarelli Symonds, and ShanghaiRanking Consultancy. They each form their quality perceptions for hundreds of universities based on a list of different criteria. The rank that agents assign to a university then automatically results from these scores. The Aggregate Ranking of Top Universities could be used to create a portfolio of m best universities.] § RESULTS Our base-case analyses use the parameter values of <cit.> to enable comparisons. The number of decision makers is set to N=3, the type distribution to ψ=𝒰(0,10), the quality distribution to ϕ=𝒰(-5,5), and the noise distribution to φ=𝒩(0,|t_i-e_j|). We additionally set the number of available projects to n=100. The expertise of the agent in the Individual rule is set to a central e_M=5. To represent the collective knowledge of an organization's decision makers, we assign each agent j∈{1,…,N} an expertise value e_j = e_M - β + [2β/(N-1)](j-1), where β∈[0,5] denotes the knowledge breadth of an organization. For the given distributions, we generate i.i.d. realizations of the underlying model quantities to perform Monte Carlo simulations of all aggregation rules presented in Section <ref>. We then compare their portfolio performances, 𝔼^(ℛ)[q;m,n], and explore variation in the parameter space to probe the generality of our results. All implementation details are provided at <https://gitlab.com/ComputationalScience/multiwinner-selection>. §.§ Aggregation-Rule Performance In the base case, Ranking is the highest-performing aggregation rule for β≲ 2 (see Figure <ref>). Averaging approaches the performance of Ranking for smaller values of β as the number of selected projects, m, increases. Delegation to project experts is the most effective selection protocol for β≳ 2 (see Figure <ref>). In our portfolio-selection setting, the knowledge breadth β at which a Delegation protocol begins to outperform Ranking is larger than the reported value of β at which Delegation outperforms other protocols in a project approval setting <cit.>, at least for small budgets (i.e., small values of m). This observation is insensitive to project-type variations, as we elaborate in Section <ref>. The Delegation rule comes close to the maximum possible performance 𝔼^*[q;m,n]=44.6 (m=10) and 104.0 (m=30) for intermediate levels of knowledge breadth β. For β=0, all decision makers have the expertise e_M of Individual decision makers. As β increases, the expertise values of all decision makers cover a broader range of project types. Hence, decision makers with expertise values close to specific project types can be selected for intermediate values of β. If β is too large, the distance between available and required expertise grows. For a general uniform type distribution ψ∼𝒰(t,t), the maximum performance of the Delegation protocol is achieved if the N decision makers have expertise values e_j^* = t + (2j-1)(t-t)/(2N) (j∈{1,…,N}). For N=3 decision makers with ψ∼𝒰(0,10), the maximum performance of Delegation is thus realized at expertise values e_1^*=5/3, e_2^*=5, and e_3^*=25/3, that is, for β=10/3≈ 3.33 (see Figure <ref>).
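The aggregation rules compared in this section can be re-implemented schematically as follows. This is a simplified sketch of our own (the released simulation code may differ in structure and tie-breaking details); ties are broken here with a vanishingly small random jitter, and all function and variable names are ours.

```python
import numpy as np

def top_m(scores, m, rng):
    # Select the m highest-scoring projects; ties are broken uniformly at random
    # by adding a vanishingly small random jitter before sorting.
    return np.argsort(scores + rng.random(scores.shape) * 1e-9)[::-1][:m]

def perceive(q, t, expertise, rng):
    # Perceived qualities q'_i = q_i + eta_i with eta_i ~ N(0, |t_i - expertise|).
    return q + rng.normal(0.0, 1.0, size=q.shape) * np.abs(t - expertise)

def individual(q, t, e_mean, m, rng):
    return top_m(perceive(q, t, e_mean, rng), m, rng)

def delegation(q_perc, t, e, m, rng):
    # Each project is scored by the agent whose expertise is closest to its type.
    experts = np.argmin(np.abs(t[:, None] - e[None, :]), axis=1)
    return top_m(q_perc[np.arange(len(t)), experts], m, rng)

def voting(q_perc, m, rng):
    # One vote per agent for every project perceived to have positive quality.
    return top_m((q_perc > 0).sum(axis=1), m, rng)

def averaging(q_perc, m, rng):
    return top_m(q_perc.mean(axis=1), m, rng)

def ranking(q_perc, m, rng):
    # Borda-style scores: an agent's best project earns n-1 points, the worst 0.
    n, N = q_perc.shape
    order = np.argsort(-q_perc, axis=0)
    pos = np.empty_like(order)
    np.put_along_axis(pos, order, np.arange(n)[:, None], axis=0)
    return top_m((n - 1 - pos).sum(axis=1), m, rng)

# One illustrative realization of the base case (here with beta = 2.5).
rng = np.random.default_rng(1)
n, m, N, beta, e_mean = 100, 10, 3, 2.5, 5.0
e = e_mean - beta + 2 * beta / (N - 1) * np.arange(N)
t, q = rng.uniform(0, 10, n), rng.uniform(-5, 5, n)
q_perc = np.column_stack([perceive(q, t, e_j, rng) for e_j in e])
for name, sel in [("Individual", individual(q, t, e_mean, m, rng)),
                  ("Delegation", delegation(q_perc, t, e, m, rng)),
                  ("Voting", voting(q_perc, m, rng)),
                  ("Averaging", averaging(q_perc, m, rng)),
                  ("Ranking", ranking(q_perc, m, rng))]:
    print(name, round(float(q[sel].sum()), 2))
```

Averaging estimated portfolio quality q[sel].sum() over many such realizations gives the Monte Carlo performance measures reported in the figures.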
In contrast to project approval, Voting is not very effective in portfolio selection. To see why, consider that it aggregates binary signals only. The aggregate scale for totaling the votes of N=3 decision makers contains four levels only. Voting thus often fails to discriminate between many projects. To gauge the discrimination limitation of the Voting protocol, we conduct additional simulations with 2× 10^5 i.i.d. realizations. Among n=100 projects, 31 on average receive a full three votes from N=3 decision makers with knowledge breadth β=0. Analogously, 28 projects receive three votes with β=2.5, and 23 projects with β=5. Therefore, with m=10, the Voting rule typically selects only projects receiving a full three votes. The quality difference between the best and the 20th best project can be large, but aggregate votes tend not to reveal this.[The discrimination limitation of Voting might be partially remedied by asking agents to approve m projects only. With small budgets and large choice sets, such m-approval voting <cit.> limits the number of projects that are sanctioned by all agents, providing more discrimination in the top section of the aggregated preference list of projects, and less at the bottom.] Voting thus underperforms more discriminating rules such as Ranking. With greater budgets such as m=30, Voting does relatively better. Greater knowledge breadth decreases the performance of Voting but less so than that of other aggregation rules. Therefore, as β and m increase, Averaging and Voting can achieve similar performance (see Figure <ref>b). In such situations, Voting caps the influence of erroneous classifications made by single agents with unsuitable expertise. Averaging suffers relatively more quickly from the aggregation of erroneous estimates provided by agents with unsuitable expertise. These results remain stable even with extremely small budgets that permit the selection of m=1 project only (see Appendix <ref>). §.§ Discrimination Effectiveness Why does Ranking outperform Averaging? For some intuition, consider N=3 agents, n=3 available projects, and knowledge breadth β=0. In one realization of the agents' quality perceptions q_ij' (see Table <ref>), we have the preference orders: 1≻_1 3 ≻_1 2, 2≻_2 3 ≻_2 1, and 1≻_3 2 ≻_3 3. The organization would select project #1 first, as its sum of reversed project positions is 4. Project #2 is second-most attractive, with a sum of 3. Project #3 would be least attractive, with a sum of 2. If the organization instead used Averaging for the same data, it would select project #2 first, as it receives a mean agent assessment of 2. Project #1 would be second-most attractive, with a mean assessment of -0.07. Project #3 would be least attractive, with a mean assessment of -0.13. The aggregate organizational preference list produced by Averaging would not list the best project first, because it is vulnerable to a single agent's misclassification. In our base model with n=100, m=10, β=0, and N=3, Ranking identifies the highest-quality project in about 63% of 2× 10^5 realizations, whereas Averaging does so in about 58% of cases (see the selection probabilities in Figure <ref>). The reason is that agents with an outlying impression of project quality can sway the aggregate selection more readily in the Averaging protocol than in the Ranking protocol.
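The same intuition can be reproduced with a toy calculation. The perception values below are hypothetical, chosen by us to mirror the preference orders above rather than to match the entries of the paper's table; the point is only that a single agent's outlying estimate moves the mean, while the corresponding Borda score is capped.

```python
import numpy as np

# Hypothetical perceptions (rows: projects #1-#3, columns: agents 1-3).
q_perc = np.array([[ 3.0, -4.0,  1.0],   # project #1: agent 2 holds an outlying low view
                   [-1.0,  6.5,  0.5],   # project #2
                   [ 0.5,  0.2, -1.0]])  # project #3

means = q_perc.mean(axis=1)                             # Averaging scores
pos = np.argsort(np.argsort(-q_perc, axis=0), axis=0)   # position 0 = an agent's best project
borda = (q_perc.shape[0] - 1 - pos).sum(axis=1)         # Ranking (Borda) scores

print(means)   # [ 0.   2.  -0.1]  -> Averaging puts project #2 first
print(borda)   # [4 3 2]           -> Ranking puts project #1 first
```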
Ranking accommodates extreme inputs more readily, because uncapped quality differences are translated into capped score differences in rank orders (the maximum rank-order score difference is n-1 per agent). For the Ranking protocol to misclassify a project in aggregate, a relatively greater number of individual agents would have to concurrently misclassify. Ranking is thus particularly effective in identifying projects of extreme quality. This ability to discriminate is crucial for portfolio selection with tight budgets. Discrimination effectiveness is less relevant for larger budgets: If a 15th-best project is misclassified as 17th best, for instance, the impact on portfolio performance is marginal. Consequently, Averaging gains in relative performance when budgets also permit the selection of more moderate-quality projects. The flatter project-selection probability distribution of Averaging is more suited to more munificent budgets. Selecting more projects balances the impact of misclassifications. As m approaches n, the performance of all selection protocols becomes equal. The performance dynamic of Ranking and Averaging resembles that observed for sample mean and sample median as gauges of population values. For normal distributions, the sample mean is more efficient than the sample median in estimating the mean value of the underlying population [i.e., the variance of the sample mean is smaller than that of the sample median <cit.>]. However, the sample median is known to be less sensitive to small-sample outliers that can introduce unwanted bias in the sample mean. In accordance with these arguments, we show in Section <ref> that Averaging achieves higher portfolio performance than Ranking for a large number of agents N. Most organizational selection committees, however, consist of only a small number of decision makers. For them, Ranking is a more effective aggregation rule than Averaging. §.§ Budgets and Choice Sets The size of an organization's innovation budget determines the number of projects m it can select. And the n project alternatives available to and identified by an organization compose the choice set from which it can select. While the former typically pales in comparison to the size of the latter <cit.>, numbers can vary across organizations. Such variance could matter in principle, since values for m and n bound the theoretically attainable portfolio performance. The supplemental analyses reported in Appendix <ref>, however, show that they rarely change the relative ordering of aggregation-rule performance as observed in our base-case analysis. An interesting edge case is that of small choice sets. When the cardinality of the choice set of candidate projects in our model is smaller than about n=10, and N=3 agents with knowledge breadth β=0 select m=1 project, Averaging outperforms Ranking (see Figure <ref>a), as it has access to more information on the underlying project quality. For larger numbers of candidate projects and small values of m, Averaging is more likely to misclassify a project. Another edge case to Ranking's dominance is found for generous budgets and zero knowledge breadth. When the number of projects m that the budget permits is not much smaller than the number of projects n available in the choice set, Averaging outperforms Ranking for β=0 (see Figure <ref>a). In such cases, the benefit of greater information provision outweighs the low risk of misclassification. This effect is restricted to organizations with homogeneous decision makers.
If the knowledge breadth β takes on more realistic values above zero, such that available expertise values are better aligned with the underlying project types, the advantage of Averaging over Ranking diminishes. Figure <ref>b illustrates that with β=5, Ranking outperforms Averaging even for the smallest possible choice set with n=2 elements. Further simulations indicate that this dominance begins at even lower knowledge breadths — results for β=2.5 are consistent with those obtained for β=5. §.§ Delegation Errors Innovation projects contain novel elements for which past data offers limited guidance. Experts from some domains will have relevant experience and, through associations, may gauge the promise of novelty better than other experts. But organizations may not always know ex ante who these most suitable experts are, leading to errors in delegation. Such likelihood of delegation error is one reason for why academic journals, as well as grant institutions <cit.>, for example, seek the opinion of multiple expert reviewers without fully delegating decisions to any. In Figure <ref>, we show the selection performance of delegating to project experts as a function of delegation error r∈{0,0.5,1}. When r = 0, projects are always assigned to the most qualified expert, whereas with r = 1, projects are randomly distributed among the three available agents. In more mathematical terms, organizations assign projects with probability r/3 to any of the two least suitable agents and with probability 1-2r/3 to the most suitable  <cit.>. Detailed simulations for a larger number of values of r show that for r≳ 0.2 the Delegation protocol no longer provides a substantially better performance than Ranking for a small budget (see Figure <ref>a). The influence of delegation errors diminishes with larger budgets (see Figure <ref>b). An error of r=0.2 means that 87% of projects are evaluated by an appropriate expert. Although ascertaining delegation-error rates in prior empirical work is limited by the lack of counterfactuals, it is not hard to imagine that innovation projects, covering novel terrain by definition, are often mismatched to expertise in existing terrain. Ambiguity about the suitability of experts in evaluating innovation thus renders delegation an unattractive aggregation rule. In an alternative approach, organizations could try delegating project evaluation to a single Portfolio Expert, whose expertise minimizes the uncertainty with respect to all projects. In our main specification, this would be the agent with expertise e_ M=5, which is equal to the mean project type. A Portfolio Expert would thus perform as well as an Individual. Erroneously designating as portfolio expert one of the other agents with expertise values e_ M±β would yield a performance that is worse than that of the Individual protocol for β>0. §.§ Environmental Turbulence The performance of different selection rules does not depend only on the level of knowledge breadth in the group of decision makers but also on the distribution and range of project types. When market environments shift, the relevance of organizations' existing knowledge base diminishes. <cit.> exemplify such shifts with the technological transition from analog to digital photography, which rendered some of Polaroid's expertise less useful for project selection <cit.>. Considering additional type distributions helps us examine how different aggregation rules cope with environmental shifts. 
Figure <ref> reports the portfolio performance of aggregation rules for the type distributions 𝒰(5,15) and 𝒰(15,25). Performance generally decreases when the distance between required and available expertise increases. If the expertise of decision makers is close to the type of the project under evaluation, selection errors are small. Consequently, an expertise level of e_ M=5 yields smaller errors for the type distribution 𝒰(0,10) than 𝒰(5,15), for example. Ranking, however, is relatively less impacted by risk of misclassification when project-type distributions shift. The Ranking rule's performance surpasses that of error-free Delegation for knowledge breadth as wide as β≳ 2 for the base-case type distribution 𝒰(0,10), and as wide as β≳ 5 for the type distribution 𝒰(5,15). The further the project-type distribution moves from agents with relevant expertise, the greater the knowledge breadth at which Ranking outperforms even perfect Delegation. Relatively homogeneous organizations facing disruptive change would thus fare best with Ranking. §.§ Crowds versus Experts Up to this point, we kept the number of decision makers at a constant N=3. Relaxing this constraint can reveal relative differences in the marginal benefit of additional decision makers. Increasing crowd size also allows collectives to outperform experts even in settings where delegation error is absent and expertise broadly distributed <cit.>. Through approaches such as open innovation or open strategy <cit.> organizations can enlarge their pool of internal decision makers, and it would be instructive to know how large such collectives would need to be to outperform delegation to three knowledgeable project experts. IBM, a large technology firm with an in-house crowd effort, managed to have 25 colleagues review projects of its iFundIT program, though not everyone evaluated all projects <cit.>. We could take this observation as an upper bound of the number of suitable agents that organizations might feasibly recruit to the collective task of portfolio selection.[Open-science initiatives may worry less about innovation appropriation <cit.> and could thus attract larger numbers of assessors from outside the organization than IBM managed from within. EteRNA (<https://eternagame.org>), for example, enlists outsiders to select the most promising molecule designs for resource-intensive testing. Governments are another type of organization that could tap a greater pool of decision makers for selecting projects in participative-budgeting exercises, such as through the Consul project (<https://consulproject.org>)] We thus examine the number of decision makers required for collective protocols to outperform Delegation to the three project experts of our base-case parameterization (Figure <ref> illustrates crowds of N=15 and N=45). Averaging outperforms Delegation to project experts as the number of decision makers N nears 15; Ranking already does at around N=13. Voting can compete with Delegation over the whole range of knowledge breadth only with 45 or more decision makers. While Ranking outperforms Averaging with about ten or fewer decision makers, the order reverses with bigger crowds (see Figure <ref>), even at large values of knowledge breadth. In simulations with N=100,200,500,1000, this performance gap grows (2.57%, 2.86%, 3.06%, and 3.12%, respectively, at β=0). 
The magnitude of the growing gap might nonetheless be insufficient to justify the use of Averaging, given that such large crowds would be hard to manage and well in excess of those observed as feasible in the IBM study of <cit.>. In all studied scenarios, Voting is inferior to Averaging and Ranking. In particular, for β=0, the performance of Voting changes only very little with an increase of the crowd size, even if it is by an order of magnitude. This is because in Voting, agents make binary choices, where all projects perceived to yield positive payoffs receive approval. When noise is within bounds and expertise overlaps, there is limited benefit to soliciting more near-identical decisions from a crowd. Ranking and Averaging gain more from homogeneous crowds as they provide more fine-grained information for selection. In a converse scenario with considerable noise and/or knowledge breadth, Voting (very) slowly gains in performance with an increasing number of decision makers. Each additional decision maker adds granularity to the aggregation scale (three decision makers mean that a project can have either no, one, two, or three votes — ten decision makers would classify a project anywhere between no and ten votes, and so on). Ranking and Averaging provide granular aggregation scales even with few decision makers. §.§ Batching In the aggregation protocols we study, agents evaluate each project on its own. One could alternatively imagine agents directly comparing projects and making relative judgments — at least when there is no strict need to first provide separate assessments, such as with Voting or Ranking. In such cases, cognitive limitations might weaken comparison effectiveness as the number of candidate projects grows. At some level of n, agent evaluations may become unreliable. To guard against such scenario, one could design an evaluation regime in which individuals receive no more projects than they are able to compare reliably. The precise magnitude of such a cognitive limit c is unknown and varies with context <cit.>[The members of the Academy of Motion Pictures and Sciences, for example, rank between five and ten candidates to collectively select the Best Picture <cit.>. In the lab, participants predicting league tables appear able to rank 30-odd sports teams without apparent difficulty <cit.>. Other lab participants appeared to struggle with the comparisons necessary for the ranking of eight Kickstarter project candidates <cit.>.]. The illustrative analysis reported below sets the limit to a conservative batch size of c=10. The idea is that, when an organization's choice set is as large as the n=100 projects considered in our base-case analyses, agents could share the load and each evaluate c=10 projects only. Reducing agents' cognitive load requires proportionally more of them. The number of agents in our base-case analyses would have to go up by a factor of n/c to ensure that each project gets the same number of evaluations in the cognition-conscious batching regime. If little is known ex-ante about projects and agents, agents will receive a randomly drawn subset of c projects. The evaluation could also be shared among an organization's cohort of evaluators on the basis of preference <cit.>. A more directed approach is to allocate c projects each to N agents such that there is a match between the types of expertise required and available. 
The organization would ask its relatively most experienced colleagues to vote, estimate, or rank[Practical examples of delegating a subset of candidates to assessors on the basis of perceived expertise include, for example, the selection process for the Academy of Management's Technology and Innovation Management division Best Dissertation award shortlist. Documented in the literature is the selection of treatments via ranking by groups of orthodontists <cit.> before making the final selection. Moods of Norway had employees rank products of a category with which they are familiar to estimate future demand for apparel <cit.>. Geographically separate juries also select through ranking the semi-finalists for the Eurovision Song Contest. Each jury accepts a quota <cit.>.]. This makes most sense when evaluators and projects are known to span a comparable range of expertise. Finally, the organization may authorize those subgroups of evaluators to make decisions on its behalf. Innovating organizations often acknowledge limits to the comparability of projects of different departments, subdividing the overall budget and allowing departments to make their own decisions about which projects to select <cit.>. Figure <ref> reports the analysis for these approaches to batching. As in the main analyses, batching is based on uniform type, quality, and expertise distributions, maintaining the number of project candidates at n=100 and the number of selected projects at m=10. We multiply the number of agents involved in each selection rule by n/c, yielding N=10 for Individual and Delegation, and N=30 for Voting, Averaging, and Ranking.[If the value of n/c is not an integer, it is rounded to the nearest integer. The same goes for m/c.] Each agent receives a batch of c=10 projects to assess. Without expertise matching, the assignment of c=10 projects is drawn uniformly at random without replacement from the pool of n=100 projects. Expertise matching is a hard problem, and a thorough review of the multitude of implementation possibilities goes beyond the scope of our work. We here employ simple ordinal matching. We begin by arranging projects in ascending type order and agents in ascending expertise order. We then assign the first batch of c=10 projects to the first agent, in the case of Individual and Delegation, or the first three agents, in the case of Voting, Averaging, and Ranking. The second batch goes to the second agent(s), and so on. Agents normally submit their project votes, estimates, or ranks to a central organization for the final aggregate selection decision. In a decentralized setting, by contrast, each of the n/c agents, or sets of agents, selects m/c projects. In the analysis of Figure <ref>, this means one project each. Collectively, these m selected projects make up an organization's portfolio. The results reported in Figure <ref> show that the performance of Averaging improves relative to Ranking, at least at lower levels of knowledge breadth β. This is because aggregating ten project ranks from three agents yields less granular distinctions than aggregating precise project estimates. Although agents' detailed project estimates may be flawed, the random tiebreakers often necessary in aggregating rankings are relatively more detrimental to portfolio selection. Therefore, if cognitive limitations are a concern, evaluation noise moderate, and agents plentiful, Averaging may offer a more effective batch-selection method than Ranking.
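The ordinal expertise matching described above amounts to sorting both sides and slicing. The sketch below is a simplified illustration of that step only; the agent groupings and expertise values are invented for the example, and it is not the authors' implementation.

```python
import numpy as np

def ordinal_batches(t, group_expertise, c):
    # Assign consecutive batches of c projects (in ascending type order) to agent
    # groups taken in ascending order of their mean expertise.
    project_order = np.argsort(t)
    group_order = np.argsort([np.mean(g) for g in group_expertise])
    return [(project_order[k * c:(k + 1) * c], group_expertise[g])
            for k, g in enumerate(group_order)]

# Example: n=100 projects, batches of c=10, ten groups of N=3 agents each.
rng = np.random.default_rng(2)
t = rng.uniform(0, 10, 100)
groups = [np.array([0.25, 0.5, 0.75]) + k for k in range(10)]   # hypothetical expertise
batches = ordinal_batches(t, groups, c=10)
print(len(batches), batches[0][0][:3], batches[0][1])
```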
The results reported in Figure <ref> also show that random batching unsurprisingly underperforms expertise batching, especially when knowledge breadth β increases. Real-world organizations will find themselves somewhere in between the random and perfect expertise assignment. Decentralizing decision rights, too, is usually a bad idea, due to the loss of being able to optimize at the portfolio rather than sub-portfolio level. Ranking, however, suffers less from decentralization than other rules. This is because the projects that would have been selected at the sub-portfolio level also often end up being selected at the portfolio level. The top projects of each batch also have the top scores in the portfolio. It is rare that the second-placed project in one batch has a greater sum of inverted ranks than the first-placed of another batch. Therefore, if cognitive limitations were a concern and addressed with batching, organizations that use a Ranking rule could more easily decentralize with less of a performance sacrifice. § DISCUSSION We extend earlier work on aggregating project approval to the context of selecting projects for resource-limited portfolios. We show that Ranking, an aggregation process specific to portfolio selection, is often more effective than Averaging and Voting, processes also available in a single-project approval context. These findings contribute to the literatures on resource allocation and aggregation, respectively. §.§ Resource Allocation Decisions The earlier work of <cit.> highlights how the choice of rules for aggregating individual decisions into an organizational one can produce meaningful performance differences. Its insights are applicable to contexts in which the (dis)approval of one project is viewed independent of the (dis)approval of other projects (see the assumptions in <cit.>, for example). Also relevant for the isolated approval of projects are attempts to aggregate project forecasts into decisions through polls or markets <cit.>. Acknowledging, however, that organizations are resource-constrained, means that not all projects that would be approvable in isolation can be funded. The challenge for organizations is to identify the subset of many possible projects that most likely maximize organizations' return on investment <cit.>. Solving such optimization problems involves preference orders, derived from aggregating individual agent preferences. In this portfolio-selection context, the relative performance differences among aggregation rules reported by <cit.> do not hold. While the earlier study is justifiably concerned with the performance of all approved projects, the focus for portfolio selection is on the performance of only those projects that organizations can afford to fund. That is because resource allocation in organizations is not only about correctly identifying projects with positive returns but about selecting the subset of projects that deliver the greatest return on the investable resources <cit.>. Our work reveals how totaling project ranks provided by agents offers the highest aggregation-rule performance in many circumstances that one might find in organizations. The ranking-rule performance is below the optimum that omniscient decision makers could attain, but it is above the performance of other rules for aggregating decisions with limited information. By highlighting performance dynamics of decision aggregation rules, our work provides a normative foundation for descriptive research on resource allocation. 
Crucially, it provides a baseline benchmark for work attempting to highlight behavioral inefficiencies in portfolio selection (<cit.>, for example). It also provides a reference point without which empirical observations of portfolio-selection rule performance <cit.> are hard to interpret. In future research, it would be valuable to expand upon our work by considering additional factors such as differential project types and costs <cit.> or dynamic features <cit.>. Further opportunities arise from merging our insights with those on managing portfolios under uncertainty, including the partial allocation and potential re-allocation of resources over time <cit.>, the allocation of resources by more than one organization <cit.>, or the incentive structures used to populate choice sets for portfolio selection <cit.>. §.§ Organizational Decision Aggregation Our work further contributes to the resurgent interest in aggregation structures <cit.>. In particular, we shed further light on situations in which one might expect expert decision makers to outperform variously aggregated crowds <cit.>. Specifically, choosing the best subset from a range of options non-trivially departs from previously studied contexts due to its greater need for discrimination. Although delegation performs highest in settings where experts can be found, the often imperfect organizational process of matching uncertain projects with the right domain specialists in turbulent environments calls for alternative approaches. Having multiple imperfectly informed decision makers weigh in on the same project propositions typically improves on the eventual performance that an organization can expect from its portfolio. Ranking does so most effectively. When agents rank projects, they provide an assessment of how the quality of one project compares to that of others. Most rankings in real-world organizations are necessarily imperfect amalgamations of multiple criteria, ranging from profit forecasts over strategic fit to short-term versus long-term considerations[In the project and portfolio literatures, rankings already feature heavily: They are outputs of organizational prioritization efforts <cit.>. Our work underlines that rankings also have a place as inputs to those efforts.]. Using subjective rankings as an input to the ultimate organizational decision thus makes intuitive sense. In contrast to seemingly more precise project-value appraisals, crude rankings often help select higher-performing project portfolios. Future field research on aggregating agents' preference lists may benefit from the fact that Ranking endogenizes a concern over strategic behavior. Employees who know of their organization's resource constraints may not provide project assessments or votes that reflect their true beliefs, in an attempt to lift some projects above the cut-off <cit.>. With Ranking, agents maximize the chances of their organization funding their favorite projects by ranking projects in the preferred order. There is little room for gaming by submitting preference orders that fail to reflect beliefs (unlike with Averaging, for example, where agents could inflate forecasts for their preferred projects, and deflate those for less preferred candidates). Similarly beneficial is that Ranking appears more tolerant of biased inputs, requiring fewer agents to select optimal sets than alternative aggregation methods <cit.>. 
Ranking methods that additionally reflect preference intensity <cit.> might be similarly robust to strategy and bias, presenting a straightforward extension possibility for our current work. Further opportunities for future research include the extension of our work by accounting for quadratic voting effects <cit.>. An alternative direction to consider involves devising algorithms that can help identify effective selection rules akin to algorithmic solutions of multi-winner election problems <cit.>. Future work may also explore project-cost distributions <cit.>, skill heterogeneity and weighted aggregation <cit.>, strategic and coordinated selection behavior <cit.>, as well as vote trading <cit.>. Further potential exists in recognizing the impact of organizational competition, which may favor contrarian rules such as minority voting <cit.>. §.§ Managerial Application The performance of aggregation rules depends on the availability of information about the knowledge held by employees and the size of the innovation budget and choice set. Organizations looking for simplified guidance on which rule to adopt may consider the illustrative decision tree presented in Figure <ref>. In portfolio-selection situations with many choices, tight budgets, and unclear expertise, our work recommends the Ranking rule. The performance of the Ranking protocol is good news for two reasons. One is that many organizations already informally aggregate rankings in some form when they meet in a committee setting. Such committee meetings often involve discussions that contribute to the convergence of individuals' assessments of projects <cit.>. The Ranking protocol deals well with low belief heterogeneity, attenuating the potential impact of convergent beliefs. Therefore, organizational reality may not be too far from feasible aggregation optima. The second reason is that organizations probably have an easier time implementing a Ranking protocol than some of the other aggregation mechanisms reviewed here. Rather than having to submit seemingly precise project-value assessments, decision makers simply have to put projects into a preference order. This may become more taxing as the number of projects to consider increases, but aggregation through ranking is somewhat forgiving of the accuracy of the assessments that lead to the preference orders. It often produces the innovation portfolios with the highest relative performance. Given these advantages, what could go wrong? A few aspects of the Ranking rule's practical application might be worth paying attention to in future empirical research. A first step would be studies of safeguards against the loss in judgment quality that stems from the greater cognitive load of comparing a potentially large number of candidates simultaneously <cit.>. Our main models sidestep this issue by having agents score projects individually, which only later amounts to a ranked project list for each agent (the aforementioned procedure for the Aggregate Ranking of Top Universities does the same). Innovating organizations might get close to this ideal by having project proposals presented one at a time, making comparisons easier to avoid <cit.>. To then further mitigate potential order effects, whereby evaluators compare a focal proposal to what they can remember about those evaluated previously <cit.>, organizations might wish to shuffle the sequence of proposals for each evaluator.
The setting of a committee meeting does not easily lend itself to different evaluation sequences, but asynchronous online assessments would. One challenge for such asynchronous assessments would be to ensure that assessors evaluate all candidates. Incomplete rankings akin to those submitted on participatory budgeting platforms such as Stanford's[<https://pbstanford.org>], for example — where assessors receive no compensation and thus prioritize attention — not only provide less information (as per Section <ref>) but also open the door to herding and influencing. Moreover, future research could examine the effectiveness with which organizations are able to aggregate the rankings that their employees provide. Without an explicit aggregation rule, managers' processing of rank information may differ from their processing of scores. For example, <cit.> suggest that people sometimes treat rankings as a shortcut heuristic for separating top candidates from a cohort, forfeiting more fine-grained discrimination. Automating aggregation may thus prove useful in guarding against processing biases. In any case, adding Ranking to the list of aggregation methods to be examined behaviorally <cit.> seems apt given its conceptual benefits for innovation-portfolio selection. §.§ Conclusion Our work contributes to the understanding of resource allocation in innovation portfolios. Increasing data availability and scholarly interest in the topic have revealed interesting patterns of behavior when multiple organizational actors make joint decisions. Yet, interpreting their relevance requires a normative foundation. In providing one, we show that some insights, such as about the effects of knowledge breadth and delegation error, apply in the context of portfolio project selection decisions just as they do in the better-known but less applicable context of isolated project approvals. However, portfolio selection additionally requires discrimination between projects, and the relative performance ordering of suitable decision-aggregation rules thus changes. Our results indicate that Ranking is the most effective selection rule, especially in unstable market environments, and often outperforms Averaging even for small values of knowledge breadth. In many scenarios, Ranking is preferable to other aggregation rules. Delegation makes sense when companies can assign each project to a relevant expert. But environmental turbulence can cause Ranking to outperform even perfect Delegation. Multi-candidate selection may be relevant not only in the context of innovation, but also for other organizational decisions under uncertainty <cit.>, including investments in personnel or technology. Our work thus contributes to a better understanding of selection regimes within organizations. The choice of an appropriate aggregation rule is a discretionary element in the design of resource allocation processes that has substantial performance implications. § CODE AVAILABILITY Our source code is publicly available at <https://gitlab.com/ComputationalScience/multiwinner-selection>. Lucas Böttcher acknowledges financial support from hessian.AI and the Army Research Office (grant W911NF-23-1-0129). § PERFORMANCE SENSITIVITY TO BUDGET AND CHOICE SET §.§ Theoretical Performance Limits For m∈{1,…,n} selected projects, we use 𝔼^*[q;m,n] to denote the theoretical performance maximum.
It can be derived from the order statistic <cit.> of the underlying project quality distribution.[Order statistics have also been employed by <cit.> to mathematically characterize an aggregation rule in which N individuals with varying levels of expertise evaluate a single project (n=1).] For n realizations of the random variable q_i ∼ϕ, one obtains the order statistic q_(i) by sorting the realizations q_i in ascending order. The value of 𝔼^*[q;m,n] is found by evaluating 𝔼^*[q;m,n]=∑_i=n+1-m^n ∫_q^q q f_q_(i)(q) dq , where f_q_(i)(q) denotes the probability density function (PDF) of the order statistic q_(i) with support [q,q]. Dividing 𝔼^*[q;m,n] by the number of selected projects m yields the expected theoretical quality maximum per selected project 𝔼_m^*[q;m,n]=𝔼^*[q;m,n]/m. The PDF of the order statistic q_(i) in Eq. (<ref>) is given by f_q_(i)(q) = n!/[(i-1)!(n-i)!] ϕ(q) [ Φ(q) ]^i-1 [ 1 - Φ(q) ]^n-i , where Φ(x) is the CDF of the project quality distribution ϕ(x). Using the transformation u=Φ(q), the PDF of the quantity u_(i) is a beta distribution <cit.> with shape parameters i and n+1-i. That is, f_u_(i)(u) = n!/[(i-1)!(n-i)!] u^i-1 [ 1 - u ]^n-i . The mean of the beta distribution f_u_(i)(u) is i/(n+1), so we can compute the theoretical performance maximum for any uniform distribution 𝒰(q,q) according to 𝔼^*[q;m,n] = mq + (q-q)∑_i=n+1-m^n i/(n+1) = m [q + (q-q)(2n+1-m)/(2n+2)] . In the limit of a large number of candidate projects, the proportion of selected projects, or selectiveness <cit.>, at which 𝔼^*[q;m,n] reaches its peak value is[Real-world organizations cannot confidently gauge the shape of the distribution of payoffs from the innovation projects proposed to them, and they consequently determine the size of their budget more pragmatically <cit.>. Additionally, Appendix <ref> shows how portfolio performance depends on the shape of the underlying quality distribution.] m^*/n = 1/(1-q/q) if q≤ 0, q>0; 1 if q> 0, q>0; 0 if q< 0, q<0. In the same limit, the maximum expected quality per selected project 𝔼_m^*[q;m,n]=𝔼^*[q;m,n]/m can be approximated by 𝔼_m^*[q;m,n]≈q+(q-q)[1-m/(2n)] [see Eq. (<ref>)]. As the selectiveness m/n approaches 1, the quantity 𝔼_m^*[q;m,n] approaches q+(q-q)/2 for a uniform quality distribution 𝒰(q,q). Also notice that 𝔼_m^*[q;m,n] approaches the upper limit of the underlying uniform quality distribution, q̅, as m/n approaches 0. According to Eq. (<ref>), for any uniform quality distribution 𝒰(q,q), the performance measures 𝔼^*[q;m,n] and 𝔼_m^*[q;m,n]=𝔼^*[q;m,n]/m depend on both the number of projects m that an organization's budget permits it to select and the total number of projects n in the choice set. For sufficiently large numbers of projects, we have 𝔼_m^*[q;m,n]≈q+(q-q)[1-m/(2n)]. Figure <ref>a shows 𝔼_m^*[q;m,n] as a function of m for different values of n and for a uniform quality distribution 𝒰(-5,5). The largest value of 𝔼_m^*[q;m,n] for constant n is 5(n-1)/(n+1), and it is obtained for m=1. For n=20,50,100, the corresponding values are about 4.5, 4.8, and 4.9, respectively. As m approaches n, the maximum performance 𝔼_m^*[q;m,n] approaches 0 (see Figure <ref>a). To visualize the dependence of 𝔼_m^*[q;m,n] on n for constant m, we show in Figure <ref>b the performance measure 𝔼_m=1^*[q;m=1,n] for a single selected project as a function of n. The quality distribution is again 𝒰(-5,5). We observe that an increase in the number of available projects from 0 to 10 is associated with a large increase in 𝔼_m=1^*[q;m=1,n] from 0 to more than 4.
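The values quoted above (about 4.5, 4.8, and 4.9 for n=20, 50, 100) follow from the closed form for the highest order statistic and can be checked numerically. The sketch below is our own illustration, assuming NumPy, not the released code.

```python
import numpy as np

rng = np.random.default_rng(0)
q_lo, q_hi = -5.0, 5.0

for n in (20, 50, 100):
    closed_form = q_lo + (q_hi - q_lo) * n / (n + 1)   # E*_m[q; m=1, n] for U(q_lo, q_hi)
    sampled_max = rng.uniform(q_lo, q_hi, size=(20_000, n)).max(axis=1).mean()
    print(n, round(closed_form, 2), round(float(sampled_max), 2))
# Prints approximately 4.52, 4.8, and 4.9 for both columns (up to Monte Carlo error).
```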
Increasing n from 10 to 100 yields a much smaller increase in 𝔼_m=1^*[q;m=1,n] of about 0.8. In the limit n→∞, the maximum performance 𝔼_m=1^*[q;m=1,n] approaches q̅=5. Although more available projects yield a larger value of 𝔼_m^*[q;m,n] for a given m, possible performance gains that are associated with further increasing n may be negligible if m/n≪ 1. In addition to achieving a high performance per selected project, one often wishes to optimize the overall portfolio performance, whose theoretical maximum is 𝔼^*[q;m,n]=m𝔼_m^*[q;m,n]. For a uniform quality distribution 𝒰(q,q), we have 𝔼^*[q;m,n]=m{q+(q-q)[1-m/(2n)]} [see Eq. (<ref>)]. Figure <ref> shows 𝔼^*[q;m,n] as a function of m for three different uniform quality distributions. The optimum of 𝔼^*[q;m,n] is attained for m^* = (q+q+2qn)/(2(q-q)). For the quality distributions used in Figure <ref>, the corresponding rounded values of m^* are 40, 50, and 60. Using Eq. (<ref>), the optimal selectiveness m^*/n approaches 1/(1-q/q) in the limit of large n. Given the constraint 0 ≤ m^*/n ≤ 1 for the optimal selectiveness, we obtain Eq. (<ref>) in the large-n limit. §.§ Relative Performance Ordering The maximum performance per selected project, 𝔼_m^*[q;m,n], provides an upper bound for the aggregation-rule performance 𝔼_m^(ℛ)[q;m,n]. Figure <ref>a,b shows that the (m,n)-dependence of 𝔼_m^(ℛ)[q;m,n] associated with different aggregation rules is similar to the (m,n)-dependence of 𝔼_m^*[q;m,n]. The performance measure 𝔼^(ℛ)[q;m,n] exhibits a pronounced initial increase with n, gradually diminishing in magnitude for larger values of n (see Figure <ref>c,d). In accordance with the results presented in the main text (Section <ref>), Ranking and Averaging perform well for a small knowledge breadth (see Figure <ref>a,c), while Delegation (without delegation errors) is closer to the maximum performance for large values of knowledge breadth (see Figure <ref>b,d). The relative performance ordering of aggregation rules is consistent with the results reported in the main text. §.§ Relative Performance with Very Small Budgets In Section <ref>, we studied the portfolio performance of different aggregation rules for m=10,30. In Figure <ref>, we compare the portfolio performance of all the aggregation rules considered for smaller budgets with m=1,3. The relative positioning of the aggregation rules in these two cases aligns with the case where m=10. However, for m=1 and intermediate knowledge breadths, there are no discernible performance differences between Ranking and error-free Delegation. § PERFORMANCE SENSITIVITY TO PROJECT DISTRIBUTIONS In addition to the uniform quality distribution ϕ=𝒰(q,q) discussed in the main text, we explore the impact of variations in the quality distribution on portfolio performance. We examine two additional quality distributions: (i) a truncated normal distribution 𝒩(0,1,q,q) with a mean of zero and unit variance, and (ii) a power-law distribution with an exponent of -1/2. Both additional distributions have support [q,q], and for our analysis, we set q=-5 and q=5, consistent with the base-case analysis in the main text. In contrast to a uniform quality distribution, where projects occur with equal probability regardless of their quality, the truncated normal distribution leads to fewer occurrences of projects with large negative or positive qualities. Projects with qualities close to zero have higher probabilities of occurrence in this distribution.
Regarding the power-law distribution we consider, on average, approximately 70% of the projects will have negative quality. Moreover, only about 5% of the project qualities will exceed a value of 4. This distribution represents scenarios where only a relatively small number of projects are associated with relatively large positive qualities. For the two quality distributions under consideration, Figure <ref> charts the performance of aggregation rules for n=100 projects and m=10,30 selected projects as a function of knowledge breadth. Both distributions encompass fewer high-quality projects than the uniform distribution analyzed in the main text, resulting in a lower overall portfolio performance. The shown differences in portfolio performance are consistent with the findings reported in the main text. Voting performs substantially better in the simulations with truncated normal distributions centered on zero than in simulations with uniform project-quality distributions. This is because of the many projects with near-zero quality. Whereas uniform distributions favor decision rules that detect relative quality differences between projects, narrow zero-centered normal distributions predominantly require detection of whether or not a project has a positive value. Voting's coarseness more easily achieves the latter. The effectiveness of Voting in portfolio selection from normally distributed projects thus comes to resemble its effectiveness in approving uniformly distributed projects in isolation <cit.>.
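For completeness, the two additional quality distributions can be sampled as follows. The truncated normal is standard; the power-law parameterization shown, with density proportional to (q+5)^(-1/2) on [-5,5], is our assumption: the text states only the exponent, and this form reproduces the roughly 70% share of negative-quality projects and the roughly 5% share above a quality of 4 mentioned above.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
q_lo, q_hi, size = -5.0, 5.0, 100_000

# Truncated standard normal on [q_lo, q_hi] (bounds are given in units of the scale).
q_norm = truncnorm.rvs(q_lo, q_hi, loc=0.0, scale=1.0, size=size, random_state=rng)

# Assumed power law with density proportional to (q - q_lo)**(-1/2), sampled by
# inverting its CDF, F(q) = sqrt((q - q_lo) / (q_hi - q_lo)).
q_pow = q_lo + (q_hi - q_lo) * rng.random(size) ** 2

print(round(float((q_pow < 0).mean()), 2), round(float((q_pow > 4).mean()), 3))  # ~0.71, ~0.051
print(round(float(q_norm.mean()), 2))  # ~0.0
```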
http://arxiv.org/abs/2405.09862v1
20240516073849
Performance of Quantum Networks Using Heterogeneous Link Architectures
[ "Kento Samuel Soon", "Naphan Benchasattabuse", "Michal Hajdušek", "Kentaro Teramoto", "Shota Nagayama", "Rodney Van Meter" ]
quant-ph
[ "quant-ph", "cs.NI" ]
Performance of Quantum Networks Using Heterogeneous Link Architectures This work was supported by JST [Moonshot R&D] [JPMJMS226C]. Kento Samuel Soon13, Naphan Benchasattabuse23, Michal Hajdušek23, Kentaro Teramoto4, Shota Nagayama42, and Rodney Van Meter13 1Faculty of Environment and Information Studies, Keio University Shonan Fujisawa Campus, Kanagawa, Japan 2Graduate School of Media and Governance, Keio University Shonan Fujisawa Campus, Kanagawa, Japan 3Quantum Computing Center, Keio University, Kanagawa, Japan 4Mercari R4D, Mercari, Inc., Tokyo, Japan ================================================================================ The heterogeneity of quantum link architectures is an essential theme in designing quantum networks for technological interoperability and possibly performance optimization. However, the performance of heterogeneously connected quantum links has not yet been addressed. Here, we investigate the integration of two inherently different technologies, with one link where the photons flow from the nodes toward a device in the middle of the link, and a different link where pairs of photons flow from a device in the middle towards the nodes. We utilize the quantum internet simulator QuISP to conduct simulations. We first optimize the existing photon pair protocol for a single link by taking the pulse rate into account. Here, we find that increasing the pulse rate can actually decrease the overall performance. Using our optimized links, we demonstrate that heterogeneous networks actually work. Their performance is highly dependent on link configuration, but we observe no significant decrease in generation rate compared to homogeneous networks. This work provides insights into the phenomena we will likely observe when introducing technological heterogeneity into quantum networks, which is crucial for creating a scalable and robust quantum internetwork. Quantum Network, Quantum Internet, Quantum Link Architecture, Heterogeneity, Quantum Entanglement, Interoperability. § INTRODUCTION Distributing Bell pairs between arbitrary locations is a crucial issue in quantum information systems <cit.>. Many critical real-world applications exist, such as information-theoretically-secure quantum cryptography <cit.>, precise quantum sensing <cit.>, distributed and blind quantum computation <cit.>, and high-speed distributed consensus algorithms <cit.>. There also exist experimental feats that demonstrate long-distance entanglement distribution—see Young et al. for a recent summary of those experiments <cit.>. However, generating entanglement over a long distance or even in a complex, large-scale data center network is difficult due to inherent fiber attenuation. In classical communication, it is conventional to copy and resend the data in the middle of a link with the help of repeaters, but for the quantum case, we cannot use the same approach due to the no-cloning theorem <cit.>.
One promising method for entanglement distribution is generating link-level Bell pairs and utilizing quantum repeaters to perform entanglement swapping <cit.> and entanglement purification <cit.>, expanding a set of link-level entanglement shared between two adjacent nodes into a long-length distributed entanglement. Utilizing such repeaters allows for dealing with photon loss and performing error management to distribute high-quality Bell pairs. In order to efficiently generate link-level Bell pairs, a systematic architecture of link-level Bell pairs, namely, quantum link architectures, needs to be addressed. Jones et al. conducted a high-level theoretical analysis among memory-based link architectures <cit.>. They discovered that the choice of link architectures can significantly affect the generation rate, depending on the hardware parameters. Additionally, Soon et al. extends the link architecture utilizing the entangled photon pair source (EPPS) by offering practical implementation details and demonstrates that the generation time saturates after reaching a certain quantum memory capacity <cit.>. Azuma et al. proposed all-photonic repeaters <cit.>, which are quantum link architectures that do not specifically use memory qubits. All-photonic architectures introduce redundancy in the transmitted photonic states, which leads to near-deterministic entanglement generation. A quantum network does not need to limit itself to a single link architecture. A proper combination of link architectures can play a crucial role in enhancing the performance of a data center-sized quantum multicomputer or crossing the boundaries from a multicomputer's internal network to an external one. Moreover, as seen in the evolution of the Internet, it is hard to control the independent deployment of technologies, and we are likely to have the same issues in building and scaling up a quantum internet. It can fairly be assumed that various organizations will utilize different technologies, and we need to deal with combining them one day. Work has been done on heterogeneous networks for quantum key distribution (QKD) networks, where they use satellites and field fiber for constructing such links <cit.>. Therefore, addressing the heterogeneity of quantum link architectures is essential. Existing work on quantum link architectures and quantum repeater networks has studied homogeneous paths <cit.> and heterogeneous paths <cit.>. However, the work on heterogeneous paths did not investigate the consequences of introducing heterogeneity into the link architectures used in a quantum internet, especially involving link architectures utilizing different combinations of optical components. In this research, we evaluate the performance of the heterogeneous networks consisting of Memory-Interference-Memory (MIM) and Memory-Source-Memory (MSM) links, which are quantum link architectures that have difference in the combinations of optical components. In the former link architecture, photons flow from the nodes toward a device in the middle of the link, and in the latter architecture, photons flow from a device in the middle towards the nodes. Here, heterogeneity regarding all-photonic repeaters and other quantum links that do not specifically use quantum memory is outside our scope and left for future work. We conduct simulations utilizing the quantum internet simulator QuISP <cit.>. In order to test the MSM link fairly against the MIM link, we extend the analysis on the MSM link to consider the entangled photon pair pulse rate. 
Our simulations show that increasing the pulse rate does not necessarily increase the Bell pair generation rate and can even decrease it. Furthermore, we perform simulations to show that heterogeneous networks can successfully implement entanglement swapping. Performance is highly dependent on link configuration, but the generation rate does not significantly change compared to homogeneous links. The insights we have gained from these simulations — such as excessive EPPS pulse rate resulting in less performance in MSM links, and worst-link dependency of the performance of heterogeneous links, are the behaviors we likely will observe in real quantum networks as technology evolves. The ability to thrive in diverse cases is a hallmark of a robust architecture, and we believe it to be crucial for developing a scalable and robust quantum internetwork. § PRELIMINARIES This section presents an overview of the essential background required to comprehend this paper and the specifications for our quantum internet simulator, QuISP. §.§ Quantum networks In both quantum and classical communication, we utilize light to send signals. However, light propagating through optical fiber is subject to attenuation, even in ideal conditions. Therefore, in classical communication, it is conventional to copy, amplify, and resend the data using a repeater to reach the desired location without losing any information. On the other hand, such an approach is impossible when dealing with quantum information due to the no-cloning theorem <cit.>. To overcome this issue, quantum repeaters <cit.> divide a long end-to-end channel that is unlikely to distribute a Bell pair into reasonably short links and apply entanglement swapping <cit.> to establish an end-to-end Bell pair. Consider an example where we have three quantum network nodes, which we label as A, B, and C. Here, A, B, and C are nodes equipped with quantum memories. We share a link-level entanglement in the quantum state |Φ^+⟩_AB_1=(|00⟩+|11⟩)/√(2) between A and B, and |Φ^+⟩_B_2C=(|00⟩+|11⟩)/√(2) between B and C. Under this condition, B measures the two qubits it owns on the Bell state basis. Then, B sends the measurement result via a classical channel. Once A and C receive this message, they apply corresponding Pauli operations to collapse the quantum state into a predetermined entangled state. This results in sharing a Bell pair among A and C. §.§ Quantum link architectures Above, we described entanglement swapping using memory-based operations, which is a deterministic operation. In many link architectures, we utilize optical entanglement swapping, via the inherently probabilistic photonic Bell state measurement (BSM). This suggests that if linear optical components are used in this link-level entanglement generation, even trying to exclude every possible noise/loss source, there is an ideal probability of failing at 50% <cit.>. §.§.§ Memory-Interference-Memory (MIM) link One way to create distant node entanglement is by using a Bell state analyzer (BSA), placing it between two nodes, and sending back the results of the BSM to the local nodes, as shown in Fig. <ref>. With these results, the nodes determine whether to discard the corresponding qubit and, if they keep it, whether to apply the quantum operation to correct the state so that we result in one specific Bell pair, |Φ^+⟩ state. This work was originally proposed in <cit.>. 
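To make the success statistics of such a link concrete, the following is a minimal Monte Carlo sketch of a single MIM attempt. It is not QuISP code: the 0.2 dB/km fiber loss and the 0.5 BSM success ceiling match values used later in this paper, while the function names, the 20 km node separation, and the simple "both photons reach the midpoint, then the BSA succeeds" model are our own illustrative assumptions.

```python
import random

# Illustrative parameters: standard 0.2 dB/km fiber loss, an ideal linear-optics
# BSA (success probability at most 0.5), and a 20 km link with the BSA at the midpoint.
ATTENUATION_DB_PER_KM = 0.2
P_BSA = 0.5
NODE_DISTANCE_KM = 20.0

def photon_survival(km):
    """Probability that a photon survives `km` of fiber."""
    return 10 ** (-ATTENUATION_DB_PER_KM * km / 10)

def mim_attempt(rng):
    """One MIM trial: both photons must reach the midpoint BSA and the BSM must succeed."""
    half = NODE_DISTANCE_KM / 2
    left_arrives = rng.random() < photon_survival(half)
    right_arrives = rng.random() < photon_survival(half)
    return left_arrives and right_arrives and rng.random() < P_BSA

def estimate_success(trials=100_000, seed=1):
    rng = random.Random(seed)
    return sum(mim_attempt(rng) for _ in range(trials)) / trials

analytic = photon_survival(NODE_DISTANCE_KM / 2) ** 2 * P_BSA
print(f"analytic : {analytic:.4f}")   # ~0.20 per attempt in this toy model
print(f"simulated: {estimate_success():.4f}")
```

In this toy model the per-attempt success probability is simply the product of the two survival probabilities and the BSM ceiling; the MM and MSM architectures described next differ mainly in where the optical components sit and how quickly each node learns the measurement outcome.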
§.§.§ Memory-Memory (MM) link In the Memory-Memory link architecture <cit.>, the essential components are more or less the same as in the MIM link, but the BSA is moved inside one of the nodes, as shown in Fig. <ref>. The side equipped with a BSA can make immediate decisions on whether to keep a memory locked or to free it, since it can quickly assess the success or failure of a BSM. However, since the MM link can be interpreted as an architecture that sends photons from one side to the other, an optical BSA is not the only possible technology: one might instead utilize absorptive memories <cit.> or various other technologies <cit.>. §.§.§ Memory-Source-Memory (MSM) link Another way to create distant node entanglement is by inserting an entangled photon pair source (EPPS), as shown in Fig. <ref>. This was first introduced in <cit.>. In our definition, the MSM link architecture is partially similar to the MM link architecture, as the nodes can make efficient decisions, and an EPPS allows such nodes to be on both sides; however, the overall success rate is lower since two optical BSAs are used rather than just one. Previous research has laid out concrete protocols to fully utilize the theoretical predictions made in <cit.> regarding the MSM link architecture <cit.>. Using this protocol, they performed simulations on a single link and compared its behavior against the conventional MIM protocol. They observed that the MSM protocol performs much more efficiently in cases where there is a sufficient number of quantum memories. However, a saturation effect was also observed: even if the number of memories in a node increases, the entanglement generation rate inevitably reaches a limit beyond which it does not improve. Let L denote the distance between the EPPS and the node, c_fiber represent the speed of light in the fiber, p_success indicate the probability of a local node succeeding in its BSM, and f_EPPS represent the EPPS pulse rate. Denoting the number of quantum memories by 𝒩, the saturation effect appears as the absence of any further speedup in Bell pair generation time once 𝒩 ≥ ⌈ (2L/c_fiber) p_success f_EPPS ⌉. § TUNING MSM LINK PERFORMANCE Previously, Soon et al. considered the MSM link architecture to have a fixed EPPS pulse rate <cit.>. However, the EPPS pulse rate in MSM links was not discussed in detail and was arbitrarily fixed to match the BSA detection rate. Here, we adjust the pulse rate to be close to the optimal value with the following procedure. First, the EPPS collects the number of memories and the success probability of Bell state measurements for each node. From these values, we determine the optimal rate of entangled photon pair emissions, taking into account (<ref>). However, we also need to consider the BSA detection repetition limit, which depends on the detector recovery time and the classical electronics in the BSA. The optimal EPPS pulse rate can then be derived as f_EPPS = min( ⌈ 𝒩_left c_fiber / (2 p_left L_left) ⌉, ⌈ 𝒩_right c_fiber / (2 p_right L_right) ⌉, f_BSA ), where f_BSA is the maximum detection rate of the BSA, 𝒩 is the number of memories, p is the success probability, and L is the distance from the node to the EPPS, with subscripts denoting the position of the node with respect to the EPPS. This optimal rate also accounts for cases where the EPPS is situated in an imbalanced location. Our simulation results will show the differences between the adaptive and non-adaptive versions of MSM links.
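This adaptive rule is straightforward to compute; the sketch below is our own illustrative rendering of it (function and argument names are not taken from QuISP), using the c_fiber and f_BSA values listed later among the default simulation parameters.

```python
import math

def adaptive_epps_rate(n_left, n_right, p_left, p_right, l_left_km, l_right_km,
                       c_fiber_km_s=208_189.0, f_bsa_hz=1_000_000):
    """EPPS pulse rate capped by the more constrained side's memory capacity
    and by the BSA detection limit (illustrative rendering of the rule above)."""
    cap_left = math.ceil(n_left * c_fiber_km_s / (2 * p_left * l_left_km))
    cap_right = math.ceil(n_right * c_fiber_km_s / (2 * p_right * l_right_km))
    return min(cap_left, cap_right, f_bsa_hz)

# Symmetric 20 km link (EPPS at the midpoint, 10 km per side), one memory per QNIC,
# local BSM success probability around 0.31: the adaptive rate is ~3.4e4 Hz,
# far below the 1 MHz BSA detection limit.
print(adaptive_epps_rate(1, 1, 0.31, 0.31, 10.0, 10.0))
```

Already for a single memory per QNIC on a symmetric 20 km link, the rule pulls the emission rate well below the BSA detection limit, which is the regime examined in the experiments below.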
§ HETEROGENEOUS NETWORKS In classical networks, the layered protocol model has enabled independent development of each layer <cit.>. Thanks to the abstraction provided by such architecture, heterogeneity in utilizing inherently different technologies naturally arose, and most network engineers above the physical layer can ignore this difference when operating such networks and interconnected networks. In the quantum case, there are many different levels of heterogeneity, both at the hardware level, where various technologies are used to implement qubits and quantum link architectures and at the software level, where different error correction techniques are utilized. We focus on hardware-level heterogeneity between link architectures with different underlying technologies, namely, the MIM and MSM links. Although these two link types are considered significant components of establishing a quantum network, the behavior and performance of what happens when we mix MIM and MSM links, or the heterogeneity of these two links, is still not addressed. Therefore, this work analyzes end-to-end Bell pair generation performance of a network composed of quantum networks. This aims to create a genuinely abstracted, robust quantum network of quantum networks, a quantum internet. We now consider the two major differences between these two types of links. First, the generation time differs. This is natural because different mechanisms of entanglement generation are employed. Second, the way stationary qubits are consumed in the memory is also different. In MIM, once the entire memory of the node has finished emitting the photons, the BSA sends back an array of BSM results, and then the local nodes apply the corresponding correction operations. Similar procedure is done for MSM, but the BSM results are sent back individually for each photon pair instead of a batch. Though there might be some other interesting networks that can be discussed in the sense of heterogeneity, to narrow down our scope, we limit them to the following two networks; (1) a simple entanglement swapping network consisting of two-hop nodes where we freely swap the link architectures, and (2) a network chain consisting mainly of MSM links but with a single MIM link inserted. § METHODS In order to determine the performance of heterogeneous networks, we utilize the quantum internet simulator QuISP <cit.>. §.§ QuISP specific clarifications In our simulation model, the quantum repeater nodes have a Quantum Network Interface Card (QNIC) with 𝒩 memories for each adjacent link. Therefore, the node acting as a repeater has a total of 2𝒩 memories, where CNOT gates can be applied between any pair of qubits even if they reside in different QNIC, as shown in Fig. <ref>. In QuISP and in the testbed network we are constructing, the order of entanglement swapping depends on what we specify in the RuleSet <cit.>, which provides a comprehensive definition of a set of commands that instruct what each node should do, depending only on local events and message arrival. Many approaches to sequencing entanglement swapping are possible. The default entanglement swapping scheme used in QuISP is by a bisection method, where the end-to-end path is recursively split in the middle to maximize the parallelization of swaps at every time step. This swapping method takes 𝒪(logN) timesteps, where N is the number of nodes. 
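For concreteness, the following small recursion (our own sketch, not the RuleSet implementation in QuISP) lists which repeaters swap in parallel at each timestep under the bisection order just described.

```python
def bisection_schedule(n_nodes):
    """Timesteps of the bisection swapping order on a path 0, 1, ..., n_nodes - 1;
    step t lists the repeaters that perform their swap in parallel at that step."""
    steps = {}

    def merge(lo, hi):
        # Entangle nodes lo and hi; return the step index from which they share a pair.
        if hi - lo <= 1:
            return 0                                  # adjacent nodes: link-level pair
        mid = (lo + hi) // 2
        ready = max(merge(lo, mid), merge(mid, hi))   # both halves must finish first
        steps.setdefault(ready, []).append(mid)       # then `mid` swaps
        return ready + 1

    rounds = merge(0, n_nodes - 1)
    return [sorted(steps.get(t, [])) for t in range(rounds)]

# A 9-node path (8 hops) needs ceil(log2(8)) = 3 swap rounds.
for t, nodes in enumerate(bisection_schedule(9)):
    print(f"step {t}: nodes {nodes} swap")
```

For a path of N nodes this schedule has ⌈log_2(N-1)⌉ rounds, consistent with the 𝒪(logN) bound above.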
§.§ Default configurations for our simulations Here, we lay out the default configurations for the parameters our links have in common across all simulations. The major parameters are set as follows. * Attenuation rate in the optical fiber is 0.2 dB/km. * Speed of light in optical fiber is defined by c_fiber = 208189km/s. * BSA success probability is defined by p_BSA=0.5 (optimal value). * EPPS pulse rate is defined by f_EPPS=1MHz (for non-adaptive MSM protocols). * BSA detection rate is defined by f_BSA=1MHz. The EPPS pulse rate and the BSA detection rate were selected in order to keep the execution time of the simulations feasible. The support nodes (EPPS or BSA) are placed right in the middle between adjacent nodes. In our simulation, we measure the time to generate 100 Bell pairs, utilizing the Bell pair generation rate (BP/s) as our metric. These Bell pairs are generated between the network's two end nodes. However, discrepancies arise in the recorded time required to generate 100 Bell pairs due to local information at each end node. Each end node marks the total Bell pair generation time upon receiving the corresponding classical messages. However, in reality, a Bell pair is genuinely shared between the remote nodes only when both have received their respective classical messages. Therefore, we record the later timing between the two nodes for Bell pair generation time. We performed simulations 100 times for each simulation instance, and the error bars of the figures represent the standard deviation of the data, barely visible in most cases. §.§ Outline of our simulations We focused on the following situations, in order to test the adaptive frequency of the MSM link and the heterogeneity introduced into the quantum network. First, we conducted an experiment comparing the performance of adaptive MSM links, non-adaptive MSM links, and conventional MIM links, varying the number of memories in each QNIC (Experiment 1). Second, we conducted another experiment comparing the performance of a two-hop network with homogeneous and heterogeneous links, as illustrated in Fig. <ref>, varying the number of memories in each QNIC (Experiment 2). Finally, we conducted an experiment to investigate whether replacing segments of a homogeneous path with different link architectures, thereby creating a heterogeneous path as depicted in Fig. <ref>, affects performance (Experiment 3). While our simulations focus on these three instances, they lay the groundwork for adaptive MSM links and heterogeneous networks. This framework provides a foundation for future investigations, such as integrating multiplexing of links into the networks or exploring heterogeneity among an even wider array of link architectures, to build upon our findings. § RESULTS In this section, we present our observations and explanations for the simulation cases we have listed above. §.§ Experiment 1: Optimizing single MSM link performance First, we conducted a simulation to investigate the effects resulting from alterations in the EPPS pulse rate, as outlined in equation (<ref>). We adjusted the number of memories within the QNICs, keeping the same node separation distances set at either 1km or 20km. The results are shown in Figs. <ref> (1km) and <ref> (20km). §.§.§ MIM vs. MSM, Adaptive MSM We can observe that for MIM links, the performance increases linearly with the number of quantum memories. 
This relationship stems from the direct dependence of MIM link performance on the number of photons emitted in each trial, which in turn is constrained solely by the number of memories available. In contrast, the performance improvement in MSM links (both adaptive and non-adaptive) is not linear. Unlike MIM links, MSM links are constrained by the EPPS pulse rates governing the number of photons per trial, with memory count at end nodes representing another capacity limit for Bell pair distribution if the number of memories is lower. Furthermore, we observe a saturation effect in both adaptive and non-adaptive protocols, where the generation rate plateaus beyond a certain threshold, denoted as 𝒩. Even in the adaptive scenario, where we regulate the EPPS pulse rate to prevent surpassing the BSA detection rate, this saturation phenomenon persists. §.§.§ MSM vs Adaptive MSM When the node-to-node distance is set to 1km, opting for the adaptive pulse rate leads to a decrease in the Bell pair generation rate. Notably, the saturation effect becomes evident with the increase in the number of quantum memories. However, the distinction between adaptive and non-adaptive MSM links is statistically insignificant, thus employing the adaptive protocol on short links appears to have minimal impact. However, when extending the node-to-node distance to 20km, the adaptive protocol demonstrates a significant improvement, achieving an order of magnitude improvement over the non-adaptive counterpart. To calculate the predicted optimal adaptive EPPS pulse rate (f_EPPS) for a memory size of 𝒩=1, we use the formula p_success=p_BSA p_fiber, where p_fiber=e^-L/L_0 with attenuation length L_0=21km and L=10km, yielding p_success=0.3106 and f_EPPS≃ 33517Hz. In contrast, for a non-adaptive MSM link, f_EPPS was set to 1MHz. Surprisingly, despite reducing the EPPS pulse rate by a factor of 30, we observed an enhancement in overall entanglement generation. This counterintuitive phenomenon is further demonstrated in Fig. <ref>. Here, the red line represents entangled photon pairs, the blue lines represent classical communication, and the circles and crosses represent the local BSM success and failure. Let the two nodes of interest be node A and node B, and consider the case where A has succeeded and B failed. In this case, the photon pair that A has latched is already invalid, but we do not know that until we receive the classical message from B. While waiting for that classical message, there will be m events of photon emission trials from the EPPS, m = ⌊ f_EPPS2L/c_fiber⌋. With this m, the probability that we observe at least one success in B after its failure on the photon pair which succeeded on A is denoted by p_≥ 1, where p_≥ 1 = 1-(1-p_success)^m. However, the photon pair that B has latched is invalid, as observed in the initial case. It is natural to think that this phenomenon occurs repeatedly. Hence, when the probability p_≥ 1 is excessively high, we risk encountering a recurring obstacle where this phenomenon continues. Thus, establishing link-level entanglement could become time-consuming, particularly when p_≥ 1 approaches one. Therefore, ensuring that this probability remains sufficiently far from one is crucial for facilitating efficient entanglement generation in low-memory count MSM links. To validate our analysis, we substitute the parameters for the 20km link to see the values of these probabilities. 
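A minimal numerical sketch of this substitution (ours, using only the quantities just defined: L = 10 km from each node to the EPPS, L_0 = 21 km, p_BSA = 0.5, and c_fiber = 208189 km/s) is:

```python
import math

# Parameters for the 20 km MSM link: the EPPS sits at the midpoint, so L = 10 km per side.
L_km = 10.0
L0_km = 21.0                    # attenuation length
c_fiber_km_s = 208_189.0
p_bsa = 0.5

p_fiber = math.exp(-L_km / L0_km)
p_success = p_bsa * p_fiber     # ~0.3106

def wait_window(f_epps_hz):
    """Pulses m emitted during one round trip to the EPPS and back, and the
    probability of at least one further local BSM success in that window."""
    m = math.floor(f_epps_hz * 2 * L_km / c_fiber_km_s)
    return m, 1 - (1 - p_success) ** m

adaptive_f = math.ceil(1 * c_fiber_km_s / (2 * p_success * L_km))   # N = 1 memory

print(f"p_success = {p_success:.4f}")                  # ~0.3106
print(f"adaptive f_EPPS = {adaptive_f} Hz")            # ~3.4e4 Hz
print("non-adaptive 1 MHz:", wait_window(1e6))         # m = 96, p close to 1
print("adaptive rate:     ", wait_window(adaptive_f))  # m = 3,  p ~ 0.67
```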
Take note that in this case (where the node distance is set to be 20km), the success probability for each BSM is obtained as p_success=0.31. Calculating the probability p_≥ 1 for f_EPPS=1MHz and m=96, we obtain asymptotically p_≥ 1=1, and for f_EPPS=33517Hz and m=3, we obtain p_≥ 1≃ 0.67. This analysis suggests that having excessive EPPS pulse rate can decrease the entanglement generation rate, thereby providing a clear explanation for the observed behavior. §.§ Experiment 2: Two Hops We now analyze two-hop networks where entanglement swapping comes into action by comparing networks that include both MSM and MIM links (heterogeneous) against those that consist solely of MSM or MIM links (homogeneous), as shown in Fig. <ref>. Within the MSM links, there are two variations; adaptive and non-adaptive. We have conducted simulations on these three networks by varying the number of memories in each QNIC and the distance between each node. The results are shown in Figs. <ref> (1km) and <ref> (20km). This result shows that generating Bell pairs through a network under a heterogeneous link architecture is feasible without any large drop in the overall Bell pair generation rate. We observe that the performance of heterogeneous networks (MIM with MSM) consistently falls between the extremes exhibited by MIM-only and MSM-only configurations (both adaptive and non-adaptive). While MIM-only performance excels in certain scenarios and falters in others, MSM-only performance follows a similar pattern but in the opposite direction. This trend holds across most of our simulations, and cases, where this effect is absent, can generally be dismissed as statistically insignificant due to all schemes having similar performance. We can explain why we observe this as follows. First, the generation rate for heterogeneous links depends mainly on the slower link. On one side, where there are fast links, seen from the slow link side, the nodes on the fast link side can freely use entanglement on demand. We can think of the conditional probability of overall success depending on p where p is the success probability of the slow link. This means the generation rate can be calculated as 1/(t_slow link + t_ES delay), where t_slow link is the time to generate a single Bell pair for the slower link, and t_ES delay is the time lag introduced due to entanglement swapping. If a slow link is connected with a slow link or a fast link with a fast link, the conditional probability of overall success will not be equal to p. This suggests that the overall generation rate is reasonably slower than the generation rate of a single link. If we take notice to the results we have shown in experiment one, we can tell that the interpretation above was correct for most of the cases. This result is in accordance with the impact of having a weak link in a chain <cit.>, which addressed that the performance of a heterogeneous path was essentially limited by the worst link. §.§ Experiment 3: Longer Paths Next, we would like to investigate whether there will be a difference in entanglement generation when replacing different parts of a homogeneous path with one different link, which makes it a heterogeneous path. The network is shown in Fig. <ref>, and we replace links zero, one, two, three, or four. In our simulation cases, the majority link is MSM, and the minority link is MIM. We have conducted simulations on different numbers of memories for different link lengths. The x-axis indicates which link from Fig. 
<ref> is replaced by the minority link type. The results are shown in Figs. <ref> (1km, four memories), <ref> (1km, 32 memories), <ref> (20km, 4 memories), and <ref> (20km, 32 memories). Within both homogeneous and heterogeneous cases, most exhibit the same or similar entanglement generation rate. However, for instances where there is a significant difference in the performance between single MIM links and single MSM links (specifically, for cases where 𝒩=4, node distance 1km and 𝒩=32, node distance 20km, as also seen in experiment one), replacing link zero leads to a change in the overall entanglement generation rate. In this case, when MIM is relatively slower than MSM, the entire generation rate also decreases; we see a slight improvement when MIM is fast. Regarding link four, there is a slight decrease in the generation rate for instances where MIM is slower. From Fig. <ref>, we can see that some repeater nodes consume their link Bell pairs earlier in the entanglement swapping process, whereas some other nodes, especially the end nodes, need to keep their Bell pair for a fairly long time. This means that the repeater nodes can generate link-level Bell pairs while waiting for the generation of end-to-end Bell pairs, but the end nodes do not have that buffer time. This result suggests that improving the entanglement generation rate at the end node and other links where the “load” is large, even for homogeneous networks, is likely required for efficient overall Bell pair generation under our entanglement swapping method. In our case, the critical links were link zero. We observed a similar behavior in link four as well. We can claim that even if we use methods other than the bisect method to schedule the entanglement swapping, the links with a large load still require an efficient link-level Bell pair. § CONCLUSION We performed several numerical simulations addressing MSM and MIM link heterogeneity, using the quantum internet simulator QuISP. We have first extended our previous work on the MSM link by observing the effect of adjusting the EPPS pulse rate and compared it to our previous results. We can notice that the performance of a link strongly depends on the selection of link architecture, and the given number of quantum memories at each node. We observed that link-level Bell pair generation rate can be seriously degraded for MSM link when the EPPS pulse rate is set at excessively high rate, especially for repeaters with low number of quantum memories. We introduced an empirical model that describes why this phenomenon occurs, which strongly depends on the EPPS pulse rate. In the near future, we will have not many quantum memories, where this effect becomes significant. Therefore, adjusting this part was crucial. Furthermore, we performed several simulations addressing MSM and MIM link heterogeneity. These quantum link architectures utilize different combinations of optical components. We demonstrated that we need not do anything special to generate Bell pairs even in the case of heterogeneous networks, and we did not even observe a significant drop in execution time in comparison with MIM-only, MSM-only homogeneous networks. We introduced a model based on our results, which show that the end-to-end entanglement generation rate strongly depends on the link where it is particularly slow, which depends on the link architecture and the capacity of quantum memories. 
Finally, we tested a network consisting of 10 nodes, in which the majority of links were MSM links and a single minority link was an MIM link, and observed how relocating the MIM link through the network affects performance. We observed that the performance changed when we replaced the end node links and some other critical points in the path. We attribute this to our entanglement swapping order: nodes that must hold their quantum memories longer have less buffer time than nodes that release their memories earlier in the entanglement swapping process. Thus, it was natural to observe that replacing the MSM link connected to an end node with an MIM link, especially under conditions where the MIM link was relatively slower than the MSM link, resulted in an overall decrease in performance. Such data provides insight into optimizing the entanglement swapping sequence, as our simulator is based on the RuleSet architecture. In analyzing our results, we noticed that the link-level generation time was the key factor behind this behavior. This suggests that heterogeneity among memory-based quantum link architectures is still an issue to be addressed, but it does not cause serious generation time loss and is completely feasible. Our work did not consider multiplexing or heterogeneity in non-memory-based link architectures, and the network configurations were limited to simple cases. We leave heterogeneity between memory-equipped and memoryless quantum repeaters as future work. The issue of heterogeneity must be a design goal of a quantum internet architecture; acceptable behavior must be designed in from the beginning. This work provides a first look at an expected form of heterogeneity, but we can also expect the unexpected: the introduction of still more, previously unknown, heterogeneous technologies over time. We believe tolerance and robustness to be crucial to deploying a fully scalable quantum internetwork. § CODE AVAILABILITY The code we used to obtain the results can be found in the GitHub repository for QuISP, under the branch <https://github.com/sfc-aqua/quisp/tree/heterogeneous-msmmim>. § ACKNOWLEDGMENT We thank Akihito Soeda, Bernard Ousmane Sane and Amin Taherkhani for valuable discussions. Grammarly was used to enhance the text quality.
http://arxiv.org/abs/2405.09236v1
20240515103142
Roots in the semiring of finite deterministic dynamical systems
[ "François Doré", "Kévin Perrot", "Antonio E. Porreca", "Sara Riva", "Marius Rolland" ]
cs.DM
[ "cs.DM", "math.DS" ]
Roots in the semiring of finite deterministic dynamical systems ================================================================ Finite discrete-time dynamical systems (FDDS) model phenomena that evolve deterministically in discrete time. It is possible to define sum and product operations on these systems (disjoint union and direct product, respectively) giving a commutative semiring. This algebraic structure led to several works employing polynomial equations to model hypotheses on phenomena modelled using FDDS. To solve these equations, algorithms for performing the division and computing k-th roots are needed. In this paper, we propose two polynomial-time algorithms for these tasks, under the condition that the result is a connected FDDS. This ultimately leads to an efficient solution to equations of the type AX^k=B for connected X. These results are some of the important final steps for solving more general polynomial equations on FDDS. § INTRODUCTION Finite discrete-time dynamical systems (FDDS) are pairs (X,f) where X is a finite set of states and f: X → X is a transition function (where no ambiguity arises, we will usually denote (X,f) simply as X). These systems emerge from the analysis of concrete models such as Boolean networks <cit.> and are applied in biology <cit.> to represent, for example, genetic regulatory networks or epidemic models. We can find them also in chemistry <cit.>, to represent the evolution over discrete time of chemical reactions, or in information theory <cit.>. We can identify dynamical systems with their transition graphs, which have uniform out-degree one (these are also known as functional digraphs). Their general shape is a collection of cycles with a finite number of directed trees (with arcs pointing towards the root, in-trees) anchored to them by the root. The nodes inside the cycles are periodic states, while the others are transient states. The set (𝔻, +, ×) of FDDS taken up to isomorphism, with the disjoint union as a sum operation (corresponding to the alternative execution of two systems) and the direct product <cit.> (corresponding to synchronous execution), is a commutative semiring <cit.>. However, this semiring is not factorial: an FDDS admits, in general, multiple factorizations into irreducibles. For this reason, the structure of the product is more complex than in other semirings such as the natural numbers, and our understanding of it remains limited. We are still unable to characterize or efficiently detect the FDDS obtained by parallel execution of smaller FDDS. Some literature analyzes this problem restricted to periodic behaviours, that is, to FDDS with permutations as their transition function <cit.>. Studying these restricted FDDS is justified by the fact that they correspond to the stable, asymptotic behaviour of the system. However, transient behaviour is far richer and more varied when modelling phenomena from, for example, biology or physics. FDDS with a single fixed point have also been investigated <cit.>, focusing more on the transient behaviours. Nevertheless, we cannot investigate general FDDS through a simple combination of these two techniques. A direction for reducing the complexity of the decomposition problem is finding an efficient algorithm for equations of the form AX = B, that is, for dividing FDDS. The problem is trivially in NP, but we do not know its exact complexity (e.g., whether it is NP-hard or solvable in polynomial time). However, <cit.> proved that we can solve these equations in polynomial time if A and B belong to certain classes of permutations, that is, FDDS without transient states.
Nevertheless, the complexity of more general cases is unknown even for permutations. Another direction is to propose an efficient algorithm for the computation of roots over FDDS. Since <cit.>, we are aware of the uniqueness of the solution of k-th roots, but once again we do not know the exact complexity of the problem beyond a trivial upper bound. In this paper, we will exploit the notion of unroll introduced in <cit.> to address the division and the root problems in the specific case where X is connected (the graph of X contains just one connected component). More precisely, we start by showing that we can compute in polynomial time a FDDS X such that AX = B, if any exists. We also show that we can compute in polynomial time, given an FDDS A and a strictly positive integer k, a connected FDDS X such that X^k = A, if any exists. These two last contributions naturally lead to a solution to the more general equation AX^k=B. § DEFINITIONS In the following, we will refer to the in-trees constituting the transient behaviour of FDDS just as trees for simplicity. An FDDS has a set of weakly connected components, each containing a unique cycle. In the following, we will refer to FDDS with only one component as connected. In literature, two operations over FDDS have been considered: the sum (the disjoint union of the components of two systems) and the product (direct product <cit.> of their transition graphs). Let us recall that, given two digraphs A = (V, E) and B= (V', E'), their product A× B is a digraph where the set of nodes is V× V' and the set of edges is ((v, v'), (u, u')) | (v, u) ∈ E, (v', u') ∈ E'. When applied to the transition graphs of two connected FDDS with cycle lengths respectively p and p', this operation generates (p,p') components with cycles of length (p,p') <cit.>. Let us recall the notion of unroll of dynamical systems introduced in <cit.>. We will denote trees and forests using bold letters (in lower and upper case respectively) to distinguish them from FDDS. Let A be an FDDS (X,f). For each state u∈ X and k ∈ℕ, we denote by f^-k(u) = { v ∈ X | f^k(v) = u } the set of k-th preimages of u. For each u in a cycle of A, we call the unroll tree of A in u the infinite tree t_u = (V,E) having vertices V = {(s,k) | s ∈ f^-k(u), k ∈ℕ} and edges E = {((v,k),(f(v),k-1) ) }⊆ V^2. We call unroll of A, denoted A, the set of its unroll trees. Unroll trees have exactly one infinite branch on which the trees representing transient behaviour hook and repeat periodically. Remark that the forest given by the unroll of a connected FDDS may contain isomorphic trees and this results from symmetries in the original graph. This transformation from an FDDS to its unroll has already proved successful in studying operations (particularly the product operation) at the level of transient behaviours. Indeed, the sum (disjoint union) of two unrolls corresponds to the unroll of the sum of the FDDS; formally, A+A'= A+A'. For the product, it has been shown that it is possible to define an equivalent product over unrolls for which A×A'= A× A'. Here and in the following, the equality sign will denote graph isomorphism. Let us formally define the product of trees to be applied over the unroll of two FDDS. Since it is known that the product distributes over the different trees of the two unrolls <cit.>, it suffices to define the product between two trees. Intuitively, this product is the direct product applied layer by layer. To define it, we let v be the distance of the node from the root of the tree. 
Consider two trees t_1=(V_1,E_1) andt_2=(V_2,E_2) with roots r_1 and r_2, respectively. Their product is the tree t_1 ×t_2=(V,E) such that V=(v,u)∈ V_1× V_2 |v=u and E=((v,u),(v',u')) | (v,u)∈ V, (v,v')∈ E_1, (u,u')∈ E_2. In the following, we use a total order ≤ on finite trees introduced in <cit.>, which is compatible with the product, that is, if t_1 ≤t_2 then t_1 t≤t_2 t for all tree t. Let us briefly recall that this ordering is based on a vector obtained from concatenating the incoming degrees of nodes visited through a BFS. During graph traversal, child nodes (preimages in our case), are sorted recursively according to this very order, resulting in a deterministic computation of the vector. We will also need the notion of depth for finite trees and forests. The depth of a finite tree is the length of its longest branch. For a forest, it is the maximum depth of its trees. In the case of unrolls, which have infinite paths, we can adopt the notion of depth of a dynamical system (that is, the largest depth among the trees rooted in one of its periodic states). For an unroll tree t, its depth is the depth of a connected FDDS A such that t∈A. See Figure <ref>. fig/unroll We now recall three operations defined in <cit.> that will be useful later. Given a forest f, we denote by F the multi-set of trees rooted in the predecessors of the roots of F. Then, we denote by f the tree such that f = F. Intuitively, this second operation connects the trees to a new common root. Finally, given a positive integer k, we denote tk the induced sub-tree of t composed by the vertices with a depth less or equal to k. Let us generalize the same operation applied to a forest f = t_1 + ... + t_n as Fk = t_1k + ... + t_nk. § COMPLEXITY OF FDDS DIVISION WITH CONNECTED QUOTIENT In this section we establish an upper bound to the complexity of division over FDDS. More formally, our problem is to decide if, given two FDDS A and B, there exists a connected FDDS X such that AX=B. To achieve this, we will initially prove that cancellation holds over unrolls, that EX = EY implies X = Y for unrolls E,X,Y. Later, we will extend the algorithm proposed in <cit.> to handle more general unrolls (rather than just those consisting of a single tree), ultimately leading to our result. We begin by considering the case of forests containing a finite number of finite trees; we will refer to them as finite tree forests. We will later generalise the reasoning to forests such as unrolls. Let A, X, and B be finite tree forests. Then, AX = B if and only if Ax = B. (⇐) Assume AX = b. Then,Ax = b = B. Moreover, since A and X are finite trees, by <cit.> we have: Ax = Ax = AX. (⇒) We can show the other direction by a similar reasoning. Thanks to this lemma, we can generalise Lemma 21 of <cit.> as follows. Let A, X, and Y be finite tree forests. Then AX = AY if and only if XA = YA. (⇐) By the definition of tree product, all nodes of x (resp., y) of depth larger than a do not impact the product AX (resp., AY). Thus, we have AX = AXA and AY = AYA. Since XA = YA, we conclude that AX = AY. (⇒) Suppose AX = AY. By Lemma <ref>, we have Ax = Ay. Since A, X, and y are finite trees, we deduce <cit.> xdepth(a) = ydepth(a) For all forest F and d>0, we have that 𝒟(Fd) is the multiset containing the subtrees rooted on the predecessors of the roots of Fd. It is therefore the same multiset as that which is composed of the subtrees rooted on the predecessors of the roots of F cut at depth d-1. It follows that Fd-1 = Fd. 
In particular, for F = X and d = A + 1 = A, we have XA = XA. Likewise, ya = YA. By applying · to both sides of (<ref>), we conclude XA = YA. Lemma <ref> is a sort of cancellation property subject to a depth condition. The first step to prove cancellation over unrolls is proving the equivalence between the notion of divisibility of unrolls and divisibility over deep enough finite cuts. Let A, X, and B be FDDS with α equal to the number of unroll trees of B. Let n ≥α + B. Then AX = B if and only if AnXn = Bn To prove Proposition <ref>, we can apply the same reasoning of <cit.> (see appendix for the details of the proof). We remark that the cut operation over B at a depth n generates a forest where the size of each tree is in m^2 and the total size is in m^3 with m the size of B (the number of nodes), since the chosen n is at most m. Now, we can prove the main result of this section. For unrolls A, X, Y we have AX = AY if and only if X = Y. Let α be the number of trees in AX and n ≥α + ax be an integer. By Proposition <ref>, AX = AY if and only if AnXn = Anyn. In addition, by Lemma <ref>, Anxn = anyn if and only if xn = yn. By Proposition <ref>, the theorem follows. Let us introduce the notion of periodic pattern of an unroll tree. Recall that an unroll tree t has exactly one infinite branch on which the trees (t_0,t_1,…) representing transient behaviour hook and repeat periodically. Let p be a positive integer. A periodic pattern with period p of t is a sequence of p finite trees (t_0,…,t_p-1) rooted on the infinite branch such that, for all i∈ℕ we have t_i=t_i p. Let us point out that the idea here is to obtain a set of trees such that we represent all different behaviours repeating in all unroll trees, obtaining a finite representation. For connected FDDS, since its period p is the number of trees in its unroll, we can reconstruct the FDDS itself from a periodic pattern (t_0,…,t_p-1) of one of its unroll trees t_u by adding edges between t_i and t_(i+1) p for all i. We call this operation the roll of t_u of period p. The following lemma shows that we can recover the periodic pattern of an unroll tree from a deep enough cut. Let A be a connected FDDS of period p, t be an unroll tree of A, and n ≥ p + A. Let (v_n,…,v_0) be a directed path in tn such that v_n=n and v_0 is the root of the tree. Then, nodes v_p,…,v_0 necessarily come from the infinite branch of t. We assume, by contradiction, that at least one of the nodes v_p,…,v_0 does not come from the infinite branch of t. Let v_a be the node of (v_p-1,…,v_0) with maximal depth coming from the infinite branch of t; there always is at least one of them, namely the root v_0. We have v_n≤v_a+t. However, we assumed v_a<p, thus v_n<p+t. Since, v_n=n, we have n<p+t=p+A which is a contradiction. We can finally describe a division algorithm for FDDS working under the hypothesis that the quotient is connected. Given two FDDS A and B, where B has α trees, we can compute X such that X is a connected FDDS and A X = B (if any exists) by * cutting A and B at depth n = α + B * computing x with the division algorithm <cit.> to divide the trees Bn by An * computing the connected FDDS X as the roll of period p of any tree of x, where p is equal to the number of trees in x * and verifying if X multiplied by A is isomorphic to B. Since the depth where we cut is large enough, Proposition <ref>, Lemma <ref> and the correctness of the division algorithm of <cit.> imply that the tree x computed in Step <ref> of Algorithm <ref> satisfies Anx = Bn. 
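The cutting step above is easy to realize directly; the following is our own illustrative sketch (not an implementation from the cited works), in which an FDDS is represented as a Python dictionary mapping each state to its image, and each unroll tree cut at depth n is encoded as a sorted tuple of child subtrees. The tree-by-tree division, the roll, and the final product-and-compare check are not reproduced here.

```python
from collections import defaultdict

def periodic_states(f):
    """States of the functional graph f (a dict state -> state) that lie on a cycle."""
    periodic = set()
    for s in f:
        slow, fast = s, s
        while True:                       # Floyd's cycle finding: stop inside s's cycle
            slow, fast = f[slow], f[f[fast]]
            if slow == fast:
                break
        cycle = {slow}
        t = f[slow]
        while t != slow:
            cycle.add(t)
            t = f[t]
        periodic |= cycle
    return periodic

def cut_unroll(f, depth):
    """The unroll of (X, f) cut at `depth`: one tree per periodic state, each tree
    encoded as a sorted tuple of its children's encodings."""
    preimages = defaultdict(list)
    for s, image in f.items():
        preimages[image].append(s)

    def tree(state, d):
        if d == 0:
            return ()
        return tuple(sorted(tree(p, d - 1) for p in preimages[state]))

    return tuple(sorted(tree(u, depth) for u in periodic_states(f)))

# A 2-cycle {0, 1} with transient states 3 -> 2 -> 0.
f = {0: 1, 1: 0, 2: 0, 3: 2}
print(cut_unroll(f, 3))
```

Sorting children recursively gives a canonical encoding, so isomorphic cut trees compare equal.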
By the definition of unroll, since we only search for connected FDDS, if x is the cut of an unroll then the rolls of each tree of x at period p are isomorphic. Furthermore, Lemma <ref> ensures that we can roll each tree in x. However, x is not necessarily the cut of an unroll and it is possible that there exists an FDDS X such that x = Xn but AX ≠ B (an example can be seen in Figure <ref>). As a consequence, Step <ref> of Algorithm <ref> is mandatory to ensure its correctness. fig/unroll_counter_example Algorithm <ref> runs in m^9 time, where m is the size of its inputs. The cuts of depth n of the unrolls of A and B can be computed in m^3 time and the size of the result is m^3. In fact, we can construct An and Bn backwards from their roots up to depth n; the size of An is bounded by the size of Bn, which is m^3. By analysing the division tree algorithm in Figure 6 of <cit.>, we can check that it can be executed in cubic time. Moreover, since its inputs have size m^3, step <ref> of Algorithm <ref> requires m^9 time. The roll procedure of a tree can be computed by a traversal, requiring m^2 time. Finally, the product of two FDDS is quadratic-time on its input but linear-time on its output. However, in our case, the size of the output of AX is bounded by the size of B; hence, the product can be computed in m time. Finally, the isomorphism test requires m <cit.>. § COMPLEXITY OF COMPUTING K-TH ROOTS OF UNROLLS The purpose of this section is to study the problem of computing connected roots on FDDS, particularly on transients. Let A = t_1 + … + t_n be a forest and k a positive integer. Then A^k = ∑_k_1+…+k_n = kkk_1,…,k_n∏_i=1^nt_i^k_i; furthermore, since the sum of forests is their disjoint union, each forest (in particular A^k) can be written as a sum of trees in a unique way (up to reordering of the terms). The injectivity of k-th roots, in the semiring of unrolls, has been proved in <cit.>. Here, we study this problem from an algorithmic and complexity point of view, and find a polynomial-time upper bound for the computation of k-th roots. We begin by studying the structure of a forest of finite trees raised to the k-th power. Indeed, if we suppose X = t_1 + … +t_n with t_i ≤t_i+1, we want to be able to identify the smallest tree of X^k from the product t_i ×∏_j=1^nt_j. Moreover, we want to be able to identify it for all t_i. Hereafter, we consider a^0 to be equivalent to the simple oriented path with length equivalent to the depth of a (the same is true for forests). Let X be a forest of the form X=t_1+…+t_n (with t_i≤t_i+1) and k a positive integer. For any tree t_i of depth d_i in X, the smallest tree t_s of depth d_i with factor t_i in X^k is isomorphic to t_m^k-1t_i, where t_m is the smallest tree of X with depth at least d_i. Let us assume that the smallest tree t_s of depth d_i with factor t_i in X^k is not isomorphic to t_m^k-1t_i. Two cases are possible. Either t_s contains a third factor other than t_m and t_i, or it is of the form t_m^k-k_it_i^k_i, with k_i>1. In the former case, let us suppose that there exists a ∈{1,…,i-1}∖{m} and k_a > 0 such that t_a ≠t_m and t_s is isomorphic to t_i^k_it_a^k_at_m^k_m. Remark that, according to <cit.>, the smallest tree of depth d_i with factor t_i in X^k necessarily has all its factors of depth at least d_i. For this reason, we can assume t_a≥ d_i without loss of generality. However, since t_m < t_a, we have t_m^k_m + 1t_a^k_a - 1 < t_m^k_mt_a^k_a. Thus, we have that t_i^k_it_m^k_m + 1t_a^k_a - 1 < t_i^k_it_m^k_mt_a^k_a. 
This brings us into contradiction with the minimality of t_s. In the second case, we assume that t_s is isomorphic to t_m^k-k_it_i^k_i with k_i > 1. By hypothesis, we have t_i ≥t_m. If we consider the case of t_m < t_i, we have t_m^k-k_i + 1t_i^k_i -1 < t_m^k-k_it_i^k_i. Once again, this is in contradiction with the minimality of t_s. In the case of t_m = t_i, we have t_m^k - k_i +1t_i^k_i -1 = t_m^k - k_it_i^k_i. But we supposed t_s not isomorphic to t_m^k-1t_i. This concludes the proof. Before describing an algorithmic technique for computing roots over unrolls (forests), we need a last technical lemma. Let x and a be two finite trees such that x^k = a, and k a positive integer. Then, a = x^k. Since x is a tree, for all i ≤ k, x^i is also a tree. According to <cit.>, we have a = x^k = x^k. We now introduce an algorithmic procedure to compute the roots over forests based on an induction over decreasing depths in which, each time, we reconstruct part of the solution considering the smallest tree with at least a specific depth (according to Lemmas <ref> and <ref>). Given a forest a and k a strictly positive integer, we can compute x such that x^k = a with Algorithm <ref>. In Algorithm <ref>, the main idea is to extract iteratively the minimal tree among the tallest ones in A (t_s). This tree will be used to reconstruct one of the trees of X (t_i). This can be done in two ways according to two possible scenarios. In the first case, t_s is smaller than the smallest one already reconstructed (t_m) raised to the power k. If this is the case, we compute a new tree in X through a recursive call to our function. In the second case, the extracted tree is greater than t_m^k. This means, by Lemma <ref>, that it is a product of the smallest reconstructed (t_m) one and a new one (t_i). In this case, the latter can be computed by the algorithm of <cit.>. After the reconstruction of a tree t_i of X, we remove from A all the trees obtainable from products of already computed trees in X. This allows us to extract progressively shorter trees t_s from A and to compute consequently shorter trees of X. When we remove all trees in A obtainable from trees t_i with depth at least d_i in X, this leaves us only trees with depth at most d_i. Since for each depth, the number of trees of this depth is finite, the algorithm necessarily halts. Let us consider an example. In Figure <ref>, in order to compute the left side from the right one, the first tree considered is t_1^2, the single tallest one. The latter can be used to compute t_1 recursively. Next, the smallest one among the remaining ones is t_0^2, which is smaller than t_1^2. Thus, we can compute t_0 again through recursion. Finally, the last tree extracted, after removing the trees with exclusively t_0 and t_1 as factors, is t_0t_2. Since this time t_0^2 is smaller, we can get the third and final tree t_2 by dividing it by the smallest computed tree yet. fig/root_intuition_decreasing_depth Algorithm <ref> runs in m^3 time if k is at most ⌊log_2 m ⌋, where m is the size of a. If A is a path, then the algorithm halts in linear time m on line <ref>. Otherwise, there exists a level i of A containing β≥ 2 nodes. In order to justify the upper bound on k, suppose R^k = A. Then, level i of R contains √(β) nodes. The smallest integer greater than 1 having a k-th root is 2^k, thus β≥ 2^k. Since β≤ m, we have k ≤⌊log_2 m ⌋. Lines <ref>–<ref> take linear time m. 
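Examples like this can be checked mechanically once the layered tree product and forest powers are available. The snippet below is our own illustrative encoding (trees as sorted tuples of child subtrees); it is not the root-extraction algorithm itself, but it suffices to raise a candidate root to the k-th power and compare the result with A.

```python
from collections import Counter
from functools import reduce
from itertools import combinations_with_replacement
from math import factorial

LEAF = ()  # a tree is a sorted tuple of its children's encodings

def tree_product(t1, t2):
    """Layer-by-layer product: children of the product's root are the pairwise
    products of the factors' children."""
    return tuple(sorted(tree_product(c1, c2) for c1 in t1 for c2 in t2))

def forest_power(forest, k):
    """Multiset of trees of forest**k: every product of k trees chosen with
    repetition, counted with its multinomial multiplicity."""
    result = []
    for combo in combinations_with_replacement(range(len(forest)), k):
        multiplicity = factorial(k)
        for count in Counter(combo).values():
            multiplicity //= factorial(count)
        product = reduce(tree_product, (forest[i] for i in combo))
        result.extend([product] * multiplicity)
    return tuple(sorted(result))

# Example: a forest made of a single-edge tree and a two-leaf tree.
t_a = (LEAF,)          # a root with one leaf child
t_b = (LEAF, LEAF)     # a root with two leaf children
for tree in forest_power((t_a, t_b), 2):
    print(tree)        # t_a*t_a = t_a, t_a*t_b = t_b (twice), and t_b*t_b
```

Since the encoding is canonical with respect to isomorphism, comparing the re-powered candidate with A reduces to an equality test on tuples.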
The while loop of lines <ref>–<ref> is executed, in the worst case, once per tree of the k-th root R, a number of times equal to the k-th root of the number of trees in A. Line <ref> takes 1 time. The product of trees requires linear time in its output size. Consequently, R^k can be computed in m log m time. Moreover, since we can remove R^k from A in quadratic time, we deduce that line <ref> takes m^2 time. Since the search of t_s consists of a simple traversal, we deduce that line <ref> takes m time. Line <ref> takes m log m time for computing t_m^k. If no recursive call is made, line <ref> is executed in time m_s^3, where m_s is the size of t_s. The runtime of lines <ref>–<ref> is dominated by line <ref>, which takes m^2 as line <ref>. Since each tree t_s in A is used at most once, we have ∑ m_s^3 ≤ m^3; as a consequence, the most expensive lines of the algorithm (namely, <ref>, <ref>, and <ref>) have a total runtime of m^3 across all iterations of the while loop. We still need to take into account the recursive calls of line <ref>. By taking once again into account the bound ∑ m_s^3 ≤ m^3, the total runtime of these recursive calls is also m^3. We conclude that Algorithm <ref> runs in time m^3. Let A be a forest. Then, it is possible to decide in polynomial time if there exists a forest X and an integer k>1 such that X^k=A. Since k is bounded by the logarithm of the size of a and, according to Theorem <ref>, we can compute the root in m^3, we can test all k (up to the bound) and check if there exists a X such that x^k = a in m^3 log m, where m denotes once again the size of a. According to Corollary <ref>, we easily conclude that the corresponding enumeration problem of finding all solutions X (for all powers k) is in , since the verification of a solution can be done in polynomial time and the size of a solution is polynomial in the size of the input. Moreover, the problem is in the class since the time elapsed between the computation of one solution (for a certain k) and the next is polynomial. We refer the reader to <cit.> for more information about enumeration complexity classes. Now that we have a technique to compute the root of forests, let us think in terms of unrolls of FDDS. Consider an FDDS A and its unroll A=A. According to Proposition <ref>, we can compute the FDDS X such that A=X^k by considering the forest F=An where α is the number of trees in f and n=α+A. Again, this depth allows us to ensure that all the transient dynamics of the dynamical system are represented in the different trees. Applying the root algorithm on F, we obtain the result of the root as a forest of finite trees. However, this is just a candidate solution for the corresponding problem over the initial FDDS (for the same reasoning as in the case of division). In order to test the result of the root algorithm, as before, we realised the roll of one tree in the solution to period p with p as the number of trees in the result. Then to decide if X is truly the k-th root of A, we verify if X^k=A where X is the result of the roll operation. That is possible because the algorithm is designed to study connected solutions. Indeed, the following holds. Let A be a FDDS, it is possible to decide if there exists a connected FDDS X and an integer k>1 such that X^k=A in polynomial time. By combining the division algorithm with the root algorithm, we are now able to study equations of the form AX^k = B. 
Given FDDS A and B and k > 0, we can first compute the result Y of the division of Bn by An, where α is the number of trees in B and n = α + B. Then, we compute the k-th root X of Y. After that, we make the roll of one tree of x in period p, with p the number of trees in X. Then, using the roll result X, we just need to verify if A X^k=B. Once again, the solutions found by this method are only the connected ones, and further non-connected solutions are also possible. § CONCLUSIONS In this article we have proven the cancellation property for products of unrolls and established that the division of FDDSs is polynomial-time when searching for connected quotients only. Furthermore, we have proven that calculating the k-th root of a FDDSs is polynomial-time if the solution is connected. Finally, we have shown that solving equations of the form A X^k = B is polynomial if X is connected. However, numerous questions remain unanswered. The main direction for further investigation involves removing the connectivity condition. Although the cancellation property of unrolls we proved and the new polynomial-time algorithm for the division suggest that the primary challenge for FDDS division lies in the cycles rather than the transients, the same cannot be said for the computation of the k-th root of FDDS. Another intriguing direction is solving general polynomial equations P(X_1, …, X_n) = B with a constant right-hand side B. While this appears to be at least as challenging as division, some specific cases, such as when the polynomial is injective, could yield more direct results. Furthermore, the results of this work can improve the state of the art of the solution of P(X_1, …, X_n) = B where polynomial P is a sum of univariate monomials <cit.>. Indeed, a technique to solve (and enumerate the solutions) of this type of equation in a finite number of systems of equations of the form AX^k = B has been introduced. Thus, our result, which is more efficient than previously known techniques, can have a positive impact on the complexity of the proposed pipeline. It would also be interesting to investigate whether our techniques also apply to finding nontrivial solutions to equations of the form XY = B with X and Y connected, which would make it possible to improve our knowledge of the problem of irreducibility. §.§.§ SR was supported by the French Agence Nationale pour la Recherche (ANR) in the scope of the project “REBON” (grant number ANR-23-CE45-0008), and KP, AEP and MR by the EU project MSCA-SE-101131549 “ACANCOS”. splncs04 § APPENDIX prop:divideForestFini Let A, X, and B be FDDS with α equal to the number of unroll trees of B. Let n ≥α + B. Then AX = B if and only if AnXn = Bn. (⇒) If AX = B then AXn = Bn for all n ≥ 0. And since, AXn = AnXn, one direction follow. (⇐) For the other direction, we employ the same logic as in the proof of the Lemma 38 of <cit.> extending an isomorphism of the unrolls cut to depth n to an isomorphism of the whole unrolls without cuts. For this proof, we partially change the unrolls definition; more precisely, we change the set of nodes in each unroll tree. Indeed, we need to explicitly set (in the second coordinate) the root of each tree while in the former definition, the root is left implicit. Thus, as in the original definition, for each periodic state u we define an unroll tree t_u = (V,E) as having vertices V = {(s,u,k) | s ∈ f^-k(u), k ∈ℕ} and edges E = {((v,u,k),(f(v),u,k-1) ) }⊆ V^2 with f the transition function of the dynamical system. 
Remark that this produces an unroll tree having root (u,u,0). Let ψ : V(Bn) → V(AnXn) be a forest product isomorphism for the product AnXn = Bn. Let d : V(B)^2 →ℕ∪{-1} be the function associating each pair (u,v) to the length of the shortest directed path from u to v, if it exists in B, otherwise -1. We call D the maximum value d(u,v) with (u,v) ∈ V(B)^2 ∪ V(A)^2 ∪ V(X)^2. Let us point out that n > D. We extend ψ to ϕ:V(B) → V(AX) such that, for all (b,r,h) ∈ V(B) where h > n, we have ϕ(b,r,h) = ((a,r_1,h),(x,r_2,h)) if and only if ψ(b,r,d) = ((a,r_1,d),(x,r_2,d)) where d = max(d(b,r),d((a,x),(r_1,r_2))) = max(d(b,r),d(a,r_1),d(x,r_2)) where (a,x) and (r_1,r_2) are states of the FDDS AX. Remark that ϕ is a well-defined function, since (b,r,d) belongs to the domain of ψ, as d≤ D < n. Now we prove that ϕ is a valid forest product isomorphism. First, we show the bijectivity of ϕ. The surjectivity of ϕ is an immediate consequence of the surjectivity of ψ. As for its injectivity, suppose that ϕ(b,r,h) = ϕ(b',r',h'). We denote ϕ(b,r,h) = ((a,r_1,h),(x,r_2,h)) and ϕ(b',r',h') = ((a',r_1',h'),(x',r_2',h')). Thus (a,x,r_1,r_2,h) = (a',x',r_1',r_2',h'). By the definition of ϕ, we have ψ(b,r,d) = ((a,r_1,d),(x,r_2,d)) and ψ(b',r',d') = ((a',r_1',d'),(x',r_2',d')) = ((a,r_1,d'),(x,r_2,d')). By proving that d = d', we obtain ψ(b,r,d) = ψ(b',r',d') and, by injectivity of ψ, we deduce (b,r,d') = (b',r',d') and, in particular, b = b' and r = r'; since we already know that h = h', the injectivity of ϕ follows. Since ψ is a forest product isomorphism, we deduce that (b,r,d) and (b',r',d') are two nodes of the same tree. Indeed, the two nodes ((a, r_1,d),(x,r_2,d)) and ((a,r_1,d'),(x,r_2,d')) belong to the same tree since they have the same root coordinate. Thus, we deduce that r = r'. Moreover, since ψ is a forest product isomorphism, the distance between ((a,r_1,d),(x,r_2,d)) and infinite branch of its tree (cut to depth n) equals the distance between (b,r,d) and the infinite branch of its tree (cut to depth n). And since this distance is the depth of node (a,x) in AX and b in B, we deduce that [AX](a,x) = [B]b. For the same reason, [AX](a,x) = [B]b'. So [B]b = [B]b'. Besides, by the definition of unroll, h is the depth of (b,r,h) in the unroll tree, and we deduce that (b,r,h) and (b',r,h) have the same depth. This implies that d(b,r) = d(b',r'). Hence d = d' and, as a consequence, the injectivity of ϕ follows. Now, we show that ϕ(b,r,h) is a root if and only if (b,r,h) is a root. Since ψ is a forest product isomorphism, we have (a,r_1,d) and (x,r_2,d) are roots if and only if (b,r,d) is a root. In addition, the depth of any root is 0, so d = 0. So, we conclude that ϕ(b,r,h) is a root if and only if h = 0 and (b,r,h) is a root. Finally, we need to show that for all ((a,r_1,h), (x,r_2, h)) , ((a',r_1', h' ), (x', r_2' , h' ))∈ V(AX) we have (ϕ^-1((a,r_1,h), (x,r_2, h)) , ϕ^-1((a',r_1', h' ), (x', r_2' , h' )) ) ∈ E(B) if and only if ( (a,r_1,h) , (a',r_1',h') ) ∈ E(A) and ( (x,r_2,h) , (x',r_2',h') ) ∈ E(X). Since ψ is a forest product isomorphism, we have ( (b,r,d), (b',r',d') ) ∈ E(Bn) if and only if .99!( ((a,r_1,d), (x,r_2, d)) , ((a',r_1', d'), (x', r_2' ,d')) ) ∈ E(AnXn). So, by the definition of ϕ that is, if and only if (((a,r_1,h), (x,r_2, h)) , ((a',r_1', h' ), (x', r_2' ,h' )) ) ∈ E(AX) which is equivalent to ( (a,r_1,h) , (a',r_1',h' ) ) ∈ E(A) and ((x,r_2,h) , (x',r_2',h') ) ∈ E(X) by the Definition <ref> of tree product. This proves that B = AX.
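As a concrete illustration of the unroll construction used in this proof, the following minimal Python sketch (an illustrative toy, not part of the paper) builds the depth-cut unroll trees of an FDDS given as a transition function f on a finite state set; vertices are the triples (s, u, k) with s ∈ f^{-k}(u) and edges point from (s, u, k) to (f(s), u, k-1), exactly as in the definition above, with the cut depth playing the role of the bound n used in the proposition.

```python
from collections import defaultdict

def unroll_trees(f, depth):
    """Depth-cut unroll of an FDDS given by a transition function f (dict state -> state):
    one tree per periodic state u, with vertices (s, u, k) for s in f^{-k}(u), k <= depth,
    and edges (s, u, k) -> (f(s), u, k-1)."""
    preimg = defaultdict(set)
    for s, t in f.items():
        preimg[t].add(s)
    # periodic states = states lying on cycles: iterate f enough times on the image set
    periodic = set(f.values())
    for _ in range(len(f)):
        periodic = {f[s] for s in periodic}
    trees = {}
    for u in periodic:
        vertices, edges = {(u, u, 0)}, set()
        frontier = [(u, 0)]
        while frontier:
            s, k = frontier.pop()
            if k == depth:
                continue
            for p in preimg[s]:
                vertices.add((p, u, k + 1))
                edges.add(((p, u, k + 1), (s, u, k)))
                frontier.append((p, k + 1))
        trees[u] = (vertices, edges)
    return trees

# toy FDDS: a 3-cycle 0 -> 1 -> 2 -> 0 with transient states 3 -> 0 and 4 -> 3
f = {0: 1, 1: 2, 2: 0, 3: 0, 4: 3}
for u, (V, E) in unroll_trees(f, depth=3).items():
    print(u, len(V), len(E))
```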
http://arxiv.org/abs/2405.08801v2
20240514174918
Prospects of Privacy Advantage in Quantum Machine Learning
[ "Jamie Heredge", "Niraj Kumar", "Dylan Herman", "Shouvanik Chakrabarti", "Romina Yalovetzky", "Shree Hari Sureshbabu", "Changhao Li", "Marco Pistoia" ]
quant-ph
[ "quant-ph", "cs.LG" ]
Global Technology Applied Research, JPMorgan Chase, New York, NY 10017 School of Physics, The University of Melbourne, Parkville, VIC 3010, Australia niraj.x7.kumar@jpmchase.com Global Technology Applied Research, JPMorgan Chase, New York, NY 10017 Global Technology Applied Research, JPMorgan Chase, New York, NY 10017 Global Technology Applied Research, JPMorgan Chase, New York, NY 10017 Global Technology Applied Research, JPMorgan Chase, New York, NY 10017 Global Technology Applied Research, JPMorgan Chase, New York, NY 10017 Global Technology Applied Research, JPMorgan Chase, New York, NY 10017 Global Technology Applied Research, JPMorgan Chase, New York, NY 10017 Ensuring data privacy in machine learning models is critical, particularly in distributed settings where model gradients are typically shared among multiple parties to allow collaborative learning. Motivated by the increasing success of recovering input data from the gradients of classical models, this study addresses a central question: How hard is it to recover the input data from the gradients of quantum machine learning models? Focusing on variational quantum circuits (VQC) as learning models, we uncover the crucial role played by the dynamical Lie algebra (DLA) of the VQC ansatz in determining privacy vulnerabilities. While the DLA has previously been linked to the classical simulatability and trainability of VQC models, this work, for the first time, establishes its connection to the privacy of VQC models. In particular, we show that properties conducive to the trainability of VQCs, such as a polynomial-sized DLA, also facilitate the extraction of detailed snapshots of the input. We term this a weak privacy breach, as the snapshots enable training VQC models for distinct learning tasks without direct access to the original input. Further, we investigate the conditions for a strong privacy breach where the original input data can be recovered from these snapshots by classical or quantum-assisted polynomial time methods. We establish conditions on the encoding map such as classical simulatability, overlap with DLA basis, and its Fourier frequency characteristics that enable such a privacy breach of VQC models. Our findings thus play a crucial role in detailing the prospects of quantum privacy advantage by guiding the requirements for designing quantum machine learning models that balance trainability with robust privacy protection. Prospects of Privacy Advantage in Quantum Machine Learning Marco Pistoia May 20, 2024 ========================================================== § INTRODUCTION In the contemporary technological landscape, data privacy concerns command increasing attention, particularly within the domain of machine learning (ML) models that are trained on sensitive datasets. Privacy concerns are widespread in many different applications, including financial records <cit.>, healthcare information <cit.>, and location data <cit.>, each providing unique considerations. Furthermore, the multi-national adoption of stringent legal frameworks <cit.> has further amplified the urgency to improve data privacy. The introduction of distributed learning frameworks, such as federated learning <cit.>, not only promises increased computational efficiency but also demonstrates the potential for increased privacy in ML tasks. 
In federated learning, each user trains a machine learning model, typically a neural network, locally on their device using their confidential data, meaning that they only need to send their model gradients to the central server, which aggregates gradients of all users to calculate the model parameters for the next training step. As the user does not send their confidential data, but rather their training gradients, this was proposed as the first solution to enable collaborative learning while preventing data leakage. However, subsequent works have shown that neural networks are particularly susceptible to gradient inversion-based attacks to recover the original input data <cit.>. To mitigate the above issue, classical techniques have been proposed to enhance the privacy of distributed learning models, ranging from gradient encryption-based methods <cit.>, the addition of artificial noise in the gradients to leverage differential-privacy type techniques <cit.>, or strategies involving the use of batch training to perform gradient mixing <cit.>. These techniques, although mitagative in nature, are not fully robust since they either still leak some input information, add substantial computational overhead while training the model in the distributed setting, or result in reduced performance of the model. A natural question that follows is whether quantum machine learning can help mitigate the privacy concerns that their classical counterparts exhibit. Specifically, one is interested in exploring the fundamental question underpinning the privacy of quantum models: Given the gradients of a quantum machine learning model, how difficult is it to reconstruct the original classical data inputs? In search of quantum privacy advantages, several quantum distributed learning proposals have been previously introduced <cit.>. Previous methods for improving privacy in a federated learning context have ranged from the use of blind quantum computing <cit.>, high-frequency encoding circuits <cit.>, and hybrid quantum-classical methods that combine pre-trained classical models with quantum neural networks <cit.>. In particular, the work of <cit.> considered variational quantum circuits (VQC) as quantum machine learning models and suggested that highly expressive product encoding maps along with an overparameterized hardware efficient ansatz (HEA) would necessitate an exponential amount of resources (in terms of the number of qubits n) for an attacker to learn the input from the gradients. Their work, although the first and sole one to date to theoretically analyze the privacy of a specific VQC model architecture, has certain key drawbacks. The first is that overparameterization of a HEA leads to an untrainable model, since it mixes very quickly to a 2-design <cit.> and thus leads to a barren plateau phenomenon <cit.>. The authors enforced the requirement of overparameterization to ensure that there are no spurious local minima in the optimization landscape and that all local minima are exponentially concentrated toward global minima <cit.>. However, this requires the HEA to have an exponential depth and thus an exponential number of parameters, which precludes efficient training due to an exponential memory requirement to store and update the parameters. Secondly, the difficulty of inverting gradients to recover data primarily stems from the high expressivity, characterized in this case by an exponentially large number of non-degenerate frequencies of the generator Hamiltonian of the encoding map. 
Introducing high-frequency terms in the encoding map may not be an exclusive quantum effect, as classical machine learning models could also be enhanced by initially loading the data with these high-frequency feature maps <cit.>. While previous studies have aimed to highlight the advantages of employing VQC models in safeguarding input privacy, none have convincingly addressed what sets VQC models apart from classical neural networks in their potential to provide a quantum privacy advantage. A critical missing aspect is a comprehensive examination of the privacy benefits offered by VQC models within a privacy framework tailored to them. Such a framework should avoid dependence on specific privacy-enhancing procedures or architectures and instead focus on exploring the fundamental properties of VQC models that result in input privacy. To address the above concerns, we introduce a framework designed to assess the possibility of retrieving classical inputs from the gradients observed in VQC models. We consider VQCs that satisfy the Lie algebra supported ansatz (LASA) property, which has been key in establishing connections with the trainability and classical simulatability of VQCs <cit.>. Our study systematically differentiates the separate prerequisites for input reconstruction across both the variational ansatz and encoding map architectures of these VQC models, as summarized in Table <ref>. Our first result concerns the properties of the variational ansatz and the measurement operator of the VQC. Specifically, we show that when the VQC satisfies the LASA condition, i.e., when the measurement operator is within the dynamical Lie algebra (DLA) of the ansatz, and when the DLA scales polynomially with the number of qubits, it is possible to efficiently extract meaningful snapshots of the input, enabling training and evaluation of VQC models for other learning tasks without having direct access to the original input. We call this the weak privacy breach of the model. Further, we investigate conditions for a strong privacy breach, i.e., recoverability of the original input by classical or quantum-assisted polynomial time methods. Fully reconstructing the input data from these snapshots to perform a strong privacy breach presents a further challenge, which we show depends on properties of the encoding map, such as the hardness of classically simulating the encoding, the overlap of the DLA basis with the encoding circuit generators, and its Fourier frequency characteristics. The two types of privacy breach we introduce are summarised in Figure <ref>, while more specific definitions regarding snapshots, recoverability, and invertibility are provided in Section <ref>. This investigation presents a comprehensive picture of strategies to extract the key properties of VQCs that provide robust privacy guarantees while ensuring that they are still trainable. We structure our paper in the following manner. Appendix <ref> provides the notation used in this work. Sec <ref> provides a general framework for studying privacy with VQC. This includes describing the VQC framework, the Lie-theoretic definitions required for this work, and the privacy definitions in terms of input recoverability. Next, Secs <ref> and <ref> cover a detailed analysis of snapshot recovery from the gradients and of snapshot inversion to recover the input, respectively. 
Sec <ref> establishes the connections between privacy and the well-studied trainability of VQCs, and Sec <ref> consequently highlights the future directions of enabling robust privacy with quantum machine learning models. We finally conclude our results in Sec <ref>. § GENERAL FRAMEWORK §.§ Variational Quantum Circuits for Machine Learning A variational quantum circuit (VQC) is described in the following manner. We consider the d dimensional input vector 𝐱∈𝒳⊂ℝ^d, which is loaded into the quantum encoding circuit V(𝐱) of n qubits to produce a feature map with the input state mapping, ρ(𝐱) = V(𝐱)|0⟩^⊗ n⟨0|^⊗ n V(𝐱)^†. This operation loads the input vector of dimension d to a Hilbert space ℋ = (ℂ^2)^⊗ n of dimension dim(ρ(𝐱)) = 2^n. We will explicitly consider the scenario where n = Θ(d), which is a common setting in most existing VQC algorithms and hence the number of qubits in a given algorithm will be of the same order as the input vector dimension d. The state ρ(𝐱) is then passed through a variational circuit ansatz U(θ) defined as 𝐔(θ) = ∏_k=1^D e^- i θ_k𝐇_k, which is parameterized by a vector of variational parameters θ = [θ_1,⋯, θ_D], where D is the total number of variational parameters. Here {𝐇_1,⋯, 𝐇_D} are the set of D Hermitian generators of the circuit U. We note that the above structure is quite general since some common ansatz structures such as the hardware efficient ansatz, the quantum alternating operator ansatz, and Hamiltonian variational ansatz among others, are all encapsulated in this framework as highlighted in <cit.>. The parameterized state ρ(𝐱) is passed through a variational circuit denoted by U(θ), followed by the measurement of some observable 𝐎∈ℋ. For a given θ, the output of the variational quantum circuit model is expressed as the expectation value of 𝐎 with the parameterized state, y_θ(𝐱) = Tr(𝐔^†(θ) O𝐔(θ) ρ(𝐱)). For the task of optimizing the variational quantum circuits, the model output is fed into the desired cost function (θ, 𝐱) which is subsequently minimized to obtain, θ ^* = θarg min(θ, 𝐱), where θ^* are the final parameter values after optimization. Typical examples of cost functions include cross-entropy loss, and mean-squared error loss, among others <cit.>. The typical optimization procedure involves computing the gradient of the cost function with respect to the parameters θ, which in turn, involves computing the gradient with respect to the model output y_θ(𝐱) C_j = ∂ y_θ(𝐱)/∂θ_j, j ∈ [D]. Going forward, we will directly deal with the recoverability of input 𝐱 given C_j, instead of working with specific cost functions. Details of how to reconstruct our results when considering gradients with respect to specific cost functions are covered in Appendix <ref>. §.§ Lie Theoretic Framework We review some introductory as well as recent results on Lie theoretic framework for variational quantum circuits which are relevant to our work. For a more detailed review of this topic, we refer the reader to <cit.>. We provide the Lie theoretic definitions for a periodic ansatz of the form Eq <ref>. The dynamical Lie algebra (DLA) for an ansatz 𝐔(θ) of the form Eq <ref> is defined as the real span of the Lie closure of the generators of U = span_ℝ⟨ i𝐇_1, ⋯, i𝐇_D ⟩_Lie, where the closure is defined under taking all possible nested commutators of S = {i𝐇_1,⋯, i𝐇_D}. In other words, it is the set of elements obtained by taking the commutation between elements of S until no further linearly independent elements are obtained. 
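To make the Lie-closure computation concrete, the following minimal sketch (an illustrative toy, not the Pauli-basis procedure used later in this paper) computes a basis of the DLA for a small set of dense-matrix generators, testing linear independence by Gram–Schmidt under the Frobenius inner product; it is only practical for a few qubits since it manipulates 2^n × 2^n matrices directly.

```python
import numpy as np
from itertools import product

# Single-qubit Paulis; multi-qubit generators are built with Kronecker products.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def lie_closure(generators, tol=1e-9, max_rounds=50):
    """Basis of span_R< iH_1, ..., iH_D >_Lie: repeatedly take commutators and
    keep the linearly independent elements (Gram-Schmidt, Frobenius inner product)."""
    onb, basis = [], []   # orthonormal flattened copies / the basis matrices themselves

    def try_add(M):
        v = M.flatten()
        for b in onb:
            v = v - np.vdot(b, v) * b
        if np.linalg.norm(v) > tol:
            onb.append(v / np.linalg.norm(v))
            basis.append(M)
            return True
        return False

    for H in generators:
        try_add(1j * H)
    for _ in range(max_rounds):
        new_found = [try_add(A @ B - B @ A) for A, B in product(basis, basis)]
        if not any(new_found):
            break
    return basis

# Toy 2-qubit ansatz with generators X1, X2 and Z1 Z2.
gens = [np.kron(X, I2), np.kron(I2, X), np.kron(Z, Z)]
print("dim(DLA) =", len(lie_closure(gens)))   # prints 6, far below dim su(4) = 15
```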
The dynamical Lie group 𝒢 for an ansatz 𝐔(θ) of the form of Eq <ref> is determined by the DLA such that 𝒢 = e^𝔤, where e^𝔤 := {e^i𝐇, i𝐇∈𝔤} and is a subgroup of SU(2^n). For generators in 𝔤, the set of all 𝐔(θ) of the form Eq <ref> generates a dense subgroup of 𝒢. The Lie algebra adjoint representation is the following linear action: ∀𝐊, 𝐇∈𝔤, ad_𝐇𝐊 := [𝐇, 𝐊] ∈𝔤, and the Lie group adjoint representation is the following linear action: ∀𝐔∈𝒢, ∀𝐇∈𝔤, Ad_𝐔𝐇 := 𝐔^†𝐇𝐔∈𝔤. The basis of the DLA is denoted as {i𝐁_α}_α, α∈{1,⋯,dim(𝔤)}, where the 𝐁_α are Hermitian operators and form an orthonormal basis of 𝔤 with respect to the Frobenius inner product. Any observable 𝐎 is said to be entirely supported by the DLA whenever i𝐎∈𝔤, or in other words 𝐎 = ∑_αμ_α𝐁_α, where μ_α is the coefficient of support of 𝐎 in the basis 𝐁_α. A Lie Algebra Supported Ansatz (LASA) is a periodic ansatz of the form Eq <ref> of a VQC where the measurement operator O is completely supported by the DLA associated with the generators of U(θ), that is, iO∈𝔤. In addition to its connections to the trainability of a VQC, this condition also implies that ∀θ, U^†(θ)iOU(θ) ∈𝔤, which enables us to express the evolution of the observable O in terms of elements of 𝔤. This is key to some simulation algorithms that are possible for polynomial-sized DLAs <cit.>. §.§ Input Recoverability Definitions In this section, we provide meaningful definitions of what it means to recover the classical input data given access to the gradients {C_j}_j=1^D of a VQC. Notably, our definitions are motivated in a manner that allows us to consider the encoding and variational portions of a quantum variational model separately. A useful concept in machine learning is the creation of data snapshots. These snapshots are compact and efficient representations of the input data's feature map encoding. Essentially, a snapshot retains enough information to substitute for the full feature-map-encoded data, enabling the training of a machine learning model for a distinct task with the same data but without the need to explicitly know the input data that was passed through the feature map. For example, in methods such as 𝔤-sim <cit.>, these snapshots are used as input vectors for classical simulators. The simulator can then process these vectors efficiently under certain conditions, recreating the operation of a variational quantum circuit. It will become useful to classify the process of input data 𝐱 recovery into two stages: the first concerns recovering snapshots of the quantum state ρ(𝐱) (Eq <ref>) from the gradients, which involves only considering the variational part of the circuit. Given the gradients C_j, j ∈ [D] as defined in Eq <ref> as well as the parameters θ = [θ_1, ⋯, θ_D], we consider a VQC to be snapshot recoverable if there exists an efficient 𝒪(poly(d, 1/ϵ)) classical algorithm to recover the vector 𝐞_snap such that |[𝐞_snap]_α - Tr(𝐁_αρ(𝐱))| ≤ϵ, ∀α∈ [dim(𝔤)], for some {𝐁_α} forming a Frobenius-orthonormal basis of the DLA corresponding to U(θ) in Eq <ref>, and the above holds for any ϵ > 0. We call 𝐞_snap the snapshot of 𝐱. In other words, 𝐞_snap is the orthogonal projection of the input state ρ(𝐱) onto the DLA of the ansatz, and thus the elements of 𝐞_snap are the only components of the input state that contribute to the generation of the model output y_θ(𝐱) as defined in Eq <ref>. 
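For intuition, the sketch below (an illustrative toy with an assumed product R_X feature map and the six-element DLA basis of the two-qubit example above) evaluates the snapshot vector directly as [𝐞_snap]_α = Tr(𝐁_α ρ(𝐱)); in the attack setting considered next, the adversary does not hold ρ(𝐱) and must instead recover this same vector from the gradients.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Frobenius-orthonormal Hermitian DLA basis for the generators {X1, X2, Z1Z2}:
# distinct 2-qubit Pauli strings P satisfy Tr(P P) = 4, so P/2 is normalized.
dla_basis = [np.kron(P, Q) / 2 for (P, Q) in
             [(X, I2), (I2, X), (Z, Z), (Y, Z), (Z, Y), (Y, Y)]]

def rx(t):  # single-qubit R_X rotation
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def product_rx_state(x):  # rho(x) for a simple Pauli-product R_X feature map
    psi = np.array([1.0 + 0j])
    for xj in x:
        psi = np.kron(psi, rx(xj) @ np.array([1, 0], dtype=complex))
    return np.outer(psi, psi.conj())

def snapshot(rho, basis):  # [e_snap]_alpha = Tr(B_alpha rho)
    return np.array([np.real(np.trace(B @ rho)) for B in basis])

print(snapshot(product_rx_state([0.3, 1.1]), dla_basis))
```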
Here, we constitute the retrieval of the snapshot 𝐞_snap of a quantum state ρ(𝐱) as weak privacy breach, since the snapshot could be used to train the VQC model for other learning tasks involving the same data {𝐱} but without the need to use the actual data. As an example, consider an adversary that has access to the snapshots corresponding to the data of certain customers. Their task is to train the VQC to learn the distinct behavioral patterns of the customers. It becomes apparent that the adversary can easily carry out this task without ever needing the original data input since the entire contribution of the input 𝐱 in the VQC output decision-making y_θ(𝐱) is captured by 𝐞_snap. Next, we consider the stronger notion of privacy breach in which the input data 𝐱 must be fully reconstructed. Assuming that the snapshot has been recovered, the second step we therefore consider is inverting the recovered snapshot 𝐞_snap to find the original data 𝐱, a process that is primarily dependent on the encoding part of the circuit. Within our snapshot inversion definition, we consider two cases that enable different solution strategies, snapshot inversion utilizing purely classical methods, and snapshot inversion methods that can utilize quantum samples. Given the snapshot 𝐞_snap as the expectation values of the input state ρ(𝐱), we say that VQC admits classical snapshot invertibility, if there exists an efficient 𝒪(poly(d, 1/ϵ)) polynomial time classical randomized algorithm to recover 𝐱' : 𝐱' - 𝐱_2 ≤ϵ, with probability at least p = 2/3, for any user defined ϵ >0. Given the snapshot 𝐞_snap as the expectation values of the input state ρ(𝐱), and the ability to query poly(d, 1/ϵ) number of samples from the encoding circuit V to generate snapshots 𝐞_snap' for any given input 𝐱', we say that VQC admits quantum-assisted snapshot invertibility, if there exists an efficient 𝒪(poly(d, 1/ϵ)) polynomial time classical randomised algorithm to recover 𝐱' : 𝐱' - 𝐱_2 ≤ϵ, with probability at least p = 2/3, for any user defined ϵ >0. In this work, we specifically focus on input recoverability by considering the conditions under which VQC would admit snapshot recovery followed by snapshot invertibility. Considering these two steps individually allows us to delineate the exact mechanisms that contribute to the overall recovery of the input. It is important to mention that it may potentially only be possible to recover the inputs of a VQC up to some periodicity, such that there only exists a classical polynomial time algorithm to recover 𝐱̃ = 𝐱 + 𝐤π up to ϵ-closeness, where 𝐤∈ℤ. As the encodings generated by quantum feature maps inherently contain trigonometric terms, in the most general case it may therefore only be possible to recover 𝐱 up to some periodicity. However, this can be relaxed if the quantum feature map is assumed to be injective. Figure <ref>. shows a diagram that highlights the Lie algebraic simulation method <cit.> along with specifications of the input recovery framework as defined in this work. § SNAPSHOT RECOVERY This section addresses the weak privacy notion of recovering the snapshots of the input as introduced in Def <ref>. As the name implies, the goal here is to recover the vector 𝐞_snap for some Schmidt orthonormal basis {𝐁_α}_α∈dim() of the DLA corresponding to the VQC ansatz 𝐔(θ), given that the attacker is provided the following information, * D gradient information updates C_j = ∂ y_θ(𝐱)/∂θ_j, j∈ [D] as defined in Eq <ref>. 
* Ansatz architecture 𝐔(θ) presented as an ordered sequence of Hermitian generators {θ_k, 𝐇_k}_k=1^D, where 𝐇_k is expressed as a polynomial (in the number of qubits) linear combination of Pauli strings. * Measurement operator 𝐎 which satisfies the LASA condition according to Def <ref> and expressed as a polynomial (in the number of qubits) linear combination of Pauli strings Recovering these snapshots will enable an attacker to train the VQC model for other learning tasks that effectively extract the same information from the input states ρ(𝐱) but without the need to use the actual data. The main component of the snapshot recoverability algorithm makes use of the -sim <cit.> framework, which we briefly review in the following subsection while also clarifying some previously implicit assumptions, to construct a system of linear equations that can be solved to recover 𝐞_snap as detailed in Algorithm <ref>. §.§ Review of Lie-Algebraic Simulation Framework We start by reviewing the -sim framework <cit.> for classically computing the cost function and gradients of VQCs, when the observable lies in the DLA of the chosen ansatz. Specifically, this framework evolves the expectation values of observables via the adjoint representation. However, a necessary condition for this procedure to be efficient is that the dimension of the DLA (dim()) is only polynomially growing in the number of qubits. The first step of -sim consists of building an orthonormal basis for the DLA given ({θ_k, 𝐇_k})_k=1^D. Algorithm <ref> presents a well-known procedure to do this. The procedure simply computes pairwise commutators until no new linearly independent elements are found. Given that all operators are expressed in the Pauli basis, the required orthogonal projectors and norm computations performed by Algorithm <ref> can be performed efficiently. If the dimension of DLA is 𝒪(poly(n)), then the iteration complexity, i.e., the number of sets of commutators that we compute, of this procedure is polynomial in n. However, an important caveat is that potentially the elements forming our estimation for the DLA basis could have exponential support on the Pauli basis, which is a result of computing new pairwise commutators at each iteration. Thus, for this overall procedure to be efficient, we effectively require that the nested commutators of the generators 𝐇_k do not have exponential support on the Pauli basis. A set of Hermitian generators {𝐇_1, …, 𝐇_D} on n-qubits expressed as linear combinations of 𝒪(poly(())) Pauli strings satisfies the slow Pauli expansion condition if ∀ r ∈ [], [𝐇_r, [⋯, [𝐇_2, 𝐇_1]]] can be expressed as a linear combination of 𝒪(poly(())) Pauli strings. In general, it is unclear how strong of an assumption this is, which means that the attacks that we present may not be practical for all VQCs that satisfy the polynomial DLA condition, and thus privacy preservation may still be possible. Also, it does not seem to be possible to apply the -sim framework without the slow Pauli expansion condition. Lastly, a trivial example of a set of Hermitian generators that satisfies the slow Pauli expansion are those for the quantum compound ansatz discussed in <cit.>. Given the orthonormal basis 𝐁_α for , under the LASA condition, we can express 𝐎=∑_α∈ [dim()]μ_α𝐁_α, and hence we can write the output as y_θ(𝐱) = Tr(𝐔^†(θ) 𝐎𝐔(θ) ρ(𝐱)) = ∑_αTr(μ_α𝐔^†𝐁_α𝐔ρ(𝐱) ) = ∑_αTr(μ_αAd_𝐔( 𝐁_α)ρ(𝐱) ). In addition, given the form of 𝐔, we can express Ad_𝐔 as, Ad_𝐔 = ∏_k=1^D e^-θ_kad_i𝐇_k. 
We can also compute the structure constants for our basis 𝐁_α, which is the collection of dim(𝔤) ×dim(𝔤) matrices for the operators ad_i𝐁_α. As a result of linearity, we also have the matrix for each ad_i𝐇 for 𝐇∈𝔤 in the basis 𝐁_α. Then, by performing matrix exponentiation and multiplying dim(𝔤) ×dim(𝔤) matrices, we can compute the matrix for Ad_𝐔. Using the above, the model output may be written y_θ = ∑_α, βμ_α [Ad_𝐔]_αβTr( 𝐁_βρ(𝐱)) = μ^TAd_𝐔𝐞_snap, where 𝐞_snap is a vector of expectation values of the initial state, i.e. [𝐞_snap]_β = Tr[ 𝐁_βρ(𝐱)]. Similar to the cost function, the circuit gradient can also be computed via 𝔤-sim. Let C_j = ∂ y_θ/∂θ_j = μ^T ∂Ad_𝐔/∂θ_j𝐞_snap =: χ^(j)·𝐞_snap, where the adjoint term differentiated with respect to θ_j can be written as ∂Ad_𝐔/∂θ_j = [∏_k=j^D e^θ_kad_i𝐇_k] ad_i𝐇_j[∏_k=1^je^θ_kad_i𝐇_k]. The components of χ^(j) can be expressed as χ_β^(j) = ∑_αμ_α[∂Ad_𝐔/∂θ_j]_α, β, allowing the C_j terms to be represented in a simplified manner as C_j = ∑_β=1^dim(𝔤)χ_β^(j)[ 𝐞_snap]_β. The key feature of this setup is that the matrices and vectors involved have dimension dim(𝔤); therefore, for a polynomial-sized DLA, the simulation time will scale polynomially and model outputs can be calculated in polynomial time <cit.>. Specifically, the matrices for each ad_i𝐇_k in the basis {𝐁_l} and for Ad_𝐔 are of polynomial size in this case. This Lie-algebraic simulation technique was introduced in order to show efficient methods of simulating LASA circuits with a polynomially sized DLA. In this work, we utilize the framework in order to investigate the snapshot recovery of variational quantum algorithms. Based on the above discussion, the proof of the following theorem is self-evident. If an ansatz family 𝐔(θ) with an observable 𝐎 satisfies both the LASA condition and Slow Pauli Expansion, then the cost function and its gradients can be simulated with complexity 𝒪(poly(dim(𝔤))) using a procedure that at most queries a quantum device a polynomial number of times to compute the dim(𝔤)-dimensional snapshot vector 𝐞_snap. §.§ Snapshot Recovery Algorithm With the framework for 𝔤-sim <cit.> established, we focus on how snapshots 𝐞_snap of the input data can be recovered using the VQC model gradients C_j, with the process detailed in Algorithm <ref>. In particular, the form of Eq <ref> allows a set-up leading to the recovery of the snapshot vector 𝐞_snap from the gradients {C_j}_j=1^D, but requires the ability to solve the system of D linear equations given by {C_j} with dim(𝔤) unknowns [𝐞_snap]_β, β∈ [dim(𝔤)]. The following theorem formalizes the complexity of recovering the snapshots from the gradients. Given the requirements specified in Algorithm <ref>, along with the assumption that the number of variational parameters D ≥dim(𝔤), where dim(𝔤) is the dimension of the DLA 𝔤, the VQC model admits snapshot 𝐞_snap recovery with complexity scaling as 𝒪(poly(dim(𝔤))). Firstly, we note that given the gradients C_j and parameters θ_j, j∈ [D], the only unknowns are the components of the vector 𝐞_snap of length dim(𝔤). Therefore, it is necessary to have dim(𝔤) equations in total; otherwise, the system of equations would be underdetermined and it would be impossible to find a unique solution. The number of equations depends on the number of gradients and, therefore, the number of variational parameters in the model, hence the requirement that D ≥dim(𝔤). Assuming now that we deal with the case that there are D ≥dim(𝔤) variational parameters of the VQC model, we can therefore arrive at a determined system of equations. 
The resulting system of simultaneous equations can be written in a matrix form as, [ C_1; C_2; ⋮; C_D ]_D × 1 = [ χ_1^(1) χ_2^(1) ⋯ χ_dim(𝔤)^(1); χ_1^(2) χ_2^(2) ⋯ χ_dim(𝔤)^(2); ⋮ ⋮ ⋱ ⋮; χ^(D)_dim(𝔤) χ^(D)_dim(𝔤) ⋯ χ_dim(𝔤)^(D); ]_D ×dim()[ [𝐞_snap]_1; [𝐞_snap]_2; ⋮; [𝐞_snap]_dim(𝔤) ]_dim() × 1. In order to solve the system of equations highlighted in Eq <ref> to obtain 𝐞_snap, we first need to compute the coefficients {χ^(j)_β}_j∈ [D], β∈ [dim()]. This can done by the -sim procedure highlighted in the previous section and in steps 1-7 in Algorithm <ref> with complexity 𝒪(poly(dim())). The next step is to solve the system of equations, i.e., step 8 of Algorithm <ref>, which can solved using Gaussian elimination procedure incurring a complexity 𝒪(dim()^3) <cit.>. Thus, the overall complexity of recovering the snapshots from the gradients is 𝒪(poly(dim())). This completes the proof. In the case that the dimension of DLA is exponentially large dim(𝔤) = 𝒪(exp (n)), then performing snapshot recovery by solving the system of equations would require an exponential number of gradients and thus an exponential number of total trainable parameters D = 𝒪(exp (n)). However, this would require storing an exponential amount of classical data, as even the variational parameter array θ would contain 𝒪(exp (n)) many elements and hence this model would already breach the privacy definition which only allows for a polynomial (in n = Θ(d)) time attacker. In addition, the complexity of obtaining the coefficients χ^(j)_β and subsequently solving the system of linear equations would also incur a cost exponential in n. Hence, for the system of simultaneous equations to be determined, it is required that dim(𝔤) = 𝒪(poly (n)). Under the above requirement, it will also be possible to solve the system of equations in Eq <ref> in polynomial time and retrieve the snapshot vector 𝐞_snap. Hence, a model is snapshot recoverable if the dimension of the DLA dimension scales polynomially in d. § SNAPSHOT INVERTIBILITY We have shown that in the case that the DLA dimension of the VQC is polynomial in the number of qubits n and the slow Pauli expansion condition (Def <ref>) is satisfied, then it is possible to reverse engineer the snapshot vector 𝐞_snap from the gradients. As a result, this breaks the weak-privacy criterion. The next step in terms of privacy analysis is to see if a strong privacy breach can also occur. This is true when it is possible to recover the original data 𝐱 that was used in the encoding step to generate the state ρ(𝐱); the expectation values of this state with respect to the DLA basis elements forms the snapshot 𝐞_snap. Hence, even if the DLA is polynomial and snapshot recovery allows the discovery of 𝐞_snap, there is still the possibility of achieving some input privacy if 𝐞_snap cannot be efficiently inverted to find 𝐱. The overall privacy of the VQC model, therefore, depends on both the data encoding and the variational ansatz. One common condition that is necessary for our approaches to snapshot inversion is the ability to compute the expectation values Tr(ρ(𝐱') 𝐁_k), ∀ k ∈ [dim()] for some guess input 𝐱'. This is the main condition that distinguishes between completely classical snapshot inversion and quantum-assisted snapshot inversion. It is well-known that computing expectation values of specific observables is a weaker condition than ρ(𝐱) being classically simulatable <cit.>. 
Hence, it may be possible to classically perform snapshot inversion even if the state ρ(𝐱) overall is hard to classically simulate. In the quantum-assisted case it is always possible to calculate Tr(ρ(𝐱' 𝐁_k) values by taking appropriate measurements of the encoding circuit V(𝐱') . In the first subsection, we present inversion attacks that apply to commonly-used feature maps and explicitly make use of knowledge about the locality of encoding circuit. The common theme among these feature maps is that by restricting to only a subset of the inputs, it is possible to express the ρ(𝐱) or expectations there of in a simpler way. The second subsection focuses on arbitrary encoding schemes by viewing the problem as black-box optimization. In general, snapshot inversion can be challenging or intractable even if the snapshots can be efficiently recovered and/or the feature map can be classically simulated. Our focus will be on presenting sufficient conditions for performing snapshot inversion, which leads to suggestions for increasing privacy. §.§ Snapshot Inversion for Local Encodings For efficiency reasons, it is common to encode components of the input vector 𝐱 in local quantum gates, typically just single-qubit rotations. The majority of the circuit complexity is usually either put into the variational part or via non-parameterized entangling gates in the feature map. In this section, we demonstrate attacks to recover components of 𝐱, up to periodicity, given snapshot vectors when the feature map encodes each x_j locally. More specifically, we put bounds on the allowed amount of interaction between qubits that are used to encode each x_j. In addition, we also require that the number of times the feature map can encode a single x_j be sufficiently small. While the conditions will appear strict, we note that they are satisfied for some commonly used encodings, e.g., the Pauli product feature map or Fourier tower map <cit.> which was previously used in a VQC model that demonstrated resilience to input recovery. For the Pauli product encoding, we show that a completely classical snapshot inversion attack is possible. An example of a Pauli product encoding is the following: ⊗_j_1^nρ_j(x_j) = ⊗_j_1^nR_𝖷(x_j)|0⟩⟨0|R_𝖷(-x_j). where R_𝖷 is the parameterized Pauli 𝖷 rotation gate. The Fourier tower map is similar to Equation (<ref>) but utilizes a parallel data reuploading scheme, i.e. ⊗_j=1^d( ⊗_l=1^m R_𝖷( 5^l-1 x_j) ). where n = d m, with m being the number of qubits used to encode a single dimension of the input. §.§.§ Pauli Product Encoding The first attack that we present will specifically target Equation (<ref>). However, the attack does apply to the Fourier tower map as well. More generally, the procedure applies to any parallel data reuploading schemes of the form: ⊗_j=1^d( ⊗_l=1^m R_𝖷(α_l x_j) ). We explicitly utilize Pauli 𝖷 rotations, but a similar result holds for 𝖸 or 𝖹. For Pauli operator 𝖯, let 𝖯_j := i 𝕀^⊗(j-1)⊗𝖯⊗𝕀^⊗(n-j). Suppose that the polynomial DLA and slow Pauli expansion (Def <ref>) conditions are satisfied. Also, suppose that we are given a snapshot vector 𝐞_snap(𝐱) for a VQC with trainable portion 𝐔(θ) with DLA and Pauli product feature encoding (Equation (<ref>)) and the corresponding DLA basis elements (𝐁_k)_k=1^(). The classical Algorithm <ref> outputs an ϵ estimate of x_j, up to periodicity, or outputs , with time 𝒪(poly(n)log(1/ϵ)). Steps 1-5 can be performed in 𝒪(poly(n)) classical time due to the polynomial DLA and slow Pauli expansion conditions. 
The purpose of step 5 is to compute the angles between the linear subspaces and span_ℝ{ i𝖹_j, i𝖸_j}. This is to identify if there is any intersection, i.e. if ∃α, β such that α i𝖹_j + β i𝖸_j ∈, which is identified by singular values equal to 1. The algorithm cannot proceed if the intersection is trivially empty as the snapshot vector does not provide the required measurement to obtain x_j efficiently with this scheme. From now on, we suppose that such an element was found. We can without loss of generality just focus on the one-qubit reduced density matrix for x_j. In this case, using Bloch sphere representation: ρ_j(x_j) = R_𝖷(x_j)|0⟩⟨0|R_𝖷(-x_j)= 𝕀 - sin(x_j)𝖸 +cos(x_j)𝖹/2, so that Tr([α𝖹_j + β𝖸_j]ρ_j(x_j)) = α/2cos(x_j) - β/2sin(x_j) =sign(α)/2√(α^2 + β^2)cos(x_j + tan^-1(β/α)). However, by assumption, γ_k ∈ℝ such that α i𝖹_j + β i𝖸_j = ∑_k=1^()γ_i𝐁_kTr([α𝖹_j + β𝖸_j]ρ_j(x_j)) = ∑_k=1^()γ_k[𝐞_snap]_k. So to recover x_j, we only need to solve: ∑_k=1^()γ_k[𝐞_snap]_k = sign(α)/2√(α^2 + β^2)cos(x_j + tan^-1(β/α)), which after rearranging allows the recovery of x_j = cos^-1[2/sign(α)√(α^2 + β^2)∑_k=1^()γ_k[𝐞_snap]_k ] - tan^-1(β/α), up to periodicity. By the polynomial DLA and slow Pauli expansion conditions (i.e. all DLA basis elements are expressed as linear combinations of Paulis), we can compute γ_k in 𝒪(poly(n)log(1/ϵ)) time. For illustrative purposes, we show in Figure <ref> the snapshot inversion process for the special case where i𝖹_j ∈, i.e. x_j = cos^-1(2γ^(j)·𝐞_snap), for i𝖹_j = ∑_k=1^()γ_k^(j)𝐁_k. The general parallel data reuploading case can be handled by applying the procedure to only one of the rotations that encodes at x_j at a time, checking to find one that does not cause the algorithm to return FAILURE. §.§.§ General Pauli Encoding We now present a more general procedure that applies to feature maps that use serial data reuploading and multi-qubit Paulis. However, we introduce a condition that ensures that each x_j is locally encoded. More generally, we focus our discussion on encoding states that may be written as a tensor product of Ω subsystems, i.e. multipartite states. ρ(𝐱) = ⊗_ J ∈𝒫ρ_J(𝐱), where (𝗑_J) is constant. The procedure is highlighted in Algorithm <ref> and requires solving a system of polynomial equations. In addition, the procedure may not be completely classical as quantum assistance may be required to compute certain expectation values of ρ_J(𝐱), specifically with respect to the DLA basis elements. For simplicity, the algorithm and the theorem characterizing the runtime ignore potential errors in estimating these expectations. If classical estimation is possible, then we can potentially achieve a 𝒪(poly(log(1/ϵ))) scaling. However, if we must use quantum, then we will incur a 𝒪(1/ϵ) (due to amplitude estimation) dependence, which can be significant. Theorem <ref> presents the attack complexity ignoring these errors. Suppose that the feature encoding state ρ(𝐱) is a multipartite state, specifically there exists a partition 𝒫 of qubits [n] such that ρ(𝐱) = ⊗_ J ∈𝒫ρ_J(𝐱), where we define 𝗑_J ⊆𝐱 to be components of 𝐱 on which ρ_J depends. In addition, we have as input an 𝒪((n))-dimensional snapshot vector 𝐞_snap with respect to a known basis 𝐁_k for the DLA of the VQC. Suppose that for ρ_J(𝐱) the following conditions are satisfied: * (𝐱_J) = 𝒪(1), * each x_k is encoded at most R=𝒪((n)) times in, potentially multi-qubit, Pauli rotations. * and the set 𝒮_J = { k : Tr(𝐁_kρ_J(𝐱)) ≠ 0 & Tr(𝐁_kρ_J^c(𝐱)) =0 } has cardinality at least (𝗑_J), where J^c := [n] - J. 
then the model admits quantum-assisted snapshot inversion for recovering 𝗑_J. Furthermore, a classical snapshot inversion can be performed if ∀ k, (𝐁_kρ_J(𝐱)) can be evaluated classically for all 𝐱. Overall, ignoring error in estimating (𝐁_kρ_J(𝐱)), with the chosen parameters, this leads to a 𝒪((n, log(1/ϵ))) algorithm. Given that each x_k is encoded with multiqubit Pauli rotations, i.e. possible eigenvalues are 1 and -1, it is well known <cit.> that the following holds: f_k(𝗑_J) = Tr(𝐁_kρ_J(𝐱)) = α_0 + ∑_r ∈ [R]^dim(𝗑_J)α_re^i𝐫·𝐱_r, ∀ k ∈𝒮, and Tr(𝐁_kρ_J(𝐱)) is real. The set 𝒮_J is to ensure that we can isolate a subsystem where (𝗑_J) is constant. To ensure that the number of terms is 𝒪(poly(n)) it suffices to restrict to dim(𝗑_J)=𝒪(log(n)), R= 𝒪(log(n)). The α coefficients can be computed by evaluating Tr(𝐁_kρ_J(𝐱)) at 2R^dim(𝗑_J) +1 = 𝒪(poly(n)) different points 𝐱'. Depending on whether Tr(𝐁_kρ_J(𝐱)) can be evaluated classically or quantumly implies whether this falls under classical or quantum-assisted snapshot inversion. This leads to a system of dim(𝗑_J) equations in 𝗑_J: [𝐞_snap]_k = f_k(𝗑_J), k=1, …, (). Using the Chebyshev polynomials T_n, U_n of the first and second kind, respectively, we can expressed the system as a system of polynomial equations with additional constraints: [𝐞_snap]_k = [α_0 + ∑_r ∈ [R]^dim(𝗑_J)α_𝐫∏_j=1^(𝗑_J)(T_r_j(u_j) + iv_jU_r_j-1(u_j))], k ∈𝒮_J u_j^2 + v_j^2 = 1, j ∈ J, where u_j = cos(x_j), v_k =sin(x_j). In addition, we use the Chebyshev polynomials defined as cos(nθ) = T_n(cos(θ)) and sin(θ)U_n-1(cos(θ))=sin(nθ). By our assumption that the DLA is polynomial, we have 𝒪(poly(n)) equations in 2(𝗑_J) = 𝒪(loglog(n)) unknowns. If all conditions until now are satisfied, we will have successfully written down a system of determined simultaneous equations. Considering bounds from computational geometry we note that in the worst-case of Buchberger's algorithm <cit.> the degrees of a reduced Gröbner basis are bounded by M = 2 ( Δ^2/2 + Δ)^2^Q - 2, where Δ is the maximum degree of any polynomial and Q is the number of unknown variables <cit.>. For a system of linear equations, it was shown that a worst case degree bound grows double exponentially in the number of variables <cit.>. The maximum degree of any equation in Eq <ref> is Δ = R^(𝗑_J), and Q = 2(𝗑_J) so that M = 𝒪(R^2(𝗑_J)2^(𝗑_J)), so for our chosen conditions the maximum degree is bounded by M = 𝒪(poly(n)). Buchberger's algorithm provides a Gröbner basis in which backsubstituion could be used to solve equations in one variable. Numerical methods for solving polynomials in one variable generally scale polynomially in the degree. For solving each univariate polynomial at each step of the back substitution, we can apply a polynomial root-finding method, such that Jenkins–Traub <cit.>, which can achieve at least quadratic global convergence (converge from any initial point and at a rate that is at least loglog(1/ϵ)). This leads to an overall 𝒪(poly(n, log(1/ϵ)) algorithm, ignoring error in estimating (𝐁_jρ_J). In the case a circuit has an encoding structure that leads to a separable state, we have indicated conditions that guarantee snapshot inversion can be performed. If the model is also snapshot recoverable, by having a polynomially sized DLA, then this means the initial data input can be fully recovered from the gradients, and hence the attack constitutes a strong privacy breach. 
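As a toy sanity check of the local-inversion idea (not the paper's Algorithm <ref>), assume a plain single-qubit R_X product encoding and assume the attacker can read off the expectations ⟨Y_j⟩ and ⟨Z_j⟩, e.g. because iY_j and iZ_j lie in the span of the recovered DLA basis; each x_j is then recovered exactly, up to periodicity, from a single arctangent.

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def pauli_on_qubit(P, j, n):  # I x ... x P (at position j) x ... x I
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, P if k == j else np.eye(2))
    return out

n = 4
x_true = np.random.uniform(-np.pi, np.pi, n)
psi = np.array([1.0 + 0j])
for xj in x_true:                       # |psi> = prod_j R_X(x_j)|0>
    psi = np.kron(psi, rx(xj) @ np.array([1, 0], dtype=complex))

x_rec = []
for j in range(n):
    ey = np.real(psi.conj() @ pauli_on_qubit(Y, j, n) @ psi)   # <Y_j> = -sin(x_j)
    ez = np.real(psi.conj() @ pauli_on_qubit(Z, j, n) @ psi)   # <Z_j> =  cos(x_j)
    x_rec.append(np.arctan2(-ey, ez))                          # invert, up to periodicity

print(np.max(np.abs(np.array(x_rec) - x_true)))                # ~1e-15
```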
§.§ Snapshot Inversion for Generic Encodings In the general case, but still with dim(𝔤) = 𝒪(poly(n)), where it is unclear how to make efficient use of our knowledge of the circuit, we attempt to find, via black-box optimization methods, an 𝐱 that produces the desired snapshot signature. More specifically, suppose for simplicity that we restrict our search to [-1, 1]^d. We start with an initial guess for the input parameters, denoted as 𝐱', and use these to calculate expected snapshot values Tr[𝐁_kρ(𝐱')]. A cost function can then be calculated that compares this to the true snapshot, denoted 𝐞_snap. As an example, one can use the mean squared error as the cost function, f(𝐱') = ‖𝐞_snap - (Tr[𝐁_kρ(𝐱')])_k=1^dim(𝔤)‖^2_2 = ∑_k ∈ [dim(𝔤)]([𝐞_snap]_k - Tr[𝐁_kρ(𝐱')])^2. The goal will be to solve the optimization problem min_𝐱' ∈ [-1, 1]^d f(𝐱'). For general encoding maps, it appears that we need to treat this as a black-box optimization problem, where we evaluate the complexity in terms of the number of evaluations of f or, potentially, its gradient. However, in our setting, it is unclear what the significance of finding an approximate local minimum is, and thus it seems that, for privacy breakage, we must resort to an exhaustive grid search. For completeness, we still state results on first-order methods that can produce approximate local minima. We start by reviewing some of the well-known results for black-box optimization. We first recall Lipschitz continuity. A function f : ℝ^d →ℝ is said to be L-Lipschitz continuous if there exists a real positive constant L > 0 for which | f(𝐱) - f(𝐲) |≤ L ‖𝐱 - 𝐲‖_2. Suppose we treat the quantum circuit as a black-box L-Lipschitz function and restrict 𝐱' to some convex, compact set with diameter P (e.g. [-1, 1]^d with diameter 2√(d)). One can roughly upper bound L by the highest frequency component of the multidimensional trigonometric series for f, which can be an exponential-in-n quantity. In this case, the number of function evaluations required to find 𝐱' such that ‖𝐱 - 𝐱'‖_2 ≤ϵ would scale as 𝒪( P(L/ϵ)^d ), which is the complexity of grid search <cit.>. Thus, even for constant L, this is a computationally daunting task, i.e., exponential in d =Θ(n). As mentioned earlier, it is possible to resort to first-order methods to obtain an effectively dimension-independent algorithm for finding an approximate local minimum. We recall the definition of β-smoothness. A differentiable function f : ℝ^d →ℝ is said to be β-smooth if there exists a real positive constant β > 0 for which ‖∇ f(x) - ∇ f(y) ‖_2≤β‖ x - y ‖_2. If we have access to gradients of the cost function with respect to each parameter, then using perturbed gradient descent <cit.> would roughly require 𝒪̃( P Lβ/ϵ^2) function and gradient evaluations for an L-Lipschitz function that is β-smooth to find an approximate local min. With regards to first-order optimization, computing the gradient of f can be expressed in terms of computing certain expectation values of ρ, either via finite-difference approximation or the parameter-shift rule for certain gate sets <cit.>. Regardless of whether recovering an approximate local min reveals any useful information about 𝐱, up to periodicity, it is still possible to make such a task challenging for an adversary. In general, the encoding circuit will generate expectation values with trigonometric terms. To demonstrate, we can consider a univariate case with a single trigonometric monomial f(x) = sin(ω x), with frequency ω. This function will be ω-Lipschitz continuous with an ω^2-Lipschitz continuous gradient. 
Hence, when considering the scaling of gradient-based approach in Eq <ref> we see that the frequency of the trigonometric terms will directly impact the ability to find a solution. Hence, if selecting a frequency that scales exponentially ω = 𝒪(exp(n)), then snapshot inversion appears to be exponentially difficult with this technique. Importantly, if the feature map includes high frequency terms, for example the Fourier Tower map of <cit.>, then β and L can be 𝒪(exp(n)). However, as noted in Section <ref> it is possible to make use of the circuit structure to obtain more efficient attacks. In addition, a poor local minimum may not leak any information about 𝐱. §.§.§ Direct Input Recovery Note that it also may be possible to completely skip the snapshot recovery procedure and instead variationally adjust 𝐱' so that the measured gradients of the quantum circuit C_j', match the known gradients C_j with respect to the actual input data. This approach requires consideration of the same scaling characteristics explained in Eq <ref>, particularly focusing on identifying the highest frequency component in the gradient spectrum. If the highest frequency term in the gradient C_j scales exponentially, ω = 𝒪(exp(n)), then even gradient descent based methods are not expected to find an approximate local min in polynomial time. Further privacy insights can be gained from Eq <ref> where a direct relationship between the gradients and the expectation value snapshot is shown, which in general can be written as C_j(𝐱) = χ_t^(j)·𝐞_snap(𝐱). This indicates that the highest frequency terms of any 𝐞_snap component will also correspond to the highest frequency terms in C_j(𝐱), as long as its respective coefficient is non-zero χ_t^(j)≠ 0. This underscores scenarios where direct input recovery may prove more challenging compared to snapshot inversion, particularly in a VQC model. Consider a subset 𝐞̃_snap⊆𝐞_snap where each component has the highest frequency that scales polynomially with n. If there are sufficiently many values in 𝐞̃_snap then recovering approximate local min to Eq <ref> may be feasible for these components. However, for gradient terms C_j(𝐱) that depend on all values of 𝐞_snap, including terms outside of 𝐞̃_snap that exhibit exponential frequency scaling, then gradient descent methods may take exponentially long when attempting direct input inversion, even if recovering approximate local minima to the snapshot inversion task can be performed in polynomial time. Investigations into direct input recovery have been covered in previous work <cit.> where the findings concluded that the gradients generated by C_j(𝐱) would form a loss landscape dependent on the highest frequency ω generated by the encoding circuit, indicating that exponentially scaling frequencies led to models that take exponential time to recover the input using quantum-assisted direct input recovery. The Fourier tower map encoding circuit used in <cit.> was designed such that ω scales exponentially to provide privacy, this was done by using m qubits in a sub-register per data input x_j, with the single qubit rotation gates parameterized by an exponentially scaling amount. The encoding can be defined as ⊗_j=1^d( ⊗_l=1^m R_𝖷( 5^l-1 x_j) ). Hence, the gradient contained exponentially scaling highest frequency terms, leading to a model where gradient descent techniques took exponential time. 
However, if considering the expectation value of the first qubit in a sub-register of this model, we note this corresponds to a frequency ω = 1 and hence the respective expectation value for the first qubit would be snapshot invertible. However, in the case of <cit.> the DLA was exponentially large, meaning the model was not snapshot recoverable, hence these snapshots could not be found to then be invertible. Hence, from our new insights, we can conclude that the privacy demonstrated in <cit.> was dependent on having an exponential DLA dimension. However, an exponentially large DLA also led to an untrainable model, limiting the real-world applicability of this previous work. Lastly, recall that Algorithm <ref> in the case poly DLA and slow Pauli expansion is a completely classical snapshot inversion attack for the Fourier tower map. Further, highlighting how snapshot inversion can be easier than direct inversion. We show that both direct input recovery and snapshot inversion are dependent on frequencies ω generated by the encoding circuit, highlighting that this is a key consideration when constructing VQC models. The introduction of high frequency components can be used to slow down methods that obtain approximate local minimum to Eq <ref>. However, for true privacy breakage, it appears that in general we still need to resort to grid search, which becomes exponentially hard with dimension regardless of high-frequency terms. However, for problems with a small amount of input, introducing high frequency terms can be used to also make grid search harder. The idea of introducing large frequencies is a proxy for the more general condition that our results hint at for privacy, which is that the feature map ρ(𝐱') should be untrainable in terms of varying 𝐱'. Notably, cases exist where the same model can have an exponential frequency gradient, but can still contain a certain number of expectation snapshot values with polynomial scaling frequencies. Hence, it is also important to note that merely showing that a model is not direct input recoverable does not guarantee privacy, as one needs to also consider that if the model is snapshot recoverable and that these snapshots may be invertible if sufficient polynomial scaling frequency terms can be recovered. This duality highlights the complexity of ensuring privacy in quantum computing models and stresses the need for a comprehensive analysis of the frequency spectrum in both model construction and evaluation of privacy safeguards. §.§.§ Expectation Value Landscape Numerical Results In this section, we provide a numerical investigation of the impact of high-frequency components in the encoding circuit on the landscape of Eq <ref> for snapshot inversion. The idea is to present examples that move beyond the Fourier tower map. We present two cases of encodings that would generally be difficult to simulate classically. By plotting a given expectation value against a univariate x we can numerically investigate the frequencies produced by both models. In Figure <ref> we demonstrate a circuit in which x parameterizes a single R_𝖷 rotation gate, but on either side of this is an unknown arbitrary unitary matrix acting on n qubits. This would be classically hard to simulate due to the arbitrary unitary matrices; however, the result effectively corresponds to taking measurements on an unknown basis, and using only a few samples of x it is possible to recreate the graph as a single frequency sinusoidal relationship. 
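To make the frequency argument concrete, the following small numerical sketch (an illustrative toy, assuming a Fourier-tower-style reuploading of a single scalar x and a Z⊗⋯⊗Z observable, whose expectation is ∏_l cos(5^{l-1} x)) verifies that the highest nonzero Fourier frequency grows as (5^m − 1)/4, i.e. exponentially in the number of reuploading qubits m.

```python
import numpy as np

def tower_expectation(x, m):
    # <Z x ... x Z> for the product state  prod_l R_X(5^(l-1) x)|0> : each qubit
    # contributes cos(5^(l-1) x), so the product mixes harmonics up to (5^m - 1)/4.
    return np.prod([np.cos(5**l * x) for l in range(m)], axis=0)

xs = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
for m in range(1, 5):
    f = tower_expectation(xs, m)
    spectrum = np.abs(np.fft.rfft(f))
    k_max = np.nonzero(spectrum > 1e-8 * spectrum.max())[0].max()
    print(f"m={m}: highest frequency {k_max} (predicted {(5**m - 1) // 4})")
```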
This results in the distance between the stationary points being r=π for any value of n. This corresponds to a frequency ω = r/π = 1, regardless of the value of n. This circuit therefore exhibits constant frequency scaling independent of n and hence could be easy for gradient-based methods to recovery approximate local min. We briefly give an example of a type of circuit that can generate high-frequency expectation values. Figure <ref> demonstrates a circuit where x parameterizes an SU(2^n) gate. The result when measuring the same expectation value corresponds to a highest frequency term that is exponentially increasing. This is shown in the plot in Figure <ref> in which the distance between stationary points r shrinks exponentially as the number of qubits increases for the SU(2^n) parameterized model, which roughly corresponds to an exponentially increasing highest frequency term. A comparison between the expectation value landscape of the two different encoding architectures, is shown in Figure <ref>, demonstrating that the single rotation gate parameterization, as shown in Figure <ref>, produces a sinusoidal single-frequency distribution, even as the number of qubits is increased; while the SU(2^n) gate parameterization, shown in Figure <ref>, contains exponentially increasing frequency terms. A visual representation for the multivariate case is also demonstrated in Figure <ref> which shows the expectation value landscape when two input parameters are adjusted, for a model comprised of two different SU(2^n) parameterized gates parameterized by the variables x_1 and x_2 respectively, demonstrating that as more qubits are used, the frequencies of the model increase and hence so does the difficulty of finding a solution using gradient descent techniques. The two example circuits demonstrate encoding circuits that are hard to simulate, and hence no analytical expression for the expectation values can be easily found. These models do not admit classical snapshot inversion; however, by sampling expectation values it may be possible to variationally perform quantum-assisted snapshot inversion. Whether numerical snapshot inversion can be performed efficiently will likely be affected by the highest frequency ω inherent in the encoding, which will itself depend on the architecture of the encoding circuit. This suggests that designing encoding circuits such that they contain high-frequency components is beneficial in high-privacy designs. We have shown that SU(2^n) parameterized gates can produce high-frequency terms, whereas single-qubit encoding gates will be severely limited in the frequencies they produce. § DISCUSSION We utilize this section to draw the connections between the two key properties of VQC: trainability, i.e., the lack of barren plateaus <cit.>, and the ability to retain privacy of input. Building upon this connection, we discuss the prospects and future of achieving robust privacy guarantees with VQC models. §.§ Connections between Trainability and Privacy in VQC Solely requiring a machine learning model to be private is not sufficient to deploy it for a practical use case of distributed learning such as federated learning. A key requirement in this collaborative learning scenario is also to ensure that the model remains trainable. A plethora of works have gone into exactly characterizing the trainability of VQC models by analyzing the presence of barren plateau in the VQC model, starting from the work of <cit.> and culminating in the works of <cit.>. 
In particular, the work of <cit.> provides an exact expression for the variance of the gradient of the model when the VQC is constrained to the LASA case, the details of which we also provide in Appendix <ref> for completeness. A key insight from these works is that LASA models with an exponentially sized DLA may exhibit barren plateaus (see Theorem <ref> in Appendix <ref>), drastically deteriorating the trainability of such models. Within our privacy framework centered around snapshot recoverability (Sec <ref>), we also show via Theorem <ref> that LASA models with an exponentially sized DLA are not classically snapshot recoverable, although such models may be untrainable. We can therefore conclude that a possible condition for protection against classical input recovery using gradients in a VQC model is to choose an ansatz that exhibits an exponentially large dynamical Lie algebra dimension, as this would render snapshot recovery difficult. Through our framework, we can see that previous works <cit.> effectively relied on this property to ensure privacy. Combining this with the concept of trainability leads to the following corollary on the privacy of VQC models: Any trainable VQC on n qubits that satisfies the LASA condition in Def <ref>, fulfills the slow Pauli expansion condition highlighted in Def <ref>, and has a DLA whose dimension scales as 𝒪(poly(n)) admits snapshot recoverability with complexity 𝒪(poly(n)). Hence we can conclude that, at least in the LASA case of VQCs, the privacy of the model is linked to the DLA dimension and, furthermore, that there is a direct tradeoff between privacy and trainability of the model. As exponentially sized DLA models are expected to be untrainable in the LASA case, this means that for realistic applications it does not seem feasible to rely on quantum privacy derived from an exponential DLA precluding snapshot recoverability in the model. This suggests that any privacy enhancement from quantum VQCs should not derive from the variational part of the circuit for LASA-type models that are intended to be trainable. In other words, we expect the majority of trainable VQC models to be vulnerable to weak privacy breaches. The privacy of variational models beyond the LASA case becomes linked to a larger question within the field, namely, whether there exist quantum variational models that are not classically simulatable and do not have barren plateaus <cit.>. It is also worth noting that if one attempted to create a model that is not snapshot recoverable by ensuring that D < dim(𝔤), and hence an underdetermined system of equations, it would effectively lead to an underparameterized model. A model is underparameterized when there are not enough variational parameters to fully explore the space generated by the DLA of the ansatz, which is a property that may not be desirable for machine learning models <cit.>. §.§ Future Direction of VQC Quantum Privacy Due to the above argument suggesting that achieving privacy via an exponentially large DLA may cause trainability issues in the underlying model, it appears that future improvements in privacy using VQCs may primarily focus on preventing the snapshot inversion step, as we highlight in Sec <ref>. This promotes a focus on the encoding circuit architectures of the VQC in order to prevent the model from admitting snapshot inversion that would facilitate input recovery. We have explicitly shown the necessary condition required to achieve privacy from purely classical attacks.
If it is not possible to classically simulate the expectation values of the quantum encoded state with respect to the DLA basis elements of the variational circuit, then it will not be possible to attempt classical analytical or numerical inversion attacks. Any VQC designed such that these expectation values cannot be simulated will, therefore, be protected against any purely classical snapshot inversion attempts. This condition can therefore prevent strong privacy breaches, as long as the attacking agent only has access to a classical device. In the case where the attacker can simulate expectation values of the DLA basis or has access to a quantum device to obtain the expectation values, then numerical classical snapshot inversion or numerical quantum-assisted snapshot inversion can be attempted, respectively. We have shown that in this case an important factor in preventing these techniques is that the expectation values have exponentially scaling frequency terms, resulting in attacks that require solving a system of high-degree polynomial equations. The implication is that achieving a useful privacy advantage in VQCs may require the encoding circuit to be constructed in such a way that the expectation values of the DLA basis elements of the variational circuit contain frequency terms that scale exponentially. Notably, we find that having high-frequency terms in the gradients, as suggested in the encoding circuit of <cit.>, does not necessarily protect against numerical snapshot inversion attacks. This is because the gradients inherit the highest frequency term from all expectation values, but there may be a sufficient number of polynomial-frequency expectation values to perform snapshot inversion, even if direct input inversion is not possible. Unlike the variational case, where a connection between DLA dimension and trainability has been established, the effect that privacy-enhancing quantum encodings would have on the trainability of a model is less clear. If the majority of expectation values used in the model contain exponentially large frequencies, then this potentially restricts the model to certain datasets. In classical machine learning, there have been positive results using trigonometric feature maps to classify high-frequency data in low dimensions <cit.>. It remains a question for future research which types of data can be trained appropriately using the proposed privacy-preserving high-frequency feature maps. If models of this form are indeed limited in number, then the prospects for achieving input privacy from VQC models appear to be limited. More generally, the prospect for quantum privacy rests on feature maps that are untrainable with regard to adjusting 𝐱' to recover the expectation values 𝐞_snap, while at the same time remaining useful feature maps with respect to the underlying dataset and overall model. § CONCLUSION In this research, we conducted a detailed exploration of the privacy safeguards inherent in VQC models regarding the recovery of original input data from observed gradient information. Our primary objective was to develop a systematic framework capable of assessing the vulnerability of these quantum models to a general class of inversion attacks, specifically through introducing the snapshot recovery and snapshot inversion attack techniques, which primarily depend on the variational and encoding architectures, respectively.
Our analysis began by establishing the feasibility of recovering snapshot expectation values from the model gradients under the LASA assumption. We demonstrated that such recovery is viable when the Lie algebra dimension of the variational circuit exhibits polynomial scaling in the number of qubits. This result underscores the importance of algebraic structure in determining the potential for privacy breaches in quantum computational models. Furthermore, because a polynomially scaling DLA dimension is commonly required for models to be trainable, our results suggest that a trade-off may exist between the privacy and trainability of VQC models. Assuming one insists on a polynomially sized DLA, our framework suggests that a weak privacy breach will always be possible for the type of VQC model studied. To ensure privacy of the model overall, one cannot rely on the variational circuit and needs instead to focus more on the encoding architecture and on ensuring that snapshot inversion cannot be performed. If snapshot inversion is not possible, then at least strong privacy breaches can be prevented. We then explored snapshot inversion, where the task is to find the original input from the snapshot expectation values, effectively inverting the encoding procedure. Studying widely used encoding ansätze, such as the local multiqubit Pauli encoding, we found that, under the conditions that a fixed subset of the data parameterizes a constituent state with sufficient overlap with the DLA and that the number of gates used to encode each dimension of the input 𝐱 is polynomial, snapshot inversion is possible in 𝒪(poly(n, log(1/ϵ))) time. This shows that a potentially wide range of encoding circuits are vulnerable to strong privacy breaches and brings their usage in privacy-focused models into question. For the most general encoding, which we approached as a black-box optimization problem, we demonstrated that using perturbed gradient descent to find a solution is constrained by the frequency terms within the expectation value Fourier spectrum. In general, for exactly finding 𝐱, it appears that a grid search would be required. Although we cannot provide strict sufficient conditions due to the possibility of unfavorable local minima under perturbed gradient descent, we note that snapshot inversion may, in some cases, be easier to perform than direct input data recovery from the gradients. This simplification arises because gradients can inherit the highest frequency term from the snapshots, potentially leading to scenarios where the gradient term contains exponentially large frequencies, while there may still be sufficiently many polynomial-frequency snapshots to permit snapshot inversion. This shifts the focus in attack models away from direct input recovery from gradients, a common approach in classical privacy analysis, towards performing snapshot inversion as detailed in this study as a potentially more efficient attack method. The dual investigation allowed us to construct a robust evaluative framework that not only facilitates the assessment of existing VQC models for privacy vulnerabilities but also aids in the conceptualization and development of new models where privacy is a critical concern.
Our reevaluation of previous studies, such as those cited in <cit.>, through the lens of our new framework reveals that the privacy mechanisms employed, namely the utilization of high-frequency components and an exponentially large DLA, effectively prevent input data recovery via a lack of snapshot recoverability, but at the same time contribute to an untrainable model of limited practical use. In conclusion, we offer a methodological approach for classifying and analyzing the privacy features of VQC models, presenting conditions for weak and strong privacy breaches for a broad spectrum of possible VQC architectures. Our findings not only enhance the understanding of quantum privacy mechanisms but also offer strategic guidelines for the design of quantum circuits that prioritize security while at the same time maintaining trainability. Looking ahead, this research paves the way for more robust quantum machine learning model designs, where privacy and functionality are balanced. This knowledge offers the potential to deliver effective machine learning models that simultaneously demonstrate a privacy advantage over conventional classical methods. § ACKNOWLEDGMENTS The authors thank Brandon Augustino, Raymond Rudy Putra and the rest of our colleagues at the Global Technology Applied Research Center of JPMorgan Chase for helpful comments and discussions. § DISCLAIMER This paper was prepared for informational purposes by the Global Technology Applied Research center of JPMorgan Chase & Co. This paper is not a product of the Research Department of JPMorgan Chase & Co. or its affiliates. Neither JPMorgan Chase & Co. nor any of its affiliates makes any explicit or implied representation or warranty and none of them accept any liability in connection with this paper, including, without limitation, with respect to the completeness, accuracy, or reliability of the information contained herein and the potential legal, compliance, tax, or accounting effects thereof. This document is not intended as investment research or investment advice, or as a recommendation, offer, or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction. § NOTATION USED IN THE WORK § GENERAL COST FUNCTION In this section we go into more detail regarding how the choice of cost function affects the attack procedure. In practical real-world examples, we would most likely have gradients with respect to some cost function ℒ, i.e., ∂ℒ(y_θ, y)/∂θ, rather than gradients of the form C_j = ∂ y_θ/∂θ that are used in this paper (the latter would correspond to a linear cost function of the form ℒ(y_θ, y) = y_θ - g(y), where g(y) is any function independent of y_θ). We briefly show here that, with a slight modification, our results hold for any differentiable cost function. Using the chain rule, we can see that the θ_j gradient can be written C̃_j = ∂ℒ(y_θ, y)/∂θ_j = ∂ℒ(y_θ, y)/∂ y_θ·∂ y_θ/∂θ_j. If the value of ∂ℒ(y_θ, y)/∂ y_θ can be calculated, which would be possible if we were given the output of the model y_θ along with the data labels y, then it would be possible to directly convert between C̃_j and C_j. When this is not the case, it is still possible to attempt a solution, although it requires one more gradient than usual. We show this by noting that the ∂ℒ(y_θ, y)/∂ y_θ term, although unknown, is the same for all θ_j.
Hence we can eliminate this term by considering ratios of the known gradients, defined as R_j ≡C̃_j/C̃_1 = (∂ y_θ/∂θ_j)/(∂ y_θ/∂θ_1). This allows us to write new equations for all j >1 without this unknown term as ∂ y_θ/∂θ_j= R_j ·∂ y_θ/∂θ_1, hence, in the notation used throughout this work, C_j = R_j · C_1. In Eq <ref> it was previously shown that the C_j gradients can be written in terms of snapshot components as C_j = ∑_t=1^dim(𝔤)χ_t^(j)( 𝐞_snap)_t. Hence, given dim(𝔤)+1 gradients C_j, we can construct dim(𝔤) simultaneous equations C_j - R_j · C_1 = ∑_t=1^dim(𝔤)( χ_t^(j)( 𝐞_snap)_t - R_j χ_t^(1)( 𝐞_snap)_t ) = 0 . Hence, we have a system of equations that can be solved as before, although, as we can technically only find the ratios between gradients, we will require dim(𝔤)+1 gradients as opposed to dim(𝔤) gradients. § HIGH FREQUENCY TRIGONOMETRIC MODELS The Fourier series picture was utilized in <cit.> to analyze the privacy of VQC models. In this section, we examine this picture with regard to privacy analysis. Quantum models admit a natural decomposition into a Fourier series, and it has been argued that if the model is constructed such that it contains high-frequency terms, then it gains certain privacy advantages <cit.>. This is not a uniquely quantum effect, however, and in this section we shall show how it can be recreated with a classical linear classifier equipped with a trigonometric feature map. §.§ Quantum Models are Linear Classifiers with Trigonometric Feature Maps The connection between quantum models and Fourier series has previously informed the construction of certain dequantisation techniques <cit.>. Classical surrogate models are capable of being run on classical machines and are simultaneously able to match the output of a quantum model to within some error. It is known that the output of a QML model can be written as a Fourier series <cit.> of the following form y(θ) = ∑_ω∈Ω A_ω e^i ω x, where the A_ω coefficients depend on the entire VQC architecture, but the frequency spectrum Ω depends only on the encoding architecture. If we define b_ω(θ) := A_ω(θ) + A_-ω (θ), d_ω(θ) := i(A_ω(θ) - A_-ω (θ)), then the model output can be written as y(θ) = a_0 + ∑_ω∈Ω( b_ωcos (ω x) + d_ωsin(ω x)). We can rewrite the model as a linear regression <cit.> defined by the inner product between a vector of coefficients A and a trigonometric feature map ϕ( x) as follows y(θ) = A ·ϕ( x), where A = √(| Ω|)(a_0, b_ω_1, d_ω_1, ..., b_ω_δ, d_ω_δ), ϕ( x) = 1/√(| Ω|)( 1, cos( ω_1 x), sin( ω_1 x),..., cos( ω_δ x), sin( ω_δ x)). Hence, a VQC model can effectively be thought of as a linear regression on data that has been transformed by a trigonometric feature map ϕ( x). However, a key difference is that the coefficients in this linear regression model are not varied directly in the quantum case; rather, the coefficients A depend on the θ values in the variational circuit and therefore evolve as the θ values are adjusted during training. §.§ Classical High Frequency Trigonometric Models We consider the case of a fully classical model with a classical attack model. In this regime, we encode the data using some feature map ϕ(x) and then perform linear regression on the resulting input. This can be considered as a single-layer neural network with a single output neuron and the identity as its activation. Note that in the quantum case, the corresponding ϕ(x) is exponentially large.
If we maintained this condition here, then the classical model could not even be evaluated (as even storing ϕ(x) would require exponential resources), and hence, in some sense, we would achieve a trivial form of privacy by considering a model we do not have the resources to run in the first place. We therefore assume a polynomial number of ϕ(x) values in the classical model. This setup, while technically distinct, is reminiscent of the technique of Random Fourier Features <cit.>, which has been shown to be successful in creating classical surrogates of quantum models by utilizing a polynomial number of Fourier features. The classical model output is defined by y(θ) = A ·ϕ(x), where in the classical case the coefficients A are adjusted directly, so we are effectively performing a linear regression. We can write the gradients as C_j = ∂ℒ/∂ A_j = -2(y_i - 𝐀·ϕ( x)) [ ϕ( x) ]_j. This allows simple relations between all the parameters to be written using the fact that C_k/C_l = [ ϕ( x) ]_k/[ ϕ( x) ]_l, which, when substituted into the normalisation condition |ϕ(x)|^2 = 1, yields the closed-form solution [ ϕ( x) ]_i = C_i/√(∑_j C_j^2) (up to an overall sign). Hence, it is easy to recover the ϕ(x) values from the gradients. If we have ϕ(x), then we can also simply invert the equation to find x: if we know that [ ϕ( x) ]_k = cos(ω_k x), then we can find x = 1/ω_kcos^-1([ ϕ( x) ]_k). This demonstrates that the feature map in the classical case is both retrievable and invertible (up to periodicity, unless we enforce the feature map to be injective), and hence there is no analytical privacy in the feature space; a short numerical sketch contrasting this with the quantum case is given below. An extension of this would be to use a neural network with the feature map ϕ(x) for the inputs. As long as the ϕ(x) values can be recovered by considering the neural network equations, it will be possible to invert this feature map. Trigonometric feature maps of this structure have found success in some classical machine learning applications, such as learning high-frequency functions in low-dimensional spaces <cit.>. §.§ Quantum High Frequency Trigonometric Models In the quantum case, one assumes an exponentially large ϕ(x) generated by the quantum variational model, as was the case in <cit.>. This alone would not be a problem for a classical model using a trigonometric feature map, as the ratio of specific gradients could be taken such that C_k/C_k+1 = [ ϕ( x) ]_k/[ ϕ( x) ]_k+1 = sin (ω_k x)/cos (ω_k x) = tan (ω_k x), which would still allow one to invert and find x as x = 1/ω_ktan^-1( C_k/C_k+1), using only two gradient values. However, in a quantum variational circuit, we do not train the Fourier coefficients A directly, but rather the gate parameters θ that themselves influence the coefficients A. Hence, we cannot perform the same technique to recover this ratio of feature vector elements. If we knew the coefficients A and ∂ℒ/∂ A_j, then it would be an easy task. However, we only have access to the θ gradients, which can be written C_j = ∂ℒ/∂θ_j = -2(y_i - A ·ϕ( x)) ( ∂ A/∂θ_j·ϕ( x) ). A solution to this system of equations would require calculating the inner product between exponentially large vectors. Even if one attempts a trick similar to the classical case to eliminate the A ·ϕ( x) term, this would yield C_k/C_l = ( ∂ A/∂θ_k·ϕ( x) )/( ∂ A/∂θ_l·ϕ( x) ), which would still require calculating the inner product of exponentially large vectors (before even considering the challenge of finding the ∂ A/∂θ_j terms).
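To make the contrast concrete, the classical recovery-and-inversion pipeline described above can be sketched in a few lines. This is a toy illustration only: the frequency spectrum, the label, the random coefficients, and the unit-norm convention for ϕ(x) below are assumptions made for the sketch, not quantities taken from any model in this work.

```python
import numpy as np

# Assumed toy setup: a classical linear model y = A . phi(x) with a
# trigonometric feature map, normalised so that |phi(x)|^2 = 1 exactly.
omegas = np.array([1.0, 2.0, 3.0])   # assumed frequency spectrum
x_true = 0.7                         # the "private" scalar input
y_lab = 0.3                          # an arbitrary data label

def phi(x):
    feats = np.concatenate([[1.0], np.cos(omegas * x), np.sin(omegas * x)])
    return feats / np.sqrt(1.0 + len(omegas))   # unit norm for every x

rng = np.random.default_rng(0)
A = rng.normal(size=phi(0.0).shape)  # classical coefficients, trained directly

# Gradients of the squared loss w.r.t. A_j share one unknown scalar prefactor:
#   C_j = -2 (y - A.phi(x)) [phi(x)]_j
C = -2.0 * (y_lab - A @ phi(x_true)) * phi(x_true)

# Step 1: recover phi(x) from the gradients via the normalisation condition
# (up to a sign, fixed here using the constant feature).
phi_rec = C / np.sqrt(np.sum(C**2))
phi_rec *= np.sign(phi_rec[0])

# Step 2: invert one cosine feature to recover x (up to periodicity).
cos_val = phi_rec[1] * np.sqrt(1.0 + len(omegas))
x_rec = np.arccos(np.clip(cos_val, -1.0, 1.0)) / omegas[0]
print(x_true, x_rec)   # both ~0.7: no feature-space privacy classically
```

The quantum obstruction discussed above is exactly that the analogue of the array C is not available: the θ gradients mix all components of ϕ(x) through inner products with exponentially large vectors ∂A/∂θ_j, so neither step of this sketch can be carried out.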
Hence, even if the feature map ϕ(x) is easy to invert, we see that the privacy of quantum variational circuits in the trigonometric feature map space can derive from the fact that ϕ(x) is not recoverable. This is analogous to the snapshot recovery discussed in the main work. While both classical and quantum high-frequency trigonometric models provide protection against machine learning attacks and analytical attacks in the input space, the exponential nature of quantum models provides analytical privacy in the trigonometric feature vector space, while polynomial-sized classical models do not exhibit privacy in the feature map space and are feature vector recoverable. This section has shown explicitly how an exponential number of frequencies provides protection against analytical attacks in the quantum case when considering the Fourier space. However, such models may still be vulnerable to the snapshot recovery and inversion techniques specified in the main text. §.§ Results on Trainability of VQCs One of the main trainability problems that plague VQCs is exponentially vanishing gradients, more commonly known as the barren plateau (BP) problem <cit.>. The BP problem has been characterized <cit.> for a restricted class of VQCs known as Lie algebraic supported ansätze (LASA), which cover a wide variety of commonly used models. Hence, when looking for trainable quantum models, it is easiest to restrict to the LASA setting, which is what is done in this paper. Furthermore, it can be shown that, under a stronger yet potentially more reasonable definition of vanishing gradients for VQCs, a necessary condition for a LASA to avoid a BP is to have a dynamical Lie algebra (DLA) with a polynomial dimension. It has been conjectured that this claim is far-reaching, in the sense that models that avoid BPs evolve within spaces that are polynomially sized in terms of the number of qubits <cit.>. The following theorem presents a closed-form expression for the variance, over uniform parameter initialization, of the quantum circuit gradient for input 𝐱, denoted GradVar_𝐱. Consider an ansatz 𝐔(θ) with DLA 𝔤 admitting a decomposition into simple ideals 𝔤_α and its center 𝔠. Given the input state ρ(𝐱), and if the measurement operator satisfies i𝐎∈𝔤 (LASA condition), then the variance of the gradient for the classical input 𝐱, denoted by GradVar_𝐱, is given by GradVar_𝐱 = ∑_α‖𝐇_𝔤_α‖^2_K ‖𝐎_𝔤_α‖^2_F ‖ρ_𝔤_α(𝐱)‖^2_F/(dim𝔤_α)^2, where the subscript 𝔤_α under the operators 𝐇, 𝐎 and ρ denotes the orthogonal projection (under the Frobenius inner product) onto the ideal 𝔤_α⊆𝔤. Further, ‖𝐇‖^2_K is the Killing norm <cit.> of the generator 𝐇 of the parameter θ with respect to which the gradient is computed, i.e., the 𝐇 appearing as e^-θ i𝐇 in the ansatz 𝐔(θ). The Killing norm is defined to be the Frobenius norm of the operator ad_i𝐇. A VQC is defined to be trainable when gradients can be efficiently estimated. The VQC is considered trainable for input 𝐱 if GradVar_𝐱 vanishes no faster than inverse-polynomially, i.e., GradVar_𝐱 = Ω(1/poly(n)), where n is the number of qubits. [DLAs are Reductive <cit.>] The DLA 𝔤 admits the following orthogonal (in the Frobenius inner product) decomposition: 𝔤 = ⊕_α𝔤_α⊕𝔠, where each 𝔤_α⊂𝔤 is a simple ideal (a minimal Lie subalgebra satisfying ∀𝐇∈𝔤, ∀𝐊∈𝔤_α, [𝐇, 𝐊 ] ∈𝔤_α) and 𝔠⊂𝔤 is the center of 𝔤. This expression reveals the explicit dependence of the gradient variance on the DLA dimension in the LASA setting, which is the reasoning behind the conjectured necessity that the DLA dimension must be polynomial to avoid BPs.
More specifically, we can immediately see from Theorem <ref> that as long as dim(𝔤) scales polynomially in n, and the Frobenius norms of the projections of the input state and the measurement operator onto the DLA are not vanishingly small in n, the variance of the gradient does not decay exponentially in n, which leads to a trainable model. Another source of untrainability arises when the cost function optimization landscape is swamped with spurious local minima, which makes it infeasible for any efficient optimizer to find a good approximation to the optimal solution. As shown by <cit.>, models in the underparameterized phase, characterized by fewer trainable parameters than the degrees of freedom of the system, exhibit this spurious local minima behavior. A phase transition occurs when the number of parameters scales with the system's degrees of freedom: the local minima then concentrate around the global minimum, making it easier for optimizers to converge to a good approximate solution. This is referred to as the overparameterized phase. The work by <cit.> shows that overparameterization can be achieved with the number of model parameters scaling as the DLA dimension. Thus, for a poly(n)-sized DLA, overparameterization requires only polynomial ansatz depth, ensuring trainability.
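As a purely numerical illustration of this scaling argument, one can tabulate the variance expression of Theorem <ref> for a single simple ideal while varying only the DLA dimension. The order-unity values assumed below for the Killing and Frobenius norm factors are arbitrary placeholders, not values computed from any real ansatz or input state.

```python
# Toy check of how the gradient variance scales with the DLA dimension,
# following the closed-form expression above with a single simple ideal.
def grad_var(dim_g, h_killing2=1.0, o_frob2=1.0, rho_frob2=1.0):
    # all norm factors are assumed order-unity placeholders
    return h_killing2 * o_frob2 * rho_frob2 / dim_g**2

for n in (4, 8, 12, 16):
    poly_dla = n**2        # e.g. a DLA whose dimension grows polynomially in n
    exp_dla = 4**n - 1     # e.g. a controllable ansatz with an su(2^n) DLA
    print(n, grad_var(poly_dla), grad_var(exp_dla))
# The polynomially sized DLA yields only polynomially decaying variance,
# whereas the exponentially sized DLA suppresses it exponentially, i.e. a
# barren plateau (and, per the main text, no classical snapshot recovery).
```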
http://arxiv.org/abs/2405.10081v1
20240516132441
General relativistic magnetohydodynamics simulations for binary neutron star mergers
[ "Kenta Kiuchi" ]
astro-ph.HE
[ "astro-ph.HE", "gr-qc" ]
Kenta Kiuchi, Max-Planck Institute for Gravitational Physics, Am Mühlenberg 2, 10559, Potsdam, Germany (kenta.kiuchi@aei.mpg.de) General relativistic magnetohydrodynamics simulations for binary neutron star mergers Kenta Kiuchi May 20, 2024 ==================================================================================== Binary neutron star mergers used to be the most promising candidate sources of gravitational waves for ground-based gravitational wave detectors such as Advanced LIGO and Advanced Virgo. This was proved by the detection of gravitational waves from a binary neutron star merger in 2017. Numerical modeling is pivotal in predicting and interpreting binary neutron star mergers. This chapter reviews the progress of fully general relativistic magnetized binary neutron star merger simulations. From 2008 to 2024, about forty numerical relativity simulations of magnetized binary neutron star mergers were conducted with differing levels of sophistication. This chapter aims to give a comprehensive view of the magnetohydrodynamics effects in binary neutron star mergers by reviewing all the related works. § INTRODUCTION After the observation of GW170817 associated with the electromagnetic counterparts GRB 170817A and AT 2017gfo <cit.>, binary neutron star mergers became a leading player in the multimessenger era. Numerical relativity is the method of choice for constructing theoretical models of binary neutron star mergers. All fundamental interactions are equally essential in binary neutron star mergers. Thus, numerical relativity codes should self-consistently implement all the effects of the fundamental interactions. The numerical relativity community started making an effort to build a physical model of binary neutron star mergers before the GW170817 event from two aspects. One is the magnetohydrodynamics (MHD) effect in binary neutron star mergers. It is motivated by the observational fact, established by binary pulsars, that neutron stars generally have magnetic fields <cit.>. The community is trying to figure out what the MHD effects are in binary neutron star mergers. The other aspect is microphysics, i.e., the neutrino emission process (see, for example, Refs. <cit.> for reviews of the progress in this aspect). Since 2015, it has become feasible to perform simulations that combine the two aspects, i.e., the MHD and neutrino effects <cit.>. General relativistic magnetohydrodynamics (GRMHD) in full general relativity was initiated in 2005 <cit.>. Subsequently, a couple of numerical relativity codes successfully implemented GRMHD <cit.> and started to explore binary neutron star mergers. The modeling of magnetized binary neutron star mergers is classified into three categories: (i) Three-dimensional binary neutron star merger simulations, particularly focusing on the merger dynamics <cit.>. (ii) Simulations of binary neutron star merger remnants, in which the remnant is constructed manually from an equilibrium configuration or by mapping from three-dimensional merger simulations <cit.>. (iii) Force-free simulations of inspiraling binary neutron stars, whose aim is to explore precursor signals before the merger <cit.>. Since GW170817, the importance of self-consistent modeling of binary neutron star mergers has kept growing because comparing numerical relativity simulations to observational data is mandatory for interpreting and predicting gravitational wave events.
Therefore, the numerical relativity community is responsible for conducting simulations that are quantitatively accurate enough for such a comparison. However, this goal is still far off. For this reason, in this chapter we mainly review category (i). Since the initiation of GRMHD simulations of binary neutron star mergers, this field has been evolving rapidly, but at the same time the situation is “chaotic”. Namely, there is no clear consensus among different numerical relativity groups on the role of MHD instabilities, jet launching, and neutron-rich matter ejection. Therefore, in this chapter, we intend to review all the related works as critically as possible and try to seek a way to deepen our understanding of the MHD effects in binary neutron star mergers. Readers may find a detailed discussion of the formulation, methodology, and implementation in the other chapters. Also, readers may be interested in the review of GRMHD simulations of binary neutron star mergers <cit.>. Despite their pioneering role, we will not review Newtonian magnetized binary merger simulations. The notation in this chapter follows the standard notation in this field. For example, B^i, R, ρ, Ω, and v^i denote the magnetic field, the radius, the rest-mass density, the angular velocity, and the velocity, respectively. The structure of this chapter is as follows. In Sec. <ref>, we review a couple of crucial ingredients for the MHD effects in the context of binary neutron star mergers. Section <ref> is devoted to the review of the magnetized binary neutron star merger simulations conducted to date. In Sec. <ref>, we summarize the current status of our understanding of the MHD effects in binary neutron star mergers and discuss prospects. § MAGNETOHYDRODYNAMICS INSTABILITIES, NEUTRINO EMISSION, AND LARGE-SCALE DYNAMO This section reviews several relevant ingredients in binary neutron star mergers. The first one is magnetohydrodynamics (MHD) instabilities. We begin by estimating the magnetic-field energy in the pre-merger stage with the observed magnetic-field strength in binary pulsars <cit.>: E_ mag≈ 4 × 10^41  erg B_12^2 R_6^3, where B_12=B/10^12 G and R_6=R/10^6 cm. The magnetic field strength in the observed binary pulsars is in the range of ∼ 10^7–10^12  G <cit.>. The typical kinetic energy just before the merger is E_kin≈ 2× 10^53  erg M_2.7 v^2_10, where M_2.7=M/2.7M_⊙ and v_10=v/10^10 cm s^-1. Therefore, the magnetic field is dynamically unimportant in the pre-merger stage unless we consider an unrealistically ultra-strong magnetic field. Several MHD instabilities have been proposed that could amplify the magnetic field efficiently on a short timescale up to the saturation level, implying that the magnetic field could become dynamically important during and after the merger. In Ref. <cit.>, Rasio and Shapiro first proposed that the Kelvin-Helmholtz instability could amplify the magnetic field when the two neutron stars collide and a shear layer forms at the contact interface, unless the binary neutron star promptly collapses to a black hole <cit.>. Because the growth rate of this instability is proportional to the wavenumber, small-scale vortices could grow on a much shorter timescale than the relevant dynamical timescale. These vortices could curl the magnetic field lines, which results in an efficient small-scale magnetic field amplification. Therefore, to confirm this picture in a grid-based simulation, the spatial grid resolution is a crucial ingredient.
The Kelvin-Helmholtz instability phase is not expected to continue throughout the entire post-merger stage because the shock waves generated by the colliding motion of the two neutron stars would dissipate the shear layer. Namely, the shear layer would survive only for a particular timescale. This timescale strongly depends on the binary neutron star model, such as the equation of state of nuclear matter, the neutron star masses, and the neutron star spins in the pre-merger stage. For example, if we consider a “soft” equation of state, the approaching velocity when the two neutron stars collide becomes faster compared to a “stiff” equation of state because the neutron stars are more compact <cit.>. Consequently, the timescale for which the shear layer survives becomes shorter. This implies that the grid-resolution requirement for resolving the Kelvin-Helmholtz instability becomes more challenging in such a case, because in reality we expect the instability to amplify the magnetic field up to the saturation level within the lifetime of the shear layer. The pre-merger magnetic-field topology should also be explored in the context of the Kelvin-Helmholtz instability. However, a careful assessment is necessary to explore the saturation due to the Kelvin-Helmholtz instability. After the Kelvin-Helmholtz instability phase, the merger remnant could be subject to the magneto-rotational instability (MRI) <cit.> because it generally rotates differentially <cit.>. The typical MRI wavelength of the axisymmetric fastest-growing mode is λ_MRI = (B_p/√(4πρ)) (2π/Ω) ≈ 8 × 10^3  cm B_p,15ρ_15^-1/2Ω_8000^-1, where B_p,15=B_p/10^15  G, ρ_15=ρ/10^15  g cm^-3, and Ω_8000=Ω/8000  rad s^-1. In reality, the Kelvin-Helmholtz instability saturation could set the poloidal magnetic-field strength B_p at the onset of the MRI phase. In a grid-based numerical relativity simulation, the spatial grid resolution is again a key ingredient in properly resolving the MRI. The MRI quality factor Q_MRI quantifies the ability of the employed grid resolution to resolve the MRI: Q_MRI = λ_MRI/Δ x. The critical MRI quality factor below which the MRI-driven turbulence cannot be sustained is ≈ 8–10 <cit.>. It should be noted that without estimating the MRI quality factor, it is hard to quantify how the MRI turbulence is sustained (see also the discussion about Q_ MRI in the binary neutron star merger context below). One caveat on this estimate, which is based on the ideal MRI assumption, is that in reality neutrino viscosity and drag could significantly suppress the MRI through diffusive and damping processes <cit.>. The neutrino viscosity (drag) becomes relevant when the neutrino mean free path becomes shorter (longer) than λ_ MRI. The dispersion relations of the viscous and drag-damped MRI are [(σ̃_ vis+k̃^2_ visν̃_ν)σ̃_ vis+k̃^2_ vis]^2+κ̃^2[σ̃^2_ vis+k̃^2_ vis]-4k̃^2_ vis=0, and [(σ̃_ drag+Γ̃_ν)σ̃_ drag+k̃^2_ drag]^2+κ̃^2[σ̃^2_ drag+k̃^2_ drag]-4k̃^2_ drag=0, respectively, where σ̃_ vis/drag=σ_ vis/drag/Ω, k̃_ vis/drag=k_ vis/dragv_A/Ω, κ̃^2=κ^2/Ω^2, ν̃_ν=ν_νΩ/v_A^2, and Γ̃_ν=Γ_ν/Ω. σ_ vis/drag and k_ vis/drag are the growth rate and wave number of the MRI. v_A=B^z_hyp/√(4πρ) is the Alfvén wave speed with the hypothetical poloidal magnetic-field strength B^z_hyp. κ is the epicyclic frequency. ν_ν and Γ_ν are the neutrino viscosity and drag damping rates, respectively. Reference <cit.> calibrated them in a one-dimensional supernova simulation with ab initio neutrino radiation transport: ν_ν = 1.2× 10^10 cm^2 s^-1 ρ_13^-2T_10^2, Γ_ν=6× 10^3 s^-1 T_10^6, where ρ_13=ρ/10^13  g cm^-3 and T_10=T/10  MeV.
The neutrino mean free path l_ν is also calibrated as l_ν = 2× 10^3 cm ρ_13^-1T_10^-2. Therefore, with microphysics, i.e., finite-temperature effects and neutrino emission, it is possible to estimate the growth rate of the non-ideal MRI. This suggests the importance of implementing microphysics in magnetized binary neutron star merger simulations. It should be noted that the MRI is relevant for sustaining the magneto-turbulence inside the merger remnant, because the resultant turbulent viscosity facilitates angular momentum transport and heats the matter inside the merger remnant. In other words, the MRI's linear, exponentially growing phase could be irrelevant in the binary merger context unless the Kelvin-Helmholtz instability is suppressed. The Shakura-Sunyaev parameter is another important diagnostic to quantify the effective turbulent viscosity driven by the MRI <cit.>: α_SS≡ -⟨b_(r)b_(ϕ)/4π P⟩, where P is the pressure, and b_(i) is the tetrad component of the magnetic field measured in the fluid rest frame. ⟨·⟩ denotes the time-ensemble average. Besides the turbulent viscosity, MRI-driven turbulence would play another important role. Since the Kelvin-Helmholtz instability could generate a strong, but small-scale, magnetic field on a short timescale, a mechanism is necessary to convert such a small-scale field into a large-scale one for the magnetic field to be dynamically important inside the merger remnant. The mean-field dynamo theory, in which each physical quantity Q is decomposed into a mean component Q̅ and a fluctuating component q, i.e., Q=Q̅+q, is a candidate for such a mechanism. We assume that the average of Q in the azimuthal direction gives the mean field Q̅ in the binary neutron star merger context because the merger remnant has an approximately axisymmetric structure. Thus, in the framework of ideal MHD, we cast the induction equation as ∂_t B̅ = ∇×(V̅×B̅+ℰ̅), where ℰ̅=v×b (averaged over the azimuthal direction) is the mean electromotive force generated by the fluctuations, B=B̅+b is the magnetic field, and V=V̅+v is the velocity field. The simplest mean-field dynamo is the αΩ dynamo known from the context of solar physics, in which we express the electromotive force as a function of the mean magnetic field: ℰ̅_i = α_ijB̅_j + β_ij(∇×B̅)_j, where α_ij and β_ij are tensors that should not depend on the mean magnetic field <cit.>. The former tensor is called the dynamo α, and the second term is the turbulent resistivity. If we assume that the diagonal component of the first term on the right-hand side is dominant and that the rotation is purely cylindrical, V̅=RΩe_ϕ, we can reduce Eq. (<ref>) to ∂_t B̅_R = - ∂_z [(V̅×B̅)_ϕ + ℰ̅_ϕ]≈-∂_z(α_ϕϕB̅_ϕ), ∂_t B̅_z = ∂_R [(V̅×B̅)_ϕ + ℰ̅_ϕ] ≈∂_R(α_ϕϕB̅_ϕ), ∂_t B̅_ϕ = R B̅_A ∂_A Ω, where we work in cylindrical coordinates, assume ∂_ϕ=0, and A=R,z. These equations tell us that the mean poloidal magnetic field B̅_A is generated by the electromotive force described by the dynamo α and the mean toroidal field B̅_ϕ; this is called the α effect. The mean toroidal field B̅_ϕ is generated by the differential rotation and the mean poloidal field B̅_A; this is called the Ω effect, or simply magnetic winding. Therefore, if this is realized inside the binary neutron star merger remnant, the dynamo cycle is closed and a large-scale (mean-field) magnetic field is generated. In the binary neutron star merger context, the MRI could produce and sustain the turbulence, i.e., v and b, which is the key to generating the electromotive force ℰ̅.
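To see how the reduced equations above close the dynamo loop, the following deliberately crude one-zone sketch may be useful. The vertical derivative acting on α_ϕϕB̅_ϕ is replaced by division by an assumed scale height H, the shear term by qΩB̅_R, and the sign of the α term is chosen so that the loop amplifies rather than oscillates; the parameter values (α_ϕϕ, q, H, the seed fields) are illustrative assumptions, not calibrated to any merger simulation.

```python
import numpy as np

# Crude one-zone caricature of the alpha-Omega loop (all values assumed):
alpha = 1.0e6      # cm/s   assumed dynamo-alpha (alpha_phiphi)
Omega = 8.0e3      # rad/s  remnant angular-velocity scale used in the text
q = 1.0            # assumed dimensionless shear rate, ~ -dlnOmega/dlnR
H = 1.0e6          # cm     assumed vertical scale height of the remnant

B_R, B_phi = 1.0e13, 1.0e13        # G, seed mean fields
dt, n_steps = 1.0e-5, 10000        # explicit Euler for 0.1 s
for _ in range(n_steps):
    dB_R = (alpha / H) * B_phi     # alpha effect: mean B_phi regenerates B_R
    dB_phi = q * Omega * B_R       # Omega effect (winding): B_R -> B_phi
    B_R, B_phi = B_R + dt * dB_R, B_phi + dt * dB_phi

# The closed loop grows at roughly sqrt(alpha*q*Omega/H) ~ 90 s^-1 here,
# i.e. the mean field is amplified by orders of magnitude within ~0.1 s.
print(np.sqrt(alpha * q * Omega / H), B_R, B_phi)
```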
Since turbulence is easily killed by numerical diffusion in a simulation, high-resolution simulations are the key to exploring the magnetic field evolution in the binary neutron star merger context. We should also mention the relevance of convergence studies with respect to the spatial grid resolution for exploring the MHD effects in binary neutron star mergers. We begin by estimating the magnetic winding timescale originating from a pre-merger large-scale magnetic field. Suppose we consider a binary neutron star merger at the highly magnetized end of the observed binary pulsars, 10^12 G. The winding timescale t_A is t_A ∼ R/v_A ∼ 100  s B̅_R,12^-1ρ_15^1/2 R_6, where B̅_R,12=B̅_R/10^12  G. Therefore, the magnetic winding originating from the pre-merger large-scale magnetic field should be irrelevant during the post-merger evolution within a timescale of O(10) s. However, the existing general relativistic magnetized binary neutron star merger simulations employed pre-merger magnetic fields of 10^15–10^17  G, except for the simulations in Refs. <cit.> and the simulations employing a sub-grid model <cit.>. The primary reason for using such an unrealistic magnetic field strength is to compensate for the extensive computational cost of resolving the Kelvin-Helmholtz instability and the MRI (see Eq. (<ref>)). However, the trade-off is that the winding timescale originating from the assumed pre-merger large-scale magnetic field is shortened: t_A could become comparable to or shorter than the timescale relevant for the post-merger evolution (see Eq. (<ref>)). Therefore, if we employ a single grid resolution with a pre-merger large-scale magnetic field of 10^15–10^17  G, we cannot disentangle whether or not the magnetic winding originating from such a large-scale magnetic field determines the subsequent evolution of the merger remnant. Since the magnetic winding is relatively easy to resolve numerically, a convergence study is essential to disentangle this point. Namely, if the magnetic winding originating from the pre-merger large-scale field plays a primary role in the subsequent merger remnant evolution, the result should not differ significantly between simulations with different resolutions. If resolving the small-scale turbulence is essential, as for the MRI-driven αΩ dynamo mentioned above, a convergence study would give significantly different results. Therefore, in the following section, we review the previous works by explicitly mentioning the grid resolution, the pre-merger large-scale magnetic field, and the level of sophistication of the microphysics. § MAGNETOHYDRODYNAMICS SIMULATIONS In this section, we review the general relativistic magnetized binary neutron star merger simulations performed in the framework of the ideal and resistive MHD approximations. To clarify how our community has deepened its understanding of the MHD effects in binary neutron star mergers, we review all the related works chronologically. Before starting the review, we should mention the pre-merger magnetic field topology widely used in the numerical relativity community. The field topology is called a confined field configuration: A_ϕ = A_b max(P-P_ cut,0)^n_s, where A_ϕ is the toroidal component of the vector potential, P is the pressure, P_ cut is the cut-off pressure, parameterized as a fraction of the maximum pressure, and n_s is a concentration parameter. A_b is chosen so that the desired magnetic field strength is achieved.
Together with the assumption of no poloidal vector potential, A_r=A_θ=0, this gives a confined, purely poloidal magnetic field. Since it has been shown that the simulation results are not sensitive, at least qualitatively, to the choice of P_ cut and n_s, the default setting for the pre-merger magnetic field topology in this review is the confined configuration, without specifying P_ cut and n_s. Otherwise, we explicitly describe the magnetic field topology. The merger remnant of a binary neutron star merger is classified into three classes: a prompt collapse case, a short-lived case, and a long-lived case <cit.>. For the prompt collapse case, the merger remnant collapses to a black hole immediately after the merger. For the short-lived case, the remnant massive neutron star survives for O(0.01)  s, a timescale shorter than the MRI-driven turbulent viscous timescale, the magnetic winding timescale, and the neutrino cooling timescale. [The neutrino cooling timescale is estimated by t_ν, cool∼ E_ thr,53/L_ν,53∼ O(1)  s, where E_ thr,53=E_ thr/10^53  erg and L_ν,53=L_ν/10^53  erg/s are the thermal energy and the neutrino luminosity, respectively.] For the long-lived case, the merger remnant massive neutron star survives for O(1-10)  s. Anderson and his collaborators initiated general relativistic magnetized simulations of symmetric binary neutron star mergers <cit.>. They employed a grid resolution of Δ x=460  m and a pre-merger magnetic field of 9.6× 10^15  G. The neutron star was modeled by a polytropic equation of state with Γ=2. They concluded that the Kelvin-Helmholtz instability amplifies the magnetic field. They also reported the emergence of the Taylor instability with the m=1 mode and the magnetic buoyancy instability, and they quantified how the magnetic field modifies the post-merger gravitational waveforms. However, because of the lack of a convergence study, there was room for improvement in understanding whether the Kelvin-Helmholtz instability is really activated. Subsequently, Liu and his collaborators simulated magnetized symmetric and asymmetric binary neutron star mergers in full general relativity <cit.>. They employed a grid resolution of Δ x ≲ 700  m and a pre-merger magnetic field of 10^16  G. [Precisely speaking, their simulation results are scale-free because of the normalization with the polytropic constant K.] The neutron star was modeled by the Γ-law equation of state with Γ=2. Their conclusions are summarized as follows: (i) For the hypermassive neutron star formation case, the MHD effects can cause observable differences in the dynamics and gravitational waves, but they are not as significant as those reported in Ref. <cit.>. The difference might originate from the initial data of the binary neutron star: in Ref. <cit.>, a quasi-equilibrium configuration was employed as the initial data, whereas Ref. <cit.> set up the initial data by superposing boosted metrics of two spherical neutron stars. (ii) For the prompt black hole formation case, the MHD effects play a minor role for the symmetric binary. For the asymmetric binary, the MHD effects enhance the disk mass. However, since the employed grid resolution was not fine enough to capture the Kelvin-Helmholtz instability and the MRI, as the authors explicitly mentioned, the role of these instabilities was not explored. Also, since their simulation time was shorter than the magnetic winding timescale t_A, the role of the magnetic winding effect remains to be clarified.
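Since the works reviewed below are assessed largely through a few order-of-magnitude diagnostics, namely the fastest-growing MRI wavelength relative to the grid spacing and the winding timescale implied by the chosen pre-merger field, a small helper along the following lines reproduces those estimates. The inputs used in the example call are the fiducial values quoted earlier in this chapter and are purely illustrative.

```python
import numpy as np

# Order-of-magnitude MRI/winding diagnostics used throughout this review.
def mhd_diagnostics(B_p, rho, Omega, dx, R=1.0e6):
    """B_p [G], rho [g/cm^3], Omega [rad/s], dx and R [cm] (all CGS)."""
    v_A = B_p / np.sqrt(4.0 * np.pi * rho)     # Alfven speed [cm/s]
    lambda_mri = 2.0 * np.pi * v_A / Omega     # fastest-growing MRI wavelength
    q_mri = lambda_mri / dx                    # MRI quality factor Q_MRI
    t_wind = R / v_A                           # magnetic winding timescale [s]
    return lambda_mri, q_mri, t_wind

# Fiducial values: B_p = 10^15 G, rho = 10^15 g/cm^3, Omega = 8000 rad/s,
# and a grid spacing of 150 m = 1.5e4 cm.
lam, q, t_w = mhd_diagnostics(1.0e15, 1.0e15, 8.0e3, 1.5e4)
print(lam, q, t_w)
# lambda_MRI ~ 7e3 cm (cf. the ~8e3 cm estimate above), Q_MRI ~ 0.5, i.e.
# far below the critical value of ~10, and t_wind ~ 0.1 s, consistent with
# the scaling t_A ~ 100 s (B/10^12 G)^-1.
```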
Giacomazzo and his collaborators explored how the pre-merger magnetic field is imprinted in the inspiral gravitational waveforms <cit.>. They employed a grid resolution of Δ x≈ 350  m and varied the pre-merger magnetic field from 0 to 10^17  G. The neutron star was modeled by the Γ-law equation of state with Γ=2. Their conclusion is that the inspiral gravitational waves are essentially indistinguishable from those in the unmagnetized case unless the pre-merger magnetic field is as strong as 10^17  G. They also investigated the magnetic field amplification via the Kelvin-Helmholtz instability in the shear layer and concluded that the toroidal field is exponentially amplified until it becomes as strong as the poloidal field. In Ref. <cit.>, they extended their work by employing a grid resolution of Δ x = 221  m and varying the pre-merger magnetic field from 10^8 to 10^12  G. The equation of state is the same as that in their previous work. They also, for the first time, performed a convergence test by changing the grid resolution to Δ x=177  m and 354  m. They reported that the growth rate of the magnetic field due to the Kelvin-Helmholtz instability is ∼ 0.5  ms^-1 and that it does not significantly depend on the grid resolution. They argued that a potential reason for this is the shortness of the lifetime of the shear layer; in their particular model, it is ≈ 1  ms. Therefore, the efficiency of the Kelvin-Helmholtz instability remained a riddle. Also, since the employed resolution cannot capture the MRI (<ref>) and the lifetime of the remnant massive neutron star in their “high-mass binaries” is shorter than the magnetic winding timescale (<ref>), the MHD effects in binary neutron star mergers need to be clarified further. Rezzolla and his collaborators reported the emergence of a jet-like structure in Ref. <cit.>. They employed a pre-merger magnetic field of 10^12  G. The grid resolution was likely Δ x=221  m. Their model collapsed to a black hole after ≈ 7–8  ms, and subsequent massive torus formation occurred. They argued that the magnetic field grows exponentially inside the massive torus and that ordered magnetic field lines are generated by a non-linear dynamo. A jet-like structure is then produced, which can power short gamma-ray bursts. Although detailed discussions and analyses of the MRI and the dynamo mechanism were absent, the paper has the merit of showing that binary neutron star mergers with magnetic fields may explain some aspects of gamma-ray burst phenomenology. Giacomazzo and Perna reported stable magnetar formation in a binary neutron star merger <cit.>. They employed a pre-merger magnetic field of 10^12  G and a single grid resolution with Δ x =225  m. The NS was modeled with the Γ-law equation of state with Γ=2.75. Since they chose a model in which the total mass is below the maximum mass of the TOV star, the resultant merger remnant neutron star is stable. They found inefficient amplification via the Kelvin-Helmholtz instability, which is consistent with their previous finding <cit.>, and subsequent amplification via magnetic winding. Although the magnetic field did not reach the magnetar strength of ∼ 10^14–10^15  G by the end of the simulation, approximately 55  ms after the merger, they claimed that stable magnetar formation is possible because the magnetic winding and the unresolved MRI are expected to amplify the magnetic field further. Kiuchi and his collaborators reported magnetized binary neutron star merger simulations <cit.>.
They employed pre-merger magnetic fields of 10^14.5  G, 10^15  G, and 10^16  G. They performed an in-depth convergence study by employing grid resolutions of Δ x =70  m, 110  m, and 150  m. The neutron star was modeled with the H4 equation of state <cit.>; they employed the piecewise polytrope prescription for the cold part <cit.> and a Γ-law equation for the thermal part. The merger remnant is short-lived. Contrary to the previous works <cit.>, they found efficient magnetic-field amplification via the Kelvin-Helmholtz instability: the higher the grid resolution, the larger the growth rate of the magnetic-field energy. This is consistent with the property of the Kelvin-Helmholtz instability that shorter-wavelength modes have larger growth rates. However, even with their highest grid resolution, saturation of the magnetic field was not achieved (see Fig. <ref>). They also, for the first time, estimated the MRI quality factor of the non-axisymmetric mode in magnetized binary neutron star merger simulations. They confirmed that once the MRI quality factor exceeds the critical value of 10, the magnetic field increases exponentially inside the merger remnant (see Fig. 3 in Ref. <cit.>). They also reported that, even with their highest grid resolution, the MRI in a high-density region with ρ≥ 10^13  g cm^-3 cannot be captured because of the shortness of the MRI wavelength. They also reported the absence of jet launching until the end of the simulation, ≈ 50  ms after the black hole formation. Giacomazzo and his collaborators reported new magnetized binary neutron star merger simulations employing a sub-grid model <cit.>. The pre-merger magnetic field is ∼ 2.5× 10^12  G, and the grid resolutions are ≈ 180  m, 220  m, 300  m, and 360  m. The neutron star was modeled with the Γ-law equation of state with Γ=2. They implemented a sub-grid model by adding a new term E_ subgrid to the induction equation: ∂_t A = - E_ ideal - E_ subgrid, where A is the vector potential and E_ ideal is the electric field in ideal MHD. By assuming a closure relation between E_ subgrid and A, E_ subgrid = - S_ subgrid A, where S_ subgrid is a parameter, they reproduced an exponential amplification at the merger. To mimic the Kelvin-Helmholtz instability, i.e., the transfer of turbulent kinetic energy in the shear layer to magnetic-field energy, and to suppress unphysical behavior near the stellar surface, they introduced four parameters in S_ subgrid. Two of them are proportional to a fraction of the fluid vorticity ∇×v and are calibrated by high-resolution special relativistic local-box simulations of MHD turbulence <cit.>. They performed a convergence study and concluded that the magnetic field saturates at ∼ 10^51  erg. However, whether this saturation is physical remains an open question, because it is controlled by S_ subgrid, which is calibrated only with special relativistic MHD. Furthermore, their sub-grid model violates energy conservation. Also, their closure relation needs to be justified. Dionysopoulou and her collaborators reported resistive MHD simulations of binary neutron star mergers for the first time <cit.>. They proposed a simplified model of the electrical conductivity σ_ con: [The original notation for the electrical conductivity is σ, but to avoid confusion with the magnetization parameter σ, we use a different notation in this review.]
σ_ con=σ_0 max[1-2/(1+exp[2D_ tol(D-D_ rel)/D_ atm]),0], where D=ρ w is a conserved mass density. D_ tol, D_ rel, and D_ atm are chosen such that the electrical conductivity has a smooth transition from high conductivity, i.e., ideal MHD, in a high-density region to zero conductivity in a low-density region. Specifically, they chose D_ tol=0.01, D_ rel=100ρ_ atm, D_ atm=ρ_ atm, and σ_0=2× 10^11  s^-1. ρ_ atm is the density of a tenuous atmosphere outside the neutron star and is set to 6.17× 10^6  g cm^-3. The pre-merger magnetic field is 1.97× 10^12  G. The employed grid resolution is Δ x =148  m. The neutron star was modeled with the Γ-law equation of state with Γ=2. Their main finding is that the magnetic winding inside the remnant massive neutron star becomes less efficient in the resistive MHD case than in the ideal MHD case because the magnetic field is not perfectly frozen into the fluid elements. As a result, the lifetime of the remnant massive neutron star becomes ∼ 2  ms longer in the resistive MHD case, which results in the formation of a more massive torus after the remnant collapses into a black hole. However, these resistive effects need further clarification because the magnetic braking timescale with ∼ 10^12  G is much longer than the lifetime of their particular model. Palenzuela and his collaborators performed magnetized binary neutron star merger simulations incorporating neutrino physics for the first time <cit.>. They implemented finite-temperature nuclear-matter equations of state, SFHo <cit.>, DD2 <cit.>, and NL3 <cit.>, and a neutrino leakage scheme to take into account the neutrino cooling <cit.>. The former results in a short-lived merger remnant, while the latter two result in a long-lived merger remnant. They also employed a sub-grid model similar to Eqs. (<ref>)–(<ref>) to capture the unresolved Kelvin-Helmholtz instability. The pre-merger magnetic field strength is 10^13  G, and the grid resolution is Δ x=230  m. Their findings are summarized as follows: (i) The magnetic field is exponentially amplified up to a saturation energy of ∼ 10^50  erg. (ii) The post-merger evolution of the remnant massive neutron star is not sensitive to the amplified magnetic field, at least within a timescale of ≈ 10  ms after the merger. (iii) The angular momentum transport seems to be facilitated by the magnetic braking due to the amplified magnetic field. However, there are a couple of caveats. The first is that the sub-grid term breaks the divergence-free condition, since their MHD implementation is based on “B,” not on “A.” The second is that the saturation energy of the magnetic field is very sensitive to the sub-grid implementation, since it is one order of magnitude smaller than that reported in Ref. <cit.>. The third is that the role of the MRI turbulence in the angular momentum transport needs further clarification, since they did not analyze the MRI. Kiuchi and his collaborators reported new magnetized binary neutron star merger simulations <cit.>. The pre-merger magnetic fields are 10^13  G, 10^14  G, and 10^15  G. The employed grid resolutions are Δ x=17.5  m, 27.5  m, and 37.5  m for the convergence test. It should be noted that the highest-resolution mesh refinement domains are assigned only to the region where the shear layer appears; the entire neutron stars in the pre-merger phase were covered with 70  m, 110  m, and 150  m, respectively. The NS model is the same as that in Ref. <cit.>.
They measured the growth rate of the magnetic field amplification due to the Kelvin-Helmholtz instability, and it is approximately proportional to the inverse of the employed grid resolution (see Fig <ref>). They also explored the saturation energy of the magnetic field by varying the pre-merger magnetic field. The back reaction due to the amplified magnetic field is likely to start once the magnetic-field energy reaches ∼ 10^49  erg, and it is likely to saturate at ∼ 10^50  erg (see Fig <ref>). However, the energy spectrum analysis indicated that only a small fraction of ∼ 0.5–1 % of the turbulent kinetic energy is transferred to the magnetic field energy. Also, there is no prominent sign of the angular momentum transport at ≈ 5  ms after the merger because, for efficient magnetic braking, a large-scale poloidal magnetic field is necessary. It implies that (i) the physical saturation of the magnetic field energy could still be far from that reported in the simulations, and (ii) the Kelvin-Helmholtz instability only produces the small-scale field. The MRI analysis was missed in their work because of the shortness of the simulation, i.e., 5  ms after the merger. Ruiz and his collaborators performed magnetized binary neutron star merger simulations <cit.>. They employed Δ x=152  m and 227  m. The neutron star is modeled by the Γ-law equation of state with Γ=2. The merger remnant is short-lived. The pre-merger magnetic field strength was unclear because they only mentioned the field strength at pole B_pole=1.75× 10^15  G. However, from Fig. 5 in Ref. <cit.> reported by the same authors with a similar set-up except for the neutron star spin, the pre-merger magnetic field is guessed to be ≈ 10^16.5–10^17  G (see also Fig. <ref>). In addition to the confined pre-merger magnetic field configuration, they employed a pulsar-like configuration: A_ϕ=π r_0^2 I_0 ϖ^2/(r_0^2+r^2)^3/2(1+15r_0^2(r_0^2+ϖ^2)/8(r_0^2+r^2)^2), where r_0 is the current loop radius and I_0 is the loop current, r^2=(x-x_ NS)^2+(y-y_ NS)^2+z^2, and ϖ^2=(x-x_ NS)^2+(y-y_ NS)^2 where x_ NS and y_ NS denote the initial coordinates of the neutron star center of mass. Importantly, the neutron star exterior condition for the hydrodynamic variables is chosen such that the exterior plasma beta is 0.01. They found a mildly relativistic outflow, an incipient jet, from the merger remnant composed of the rapidly spinning black hole and massive torus. They reported that this is the case irrespective of the pre-merger magnetic field geometry. Since they mentioned the MRI quality factor in the remnant massive neutron star is greater than 10, Eq. (<ref>) and their grid-set up suggested B_p ≳ 3× 10^16  G. The previous works showed that it is impossible to reproduce the efficient Kelvin-Helmholtz instability with this resolution <cit.> unless the sub-grid model is employed. Therefore, the poloidal magnetic field inside the merger remnant could be a consequence of the pre-merger large-scale magnetic field. After the remnant neutron star collapsed into the black hole, the pole region evacuated, and the incipient jet was launched at ≈ 40  ms after the black hole formation. They argued the discrepancy from Ref. <cit.>, i.e., the absence/presence of the jet launching, may originate from the difference of the neutron star model. When the two neutron stars collide, the dynamical ejecta is driven by the shock heating and tidal force <cit.>. 
A part of the dynamical ejecta falls back onto the merger remnant, and the resultant ram pressure may overcome the magnetic pressure. However, the fallback timescale may be different in different binary neutron star models. Namely, the fallback matter in their model is smaller than that in Ref. <cit.>. In the latter model, it may take time to launch a relativistic jet. A quantitative comparison on the fallback timescale should be necessary among the different binary neutron star merger models. Also, the large-scale magnetic field generation mechanism should be investigated in detail. The final caveat is the magnetic winding timescale originating from the pre-merger large-scale field is as short as O(1)  ms (see Eq. (<ref>)), which is shorter than the lifetime of the remnant massive neutron star of O(10)  ms. Endrizzi and his collaborators reported a new series of magnetized binary neutron star merger simulations <cit.>. The pre-merger magnetic field is 10^13  G. The employed grid resolution is Δ x=222  m. The neutron star is modeled with the APR equation of state <cit.>. They employed a hybrid equation of state to take into account the thermal effect during the simulation. They simulated three cases: a high mass case, which results in a prompt black hole formation, and a symmetric and asymmetric low mass case, which results in a supramassive neutron star formation [The supramassive neutron star is defined as the mass below the uniformly rotating neutron star's maximum mass and above the TOV star's maximum mass.]. They reported an exponential amplification of the magnetic field energy after the merger in the two later cases (see Fig. 9 in Ref. <cit.>.) However, the density-weighted average of the magnetic field strength did not exhibit such an exponential growth (see Fig. 11 in Ref. <cit.>.) Also, the saturation energy of the magnetic field of 10^48–10^49  erg is much smaller than the rotational kinetic energy of ∼ 10^53  erg. As they reported, the Kelvin-Helmholtz instability and MRI are not resolved. Therefore, the detailed analysis is still necessary. Kawamura and his collaborators reported a systematic study of magnetized binary neutron star merger simulations <cit.>. The grid resolution is Δ x=222  m for the Γ-law equation of state with Γ=2 and Δ x =186  m for the H4 equation of state <cit.>. The pre-merger magnetic field is ∼ 10^12  G and explored the pre-merger magnetic field topology with the field aligned to the orbital rotation axis, aligned and anti-aligned field, and both anti-aligned field. They also performed the convergence test by changing the grid resolution Δ x=177  m and 277  m for the Γ-law equation of state and Δ x =150  m for the H4 equation of state. Their results with the standard grid set-up are summarized as follows: (i) For the Γ-law equation of state, the symmetric binary results in a short-lived remnant massive neutron star formation and subsequent massive torus formation after the remnant collapses into a black hole. The Kelvin-Helmholtz instability is inefficient, and the MRI cannot be resolved during the remnant massive neutron star phase due to their set-up for the pre-merger magnetic field and grid (see Eq. (<ref>)). Inside the massive torus, the magnetic field is amplified, but its density-weighted mean value saturated at ≈ 10^12  G. 
λ_ MRI in this phase is likely to be λ_ MRI = (B_ mean/√(4πρ)) 2π(GM_ BH/R^3_ disk)^-1/2≈ 10^3  cm B_ mean,12 ρ_12^-1/2 M_ BH,3^-1/2 R_ disk,6.7^3/2, where B_ mean,12=B_ mean/10^12  G, ρ_12=ρ/10^12  g cm^-3, R_ disk,6.7=R_ disk/5×10^6  cm, M_ BH,3=M_ BH/3M_⊙, and we assumed the Newtonian Keplerian angular velocity at the disk radius R_ disk. Therefore, the magnetic field amplification inside the torus is likely due to magnetic winding. However, the saturation energy of the magnetic field of ∼ 10^44  erg is far from equipartition with the rotational kinetic energy of ∼ 10^50  erg. Also, they reported that the magnetic field amplification in each phase is very sensitive to the grid resolution. For example, the magnetic field energy differs by six orders of magnitude at the end of the simulation between the high- and low-resolution runs. (ii) For the asymmetric binary case with the Γ-law equation of state, the merger remnant almost promptly collapses into a black hole, and a torus more massive than that in the symmetric binary is formed. However, the magnetic field amplification inside the massive torus is minor, i.e., only a factor of a few. It is counterintuitive because the rotational energy should be tapped to amplify the magnetic field energy via the MRI and magnetic winding, i.e., the more massive torus is expected to have a stronger magnetization. (iii) For the symmetric binary with the H4 equation of state, the magnetic field evolution until the black hole formation is qualitatively similar to the symmetric binary with the Γ-law equation of state. However, the magnetic field is not amplified inside the massive torus and stays at ∼ 10^44  erg until the end of the simulation. This is a striking difference between their result and Ref. <cit.>, which reported that the saturation energy is likely to be ∼ 10^49  erg (see also Fig. <ref>) for the same binary but a different pre-merger magnetic field and grid set-up. They performed the convergence study with Δ x=150  m and reported a significant growth of the magnetic field during the massive torus phase, presumably due to the non-axisymmetric MRI, which is not observed in their standard resolution. In the high-resolution run, the magnetic field energy is amplified up to ∼ 10^50  erg inside the torus at the end of the simulation. (iv) For the asymmetric binary with the H4 equation of state, they reported an exponential growth of the magnetic field during the remnant massive neutron star phase. However, given the density-weighted mean value of the magnetic field of ∼ 10^12–10^13  G (see Fig. 13 in Ref. <cit.>), the MRI is likely to be unresolved (see Eq. (<ref>)). As with Ref. <cit.>, the origin of the exponential growth is a riddle. It should be noted that the MRI quality factor estimated in Ref. <cit.> and this work is based not on the poloidal field strength, i.e., B_ p in Eq. (<ref>), but on the total magnetic field strength, which is most likely dominated by the toroidal component. In other words, they explored the non-axisymmetric MRI, not the axisymmetric MRI <cit.>. Since the axisymmetric MRI has a larger growth rate than the non-axisymmetric MRI <cit.>, and the poloidal field is generally weaker than the toroidal field, resolving the axisymmetric MRI is more challenging. Ciolfi and his collaborators reported new magnetized binary neutron star merger simulations, which leave a long-lived remnant <cit.>. They employed a grid resolution of Δ x=222  m, and 177  m for the convergence test.
The neutron star is modeled with the APR equation of state <cit.>, the MS1 equation of state <cit.>, and the H4 equation of state <cit.>. They explored symmetric and asymmetric binaries. The pre-merger magnetic field is ≈ 3× 10^15  G. They reported that after an inefficient Kelvin-Helmholtz instability amplification, the MRI is likely to play an active role in their models. However, there are a couple of caveats: (i) Since λ_ MRI is estimated not with the poloidal field strength but with the presumably dominant toroidal field strength, it seems to be overestimated, as they mentioned themselves. (ii) Since they did not quantify the MRI quality factor with a number, it is hard to judge the ability of their simulation setup to resolve the MRI. As shown in independent simulations, it seems to be hard to resolve the axisymmetric MRI with their setup <cit.> (see below). Ruiz and his collaborators reported magnetized binary neutron star mergers in a prompt black hole formation scenario in Ref. <cit.>. The grid resolution is Δ x≈ 140–150  m. The pre-merger magnetic field is the same as in their previous work <cit.>, i.e., likely to be 10^16.5–10^17  G (see also their Table I, which shows that the pre-merger magnetic field energy is as high as 10^50  erg). In addition to the confined pre-merger magnetic field configuration, they explored the dipole-like field configuration (see Eq. (<ref>)). They confirmed that the MRI is resolved inside the massive torus around a black hole and estimated the Shakura-Sunyaev parameter <cit.> for the first time. However, contrary to the remnant massive neutron star formation case <cit.>, they did not find a launching of an incipient jet irrespective of the mass asymmetry of the binary neutron star. Since the shocked component of the dynamical ejecta and the resultant fallback matter in the polar region should be negligible in the prompt black hole formation, the conclusion is puzzling compared to their previous work, i.e., less fallback matter is favorable for the launching of the incipient jet. The absence of a large-scale magnetic field inside the disk needs to be clarified. Kiuchi and his collaborators reported new magnetized binary neutron star merger simulations. The employed grid resolution is Δ x =12.5  m <cit.>. Similar to their previous work <cit.>, the high-resolution mesh refinement domains are assigned to the shear layer, and the entire binary neutron star before the merger is covered by Δ x=50  m. The pre-merger magnetic field is 10^15  G. The neutron star is modeled with the H4 equation of state <cit.>. They also performed the convergence study with Δ x=70  m and 110  m. They explored a less massive binary neutron star merger compared to that in Refs. <cit.>, resulting in a long-lived massive neutron star. They thoroughly assessed the MRI-driven turbulence in the non-linear phase with the axisymmetric and non-axisymmetric MRI quality factors, the ratio of the poloidal field energy to the toroidal field energy, the Maxwell stress, and the associated Shakura-Sunyaev parameters <cit.>. Their findings are summarized as follows: (i) The MRI quality factor, as well as the other diagnostics that quantify the ability of the simulation to sustain the MRI-driven turbulence, indicates that a resolution coarser than Δ x=70  m is insufficient in the region of ρ≥ 10^13  g cm^-3. Particularly, with Δ x =110  m, which is a “high" resolution in the numerical relativity community, the MRI-driven turbulence cannot be sustained in the region ρ≥ 10^12  g cm^-3 (see Table I in Ref. <cit.>).
It should be noted that the bulk of the rotational energy of the remnant massive neutron star is contained in such a high-density region. Simulations with an unresolved MRI could draw a physically incorrect picture unless the MRI is suppressed by a mechanism such as neutrino viscosity and drag. (ii) An ordered (large-scale) poloidal magnetic field is not established within their simulation timescale of ≈ 30  ms after the merger. It indicates that the magnetic braking timescale could be ≈ 0.7–0.8  s unless the large-scale poloidal field is established by a mechanism such as the αΩ dynamo. The absence of a large-scale field needs to be clarified. Ruiz and his collaborators placed an upper limit on the maximum mass of the TOV star by combining their simulation results <cit.> and the GW170817 observation associated with GRB170817A <cit.>. In addition to the results in Refs. <cit.>, they performed a simulation for a supramassive neutron star formation case, i.e., the lifetime of the merger remnant could be longer than the magnetic dipole radiation timescale. They concluded that the supramassive neutron star cannot launch a jet. Based on these results, their scenario is that the merger remnant in GW170817 is between the supramassive limit and the prompt collapse limit. Since we know the mass of the binary neutron star in GW170817, this scenario gives M^ sup_ max≈β M^ TOV_ max≤ M_GW170817≈ 2.74M_⊙≤α M^ TOV_ max, where α=1.3–1.7 determines the threshold mass for the prompt black hole formation <cit.>, M^ TOV_ max is the maximum mass of the TOV star, and M^ sup_ max is the maximum mass of the uniformly rotating star. A Rhoades-Ruffini causality argument on the equation of state gives β≈ 1.27 <cit.>. Equilibrium configurations of uniformly rotating neutron stars with various kinds of equations of state give β≈ 1.2 <cit.>. Therefore, the upper limit of the maximum mass of the TOV star, M_GW170817/β, is likely to be ≈ 2.16–2.28M_⊙. The caveat is that their upper limit estimation of the TOV maximum mass is based on the hypothesis that the supramassive neutron star cannot launch a relativistic jet. Ruiz and his collaborators explored the neutron star spin effect in magnetized binary neutron star mergers <cit.>. The grid resolution is Δ x =220  m. The neutron star is modeled with the Γ-law equation of state with Γ=2. The pre-merger magnetic field is likely to be 10^16.5–10^17  G with a pulsar-like dipole magnetic field topology (see Eq. (<ref>)). The non-dimensional neutron star spin χ_ NS is -0.05, 0.24, and 0.36 [The minus sign denotes a spin anti-aligned with the orbital angular momentum.]. It should be noted that the fastest spin observed in binary pulsars is χ_ NS∼ 0.05 <cit.>. They simulated both unmagnetized and magnetized binaries. Figure <ref> shows the magnetic field energy: the pre-merger magnetic field energy of ∼ 2–3× 10^50  erg is amplified by a factor of ≈ 10–15 at ≈ 2  ms after the merger. Subsequently, the magnetic field energy continues to be amplified to ∼ 10^52  erg until the black hole formation. They analyzed the MRI quality factor and the Shakura-Sunyaev parameter. They concluded that a magneto-turbulent state is established and, consequently, the angular momentum transport is facilitated. Figure <ref> shows the gravitational waves in all the models, which summarizes the unmagnetized and magnetized binary evolution. For the aligned, rapidly spinning cases with χ_ NS=0.36 and 0.24, on the one hand, the unmagnetized binaries do not collapse into a black hole until the end of the simulation.
On the other hand, the magnetized counterparts do. They argued that it indicates the MHD effect facilitates the angular momentum transport. For the non-spinning case with χ_ NS=0 or the anti-aligned low spinning case with χ_ NS=-0.05, the magnetized binaries survive longer than the unmagnetized counterparts. Their interpretation for this is due to the efficient dissipation of the kinetic energy through the magneto-turbulence; the enhanced thermal pressure supports the remnant massive neutron stars. Also, they reported the incipient jet launching after the black hole formation in the magnetized binaries irrespective of the neutron star spin. However, there are a couple of caveats: (i) With their grid set up and the figure showing Q_ MRI≥ 10 (see Fig. 9 in Ref. <cit.>), the poloidal magnetic field strength inside the remnant is estimated to be ≳ 10^16.3  G, which is comparable to the pre-merger magnetic field strength. The resultant winding timescale is O(1) ms (see Eq. (<ref>)). Therefore, the MHD effect initiated from the pre-merger field is not negligible. Also, the large-scale field generation mechanism from the small-scale MRI-amplified field needs to be clarified more. (ii) The estimated MRI-turbulent viscous timescale scale of ∼ 10  ms (see Eq. (4) in Ref. <cit.>) is ten times shorter than that in Ref. <cit.> (see the second and third column in Table I in Ref. <cit.>). Both works estimated the Shakura-Sunyaev parameter as ∼ 10^-2. Strictly speaking, the binary neutron star model is different. However, it is unlikely to make an order-of-magnitude difference in the viscous timescale. The discrepancy needs to be clarified more. (iii) Their argument on the efficient dissipation of the kinetic energy through the MRI-driven turbulence in χ_ NS=0 and -0.05 models compared to χ_ NS=0.36 and 0.24 models may not be supported by their estimated Shakura-Sunyaev parameters because it does not significantly differ among the models (see Table III in Ref. <cit.>). Note that the viscous heating is proportional to the viscous parameter. Ciolfi and his collaborators reported for the first time a 100  ms simulation for long-lived magnetized binary neutron star merger remnant <cit.>. The pre-merger magnetic field is 10^16  G. The grid resolution is Δ x=220  m. The neutron star is modeled with the APR equation of state <cit.>. They claimed that after an inefficient Kelvin-Helmholtz instability amplification, the MRI starts to be resolved, and the magnetic field energy is amplified up to ∼ 10^51  erg. Also, they reported there is no sign of the jet launching at the end of the simulation of ≈ 100  ms, and they concluded the magnetar scenario for the short gamma-ray bursts is unlikely, or it could launch a jet much later than 100  ms. However, there are a couple of caveats: (i) As in Ref. <cit.>, the magnetic winding initiated from the pre-merger magnetic field is not negligible. (ii) With their grid setup and pre-merger magnetic field, it is hard to resolve the MRI, as they mentioned. Particularly, the increase of the MRI quality factor is likely caused by the magnetic winding, i.e., the MRI quality factor is for the toroidal field as they did previously in Refs. <cit.>. Also, Ref. <cit.> suggested the axisymmetric MRI in the high-density region with ρ≥ 10^13  g cm^-3 can not be resolved with their setup (see Table I in Ref. <cit.> with the factor of five boosts, i.e., the pre-merger magnetic field ten times larger and the grid resolution twice coarser compared to the run with Δ x = 110  m). 
(iii) The angular momentum transport due to the magnetic field needs to be clarified more as they mentioned because the angular velocity evolution, particularly in the core of the merger remnant, is similar to the unmagnetized case. Ruiz and his collaborators explored the magnetic-field topology in binary neutron star mergers <cit.>. The pre-merger magnetic field is an aligned dipole-like configuration to the orbital angular momentum and a perpendicular dipole-like configuration with a field strength of ≈ 10^16.5–10^17  G (see Eq. (<ref>)). The grid resolution is Δ x =220  m. The neutron star is modeled with the Γ-law equation of state with Γ=2. The neutron star has a spin of χ_ NS=0.36. Compared to the aligned–aligned pre-merger magnetic configuration in Ref. <cit.>, the aligned-perpendicular pre-merger magnetic configuration results in a longer lifetime of the remnant massive neutron star. They pointed out that it is caused by an inefficient angular momentum transport due to the MRI-driven turbulence because the Shakura-Sunyaev parameter is smaller than that in the aligned-aligned case. After the black hole formation, the incipient jet is launched as in the aligned-aligned case because the region with the magnetization parameter greater than 10 appears above the black hole. For the perpendicular-perpendicular case, the lifetime of the remnant massive neutron star is longer than the aligned-aligned or aligned-perpendicular case due to the inefficient angular momentum transport. Consequently, the torus mass formed after the remnant massive collapses into a black hole is larger than that in the other cases. Also, the magnetic field energy at the collapse into the black hole is ten times smaller than the aligned-perpendicular case. They mentioned that the MRI quality factor is below 6 in the perpendicular-perpendicular case, and the resultant turbulent viscosity quantified by the Shakura-Sunyaev parameter is small. They ran a higher resolution simulation, but the result is likely to be less conclusive because improving the grid resolution by only 25% is not enough to capture the MRI. After the black hole formation, they did not find an incipient jet launch in the perpendicular-perpendicular model, even though it has a larger torus mass compared to the aligned-aligned case and aligned-perpendicular case. They concluded that it is consistent with the prompt collapse case, which suggests the magnetic field energy should be more significant than 0.1% of the initial ADM mass at the formation of the black hole to launch the incipient jet. However, in addition to the caveats raised above, there is a caveat to explore the pre-merger magnetic field topology. As demonstrated in this work, it is more challenging to reproduce the MRI-driven turbulence in the perpendicular-perpendicular case, which is likely to be natural since the aligned case is preferable to resolve the MRI, i.e., λ_ MRI∝ B_z. It implies that it may not be a fair comparison for the different pre-merger magnetic field topologies with the same grid set up because the inefficient development of the MRI-driven turbulence may be merely a consequence of the numerics. Ciolfi reported a long-term magnetized binary neutron star merger simulation up to 250  ms after the merger for the first time <cit.>. The pre-merger magnetic field is 10^15  G and 5× 10^15  G. The employed grid resolution is Δ x=250  m. The neutron star is modeled by the APR equation of state <cit.>. 
His findings are summarized as follows: (i) A magnetically collimated outflow appears in the strongly magnetized case but not in the weakly magnetized case. (ii) The outflow is mildly relativistic with ≲ 0.3  c, and the kinetic energy of the outflow within an opening angle of 15^∘ is ≈ 3× 10^49  erg. Therefore, except for a very optimistic scenario, the system is not likely to drive a relativistic jet compatible with GRB 170817A <cit.>. There are a couple of caveats: (i) The subtle role of the pre-merger magnetic field needs to be clarified further, particularly by conducting a detailed analysis of the MRI. (ii) The large-scale magnetic field generation inside the merger remnant must also be clarified further. Since the simulation set-up cannot resolve the MRI, the magnetic winding due to the pre-merger magnetic field relic could be the outflow generation mechanism. Mösta and his collaborators performed magnetized binary neutron star merger remnant simulations with neutrino physics <cit.>. First, they performed an unmagnetized binary neutron star merger simulation. Then, they embedded a dipole-like seed poloidal magnetic field of 10^15  G inside the remnant massive neutron star, which is short-lived. They took into account the neutrino cooling by the leakage scheme and the neutrino heating phenomenologically. They performed an in-depth convergence test with Δ x=55  m, 110  m, and 220  m. The neutron star is modeled with the LS220 equation of state <cit.>. They also explored the binary merger remnant evolution without neutrinos. Their findings are summarized as follows: (i) The highest resolution is fine enough to capture the MRI, and the toroidal magnetic field amplification is significantly enhanced compared to the low-resolution run. The enhanced toroidal field drives an MHD wind with a faster velocity than in the non-magnetized case, in which the wind is neutrino-driven. (ii) The neutrino cooling helps mitigate the polar region's baryon pollution and aids the launch of a jet with a terminal Lorentz factor of ∼ 5. (iii) The mass ejection by the MHD effect increases with the grid resolution by a factor of three. As they mentioned, the caveat is that they assumed a dynamo activity to build up the large-scale poloidal field inside the remnant massive neutron star. Aguilera-Miret and his collaborators performed magnetized binary neutron star merger simulations with a sub-grid model <cit.>. Their sub-grid model is based on the gradient sub-grid scale model in which filtering with a Gaussian kernel is applied to each physical quantity <cit.>. This prescription gives additional terms in the physical fluxes of the MHD equations: τ^i_N=- C_N ξ H^i_N (the flux in the continuity equation), τ^ij_T=-C_T ξ H^ij_T (the flux in the momentum equation), τ^i_M= - C_M ξ H^i_M (the flux in the induction equation), where ξ=γ^1/3Δ x^2/24. C_N, C_T, and C_M are called pre-factors, which should be calibrated by a direct simulation. The values of the pre-factors with C_N=C_T=C_M=1 follow from the analytical calculation of the large eddy simulation with the gradient sub-grid scale model. A priori tests indicate that, practically, they might slightly vary with the grid resolution and other parameters, but they usually remain between 1–2 <cit.>. However, the numerical dissipation inherent to the employed Riemann solver with the high-order cell reconstruction at the small scale strongly attenuates the effect of the large eddy simulation at intermediate resolutions. Practically, the authors balanced this attenuation by artificially increasing the pre-factors.
H^i_N, H^ij_T, and H^i_M are functions of the filtered fields, and the cumbersome expressions are found in Ref. <cit.>. The pre-merger magnetic field is 5× 10^11  G, which is approximately consistent with the highly magnetized end of the binary pulsars <cit.>. They employed three grid resolutions of Δ x=37  m, 74  m, and 147  m in direct simulations, i.e., without the sub-grid terms. Then, they compared them with simulations with their sub-grid model while keeping the grid resolution Δ x=147  m and changing the value of C_N, C_T, and C_M. The neutron star is modeled with the SLy equation of state <cit.>. Figure <ref> summarizes their results. In the direct simulation with Δ x=37  m, the magnetic-field energy is amplified from ∼ 10^44  erg up to ∼ 10^50  erg due to the Kelvin-Helmholtz instability within ≈ 10  ms after the merger. The growth rate of the amplification is determined by the employed grid resolution, which is consistent with the Kelvin-Helmholtz instability property. They also compared the energy spectrum for the turbulent kinetic energy and magnetic field energy (see Fig. <ref>). They concluded that the sub-grid model with (C_M,C_N,C_T)=(8,0,0) can capture more efficient magnetic field amplification than the direct simulation counterpart until 5  ms after the merger. This sub-grid setup is also able to mimic the magnetic-field energy amplification of the direct simulation with Δ x=74  m at 10  ms after the merger. Also, they reported how sensitive the remnant massive neutron star evolution is to the choice of the pre-factor. If they choose (C_M,C_N,C_T)=(8,1,1), (8,2,2), (8,4,4), or (8,8,8), the Kelvin-Helmholtz amplification is less efficient than (C_M,C_N,C_T)=(8,0,0). More importantly, it triggers the collapse of a black hole, which is not observed in the direct simulations. They concluded that the choice of the non-zero value of C_N and C_T, particularly the latter, facilitates the angular momentum transport too efficiently, which results in a spuriously early black hole formation. Besides the subtleness of the choice of the pre-factor, there are a couple of caveats: (i) it is not clear which term(s) in H^i_M in Eq. (<ref>) leads the efficient amplification during the Kelvin-Helmholtz instability phase. (ii) The magnetic field energy shown in Fig. <ref> may not be consistent with their pre-merger magnetic field since it should be ∼ 10^40  erg B_11.7^2 R_6^3 where B_11.7=B/10^11.7 G≈ B/(5× 10^11 G) and R_6=R/10^6 cm. (iii) The scale of the energy spectrum in Fig. <ref> may not be consistent with their grid setup. The highest wavenumber of k≈ 10^-3  cm^-1 is the spatial scale of 60  m, but their employed grid resolution is Δ x=147  m. Ruiz and his collaborators reported new magnetized binary neutron star merger simulations <cit.>. The pre-merger magnetic field is the pulsar-like dipole field and confined dipole field with presumably ≈ 10^16.5–10^17  G. The employed grid resolution is Δ x=90  m–110  m. The neutron star is modeled with the SLy equation of sate <cit.> and H4 equation of state <cit.>. They explored the prompt black hole formation case and delayed collapse case. For the latter, the collapse time is ≈ 10–40  ms after the merger. Their findings are qualitatively the same as their series works <cit.>. An incipient jet was not found for the prompt collapse case, irrespective of the model. For the delayed collapse case, the magnetic field energy is amplified to ∼ 10^51  erg during the Kelvin-Helmholtz instability phase. 
In some models, it is furthermore amplified up to ∼ 10^52  erg until the black hole formation. The amplification highly depends on the magnetic field topology. Except for one model (H4M2.8I in their terminology), which is the same as in Ref. <cit.> other than the pre-merger magnetic field strength and grid setup, they found a launching incipient jet. Besides the caveats mentioned above, one notable difference from the simulations in Ref. <cit.>, in which a much weaker pre-merger magnetic field is assumed, is the saturation energy of the magnetic field ∼ 10^50  erg at the end of the Kelvin-Helmholtz instability phase. Palenzuela and his collaborators reported new magnetized binary neutron star simulations with the gradient sub-grid scale model <cit.>. The pre-merger magnetic field is 5× 10^11  G. They performed an in-depth resolution study with Δ x=30  m, 60  m, and 120  m with and without the sub-grid model. The neutron star is modeled with the APR equation of state <cit.>. The pre-factor is chosen (C_M,C_N,C_T)=(8,0,0). Their findings are summarized as follows: (i) With the sub-grid model, the highest and middle-resolution runs indicate the convergence to B̅_ tor≈ 4–5× 10^15  G and B̅_ pol≈ 5–6× 10^15  G at the end of the Kelvin-Helmholtz instability phase in the core region defined by ρ≥ 10^13  g cm^-3. It is likely to be consistent with the result in the highest direct simulation. Such a trend is also observed as B̅_ tor≈B̅_ pol≈ 4–5× 10^14  G in the envelope region defined by 10^10  g cm^-3≤ρ≤ 10^13  g cm^-3. (ii) The spectrum analysis suggests the kinetic turbulent energy spectrum follows the Kolmogorov power law with ∝ k^-5/3 and the magnetic energy spectrum follows the Kazantsev power law with ∝ k^3/2. The inverse cascade is likely to occur, which results in the increase of the coherent length scale of the magnetic field from ≈ 0.7  km to 2  km at the end of the simulation of 50  ms after the merger. (iii) During the first 50  ms at least, the efficient angular momentum redistribution is not likely to be facilitated, implying the MRI is not operating because there is no static, large-scale background field over which we can define an unstable perturbation of the MRI. However, there are a couple of caveats: (i) They employed a cut-off density of 6× 10^13  g cm^-3 below which the sub-grid term turns off. This cut-off density seems to be significant, and the effect seems not to be negligible for the evolution of both the remnant core and envelope. (ii) Since the highest direct simulation is likely to agree with the converged sub-grid simulation, it implies that the direct simulation looks like entering the convergent regime. It should be confirmed in a higher-resolution direct simulation. Otherwise, we cannot conclude that the saturation magnetic field energy is physical or a consequence of the sub-grid prescription. (iii) The role of the MRI should be explored in a more extended time-scale simulation because their simulation suggests the onset of the magnetic winding. Particularly, several diagnostics to explore the MRI in the non-linear phase proposed in Ref. <cit.> should be investigated (see also Ref. <cit.>). (iv) Since the high resolution of Δ x=60  m still needed for the convergence in the simulation with the sub-grid model, there might be a “double" counting for the turbulence, i.e., one developed by the direct simulation and the other by the sub-grid model which mimics the direct simulation. It is necessary to quantify how such an artifact would affect the convergent property. 
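Since the resolvability of the axisymmetric MRI recurs throughout this section, it may help to make the Newtonian order-of-magnitude estimate quoted earlier in this section (for a torus around a ∼3M_⊙ black hole) concrete. The short Python sketch below is illustrative only: the function name is chosen here, while the field strength, density, black hole mass, and disk radius are the representative values used in that estimate, and the grid spacings are ones mentioned in this review.

import math

# Representative CGS numbers from the Newtonian lambda_MRI estimate quoted
# earlier in this review; purely order-of-magnitude and illustrative.
G    = 6.674e-8             # cm^3 g^-1 s^-2
Msun = 1.989e33             # g

def lambda_mri(B, rho, M_bh, R_disk):
    """Fastest-growing axisymmetric MRI wavelength, lambda = 2*pi*v_A/Omega."""
    v_alfven = B / math.sqrt(4.0 * math.pi * rho)      # cm s^-1
    Omega    = math.sqrt(G * M_bh / R_disk**3)         # Keplerian, s^-1
    return 2.0 * math.pi * v_alfven / Omega            # cm

B, rho, M_bh, R_disk = 1.0e12, 1.0e12, 3.0 * Msun, 5.0e6
lam = lambda_mri(B, rho, M_bh, R_disk)

for dx_m in (222.0, 70.0, 12.5):                       # grid spacings in metres
    dx_cm = dx_m * 1.0e2
    print(f"dx = {dx_m:6.1f} m : lambda_MRI = {lam:8.2e} cm, "
          f"Q_MRI = lambda/dx = {lam / dx_cm:6.3f}")

For a mean field of 10^12 G the fastest-growing wavelength is only ∼10 m, so Q_ MRI stays far below the critical value of ∼10 even for the finest grid spacings quoted in this review; this is why, unless the field is much stronger, the toroidal-field growth in such tori is usually attributed to magnetic winding rather than to the axisymmetric MRI.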
Aguilera-Miret and his collaborators reported new magnetized binary neutron star merger simulations with the gradient sub-grid scale model <cit.>. The pre-merger magnetic field is 10^12  G with an aligned dipole, misaligned dipole, and multipole topology. They also explored a strong magnetic field case with 10^15  G. The employed grid resolution is Δ x =60  m. The neutron star is modeled by the APR equation of state <cit.>. The pre-factor is chosen (C_M,C_N,C_T)=(8,0,0). Their findings are summarized as follows: (i) At the end of the Kelvin-Helmholtz instability, the averaged magnetic field strength in the core region converges within a factor of three for the toroidal component and two for the poloidal component irrespective of the pre-merger field topology and field strength. Consequently, the magnetic field energy saturates within a factor of three, and the strong pre-merger magnetic field results in a similar saturation level. (ii) The evolution of the energy spectrum is insensitive to the pre-merger magnetic field topology. The coherent length of the magnetic field evolves from ∼ 0.8  km to 2  km at the end of the simulation of 30  ms after the merger. Therefore, the pre-merger field topology memory is likely to be lost during the merger, implying the universality of their result. A caveat is that they only consider the purely poloidal magnetic field, implying zero magnetic helicity. However, the magnetic field configuration after the merger remnant settles down to the quasi-equilibrium state could depend on the net magnetic field helicity because the magnetic helicity conserved in the ideal MHD framework links to the field topology <cit.>. Sun and his collaborators reported new magnetized binary neutron star merger simulations with neutrino physics <cit.>. The pre-merger magnetic field is a dipole-like field with presumably ≈ 10^16.5–10^17  G (see Eq. (<ref>)). The employed grid resolution is Δ x=110  m. The neutron star is modeled with the SLy equation of states <cit.>. During the simulation, they employed a piecewise polytrope equation of state for the cold part combined with the analytic expression of the thermal part. They also employ the M1 scheme for the neutrino radiation field. Their finding is qualitatively similar to what they did in the past <cit.>: an incipient jet launching for the delayed collapse case irrespective of the neutrino effect. They estimated the neutrino viscous and drag effect on the MRI for the first time in the magnetized binary neutron star merger simulation and reported the effect is not significant. They also reported that neutrino cooling mitigates the baryon-loading in the funnel region above the black hole as reported in Ref. <cit.>. The angular momentum loss due to the neutrino emission is minor during their simulation time, ≈ 15  ms after the black hole formation. The caveat, except for those in their series works, is that the luminosity for the heavy-lepton neutrino suddenly increases up to ∼ 10^53  erg s^-1 at ≈ 5  ms after the black hole formation, which seems not to be consistent with the other neutrino radiation transfer simulation of binary neutron star mergers <cit.>. Palenzuela and his collaborators reported magnetized binary neutron star merger simulations with neutrino physics and the sub-grid model <cit.>. They implemented a finite temperature nuclear theory-based equation of states HS <cit.> and LS220 <cit.> and neutrino leakage scheme <cit.>. The pre-merger magnetic field is 10^11  G. They employed a grid resolution of Δ x=187  m. 
The pre-factor in the sub-grid model is set to be (C_M,C_N,C_T)=(0.5,0,0). They concluded that although they found an efficient amplification of the magnetic field from 10^11  G to ∼ 10^14  G for ∼ 8  ms after the merger in the sub-grid model simulation the magnetic field hardly alters the neutrino emission such as the neutrino luminosity. The caveat is that their result on the neutrino emission is not likely conclusive because (i) the previous work showed that at least the grid resolution of Δ x=60  m is necessary to obtain the saturated magnetic field of ∼ 10^16  G in the sub-grid model run with (C_M,C_N,C_T)=(8,0,0) <cit.> and (ii) the simulation timescale is not long enough to explore the MHD effect on the neutrino radiation. Kiuchi and his collaborators reported a new implementation for advanced HLLD Riemann solver <cit.> and applied it to a magnetized binary neutron star merger with neutrino physics <cit.>. In the context of the accretion disk, it has been well known the HLL(E) Riemann solver [It is known that HLL(E) and Local-Lax-Friedrich (LLF) Riemann solver <cit.> give the essentially same result.] commonly used in the numerical relativity community is very diffusive <cit.>. They explored how the less diffusive HLLD Riemann solver affects the post-merger evolution. They embedded a large-scale poloidal magnetic field of 10^15  G inside a massive torus formed after a short-lived remnant massive neutron star collapses into a black hole. The employed grid resolution is Δ x=150  m. The neutron star is modeled with the SFHo equation of state <cit.>. Their findings are summarized as follows: (i) The small-scale magneto-turbulence due to the MRI is not able to be sustained if we employ the HLLE solver because of inherently large numerical diffusivity. (ii) As a result, the large-scale magnetic field of ∼ 10  km is artificially enhanced in the simulation with the HLLE Riemann solver compared to that with the HLLD Riemann solver (see Fig. 26 in Ref. <cit.>). (iii) More energetic (but spurious) Poynting flux-dominated outflow is launched in the simulation with the HLLE Riemann solver compared to that with the HLLD Riemann solver (see Fig. 27 in Ref. <cit.>). The caveat is that they only implemented the third-order piecewise parabolic method for the reconstruction in the Riemann problem <cit.>. Therefore, they did not quantify how the HLLE Riemann solver with a higher-order reconstruction such as the MP5 <cit.> would reduce the numerical diffusion in binary neutron star merger simulations. Combi and Siegel reported a new magnetized binary neutron star merger simulations with neutrino physics <cit.>. They employed finite temperature nuclear equation of states, LS220, SFHo, and APR <cit.> and a zeroth-momentum (M0) scheme based on a ray-by-ray for the neutrino radiation transport <cit.>. The pre-merger magnetic field is 5× 10^15  G. The employed grid resolution is Δ x=180  m and 220  m. They reported that the magnetic field does not significantly impact the dynamical ejecta. They also reported that the MRI is fully developed inside the remnant massive neutron star and facilitates the angular momentum transport. The caveat is as follows: (i) Many previous works reported the efficient Kelvin-Helmholtz instability and fully resolving MRI are challenging with their setup <cit.>. (ii) Because of the shortness of their simulation timescale of ≈ 10  ms, the MHD effect, particularly the role of the angular momentum transport inside the remnant, needs to be clarified. 
Note that the MRI-driven viscous timescale or the magnetic winding timescale is longer than their simulation time. Also, a detailed analysis of the MRI is necessary. De Haas and his collaborators reported magnetized binary neutron star merger simulations with neutrino physics <cit.>, a follow-up work of Ref. <cit.>. They added a dipole-like large-scale magnetic field to a remnant massive neutron star: A_r=A_θ=0, A_ϕ=B_0 sinθ r_ falloff^3/(r_ falloff^3+r^3), and they varied B_0=10^13  G, 10^14  G, 10^15  G, 5×10^15  G and r_ falloff=5  km, 10  km, 15  km, 20  km. The employed grid resolution is Δ x=185  m. The neutron star is modeled with the LS220 equation of state <cit.>. They employed the leakage scheme for the neutrino cooling <cit.> and a parameterized prescription for the neutrino heating. They found that in the two simulations with (B_0,r_ falloff)=(10^15  G,20  km) and (5× 10^15  G,10  km), a jet is launched and the velocity distribution of the ejecta has a fast component with 0.4–0.6  c. The caveats are that (i) the role of the MRI, particularly in the low magnetic field cases, needs to be clarified further, and (ii), as they mentioned, the large-scale field inside the merger remnant is an assumption. Kiuchi and his collaborators reported new magnetized binary neutron star merger simulations with neutrino physics <cit.>. The pre-merger magnetic field is 10^15  G. The employed grid resolution is Δ x=150  m, with 200  m for the convergence test. The neutron star is modeled with the SFHo equation of state <cit.>. They performed a simulation up to one second after the merger (see Fig. <ref> for the final snapshot on a meridional slice). Their findings are summarized as follows: (i) After the short-lived massive neutron star collapses into a black hole, the magnetic field inside the massive torus is amplified by the axisymmetric MRI and magnetic winding. As a result, at ∼ 0.1  s after the merger, a fully turbulent state due to the MRI is established, and the turbulent viscosity with a Shakura-Sunyaev parameter of 0.01–0.03 facilitates the angular momentum transport. The turbulent state is sustained by the MRI dynamo, as evidenced by the butterfly diagram lasting until the end of the simulation (see Fig. 3 in Ref. <cit.>). (ii) Due to the angular momentum transport and turbulent viscous heating, the torus expands, and the temperature drops. As a result, the neutrino cooling becomes inefficient. (iii) Finally, the post-merger mass ejection due to the MRI-driven turbulent viscosity sets in, and a mass of ≈ 8× 10^-3M_⊙ is ejected from the torus (see Fig. <ref> for the detailed properties of the ejecta). (iv) Jet launching is not observed until the end of the simulation, ≈ 1  s after the merger. (v) The convergence in terms of the grid resolution is almost achieved, implying the simulation quality could be good enough to compare to observational data such as AT 2017gfo. The caveats are summarized as follows: (i) Since, with their setup, it is impossible to resolve an efficient Kelvin-Helmholtz instability and the axisymmetric MRI inside the short-lived remnant massive neutron star, whose lifetime is ≈ 17  ms, there is a possibility that the large-scale magnetic field is built up before the black hole formation.
(ii) Although the fallback motion lasts and the resultant ram pressure overcomes the magnetic pressure until the end of the simulation at ≈ 1  s, the fallback timescale could depend on the binary neutron star model because other works employing a different binary neutron star model reported that the fallback motion ceases at O(0.01)  s after the black hole formation (see Ref. <cit.> for example). (iii) The absence of jet launching could be caused by a spurious spin-down of the black hole due to an insufficient grid resolution. The resolution study with the Cowling approximation suggested this is unlikely, though. Chabanov and his collaborators explored the pre-merger magnetic field topology in magnetized binary neutron star mergers <cit.>. The pre-merger magnetic field is a traditional confined dipole field with ∼ 10^14  G and a “crustal" field with ∼ 2× 10^14  G. The employed grid resolutions are Δ x=70  m and 105  m for the convergence test. The neutron star is modeled with the TNTYST equation of state <cit.>. Their findings are summarized as follows: (i) The amplification process comprises the Kelvin-Helmholtz instability phase, the subsequent decay phase, and the subsequent turbulent phase. (ii) At the end of the Kelvin-Helmholtz instability phase, the magnetic field energy for the “crustal" configuration is smaller by a factor of a few than for the confined configuration. At the end of the turbulent phase, the former is smaller by one order of magnitude than the latter. They concluded that the “crustal" configuration leads to an inefficient Kelvin-Helmholtz instability compared to the confined configuration widely used in magnetized binary neutron star mergers. The caveat is that the saturation energy of the magnetic field for the “crustal" configuration needs to be explored more extensively since the high-resolution simulation shows a significant growth rate, implying that the physical saturation could be far from what they reported. Most and Quataert reported new magnetized binary neutron star merger simulations with neutrino physics and a sub-grid model to reproduce a large-scale dynamo <cit.>. Specifically, their sub-grid model is an α dynamo described by e^μ = κ b^μ, where e^μ and b^μ are the electric and magnetic fields in the fluid comoving frame. In the non-relativistic ideal MHD approximation, e^i=E^i+( V× B)^i=0 (see Eqs. (<ref>)–(<ref>)). κ is a calibration parameter chosen such that the saturation value of the magnetization parameter σ=b^2/ρ is controlled. They performed two cases with σ≈ 0.01 and ≈ 0.001. The pre-merger magnetic field is 10^15  G. The employed grid resolution is Δ x =250  m. The neutron star is modeled by the DD2 and APR equations of state <cit.>. Neutrino cooling is incorporated via the leakage scheme. Figure <ref> summarizes their results. Due to the α dynamo prescription, a strong magnetic field of ∼ 10^17  G is produced, and the resultant magnetic buoyancy force pushes the fluid elements upward near the surface (left panel). A strongly magnetized loop that sticks out of the stellar surface is formed. It is twisted by the differential rotation of the merger remnant and inflates (center panel). As the twist is increased, the inflated bubble detaches from the merger remnant (right panel). Since this process repeats, a periodicity is imprinted in the Poynting flux, as shown in Fig. <ref>. The luminosity and the periodicity strongly depend on the equation of state and the dynamo calibration parameter κ.
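To illustrate schematically why an α-type closure of this kind, combined with the differential rotation of the merger remnant, can lead to exponential growth of a mean field, consider a standard two-mode mean-field caricature of the αΩ dynamo, dB_p/dt = α B_ϕ/H - B_p/τ and dB_ϕ/dt = qΩ B_p - B_ϕ/τ, which grows once the dynamo number αqΩτ^2/H exceeds unity. The Python sketch below is a textbook toy, not the sub-grid scheme described above, and every parameter value in it is an arbitrary illustrative choice.

import math

# Toy alpha-Omega mean-field dynamo: coupled mean poloidal (Bp) and toroidal
# (Bphi) fields.  All numbers are arbitrary and purely illustrative.
alpha  = 1.0e5    # cm s^-1, alpha-effect amplitude (loosely, what the kappa closure parameterizes)
H      = 1.0e5    # cm, vertical length scale over which the alpha effect acts
qOmega = 2.0e3    # s^-1, shear rate ~ R dOmega/dR
tau    = 0.1      # s, turbulent dissipation time

dynamo_number = alpha * qOmega * tau**2 / H                  # growth requires > 1
growth_rate   = math.sqrt(alpha * qOmega / H) - 1.0 / tau    # s^-1, analytic

Bp, Bphi = 1.0e10, 1.0e10      # G, seed mean fields
dt, nsteps = 1.0e-4, 1000      # forward-Euler integration over 0.1 s
for step in range(nsteps + 1):
    if step % 250 == 0:
        print(f"t = {step * dt * 1e3:5.1f} ms : Bp = {Bp:9.3e} G, Bphi = {Bphi:9.3e} G")
    dBp   = (alpha * Bphi / H - Bp / tau) * dt
    dBphi = (qOmega * Bp - Bphi / tau) * dt
    Bp, Bphi = Bp + dBp, Bphi + dBphi

print(f"dynamo number = {dynamo_number:.1f}, analytic growth rate = {growth_rate:.1f} 1/s")

In this caricature the α term regenerates the poloidal field from the toroidal one while the shear does the reverse, and the toroidal component ends up dominating; the saturation physics that the sub-grid prescriptions discussed here try to encode (through κ or the pre-factors) is absent, so the sketch only illustrates the linear growth phase.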
The caveats of their work are as follows: (i) A more detailed calibration of κ or more detailed modeling of the dynamo prescription is necessary because the result is sensitive to it. (ii) The role of the MRI needs to be clarified further. Combi and Siegel reported new magnetized binary neutron star merger simulations with neutrino physics <cit.>. The pre-merger magnetic field is 3× 10^15  G. The employed grid resolution is Δ x=180  m. The neutron star is modeled with the APR equation of state <cit.>. The neutrino treatment is the same as in their previous work <cit.>. Their findings are summarized as follows: (i) After the Kelvin-Helmholtz phase, the inverse turbulent cascade creates a large-scale magnetic field. (ii) The large-scale toroidal magnetic field is further amplified, and an incipient jet is launched. Also, the post-merger mass ejection due to the MHD effect sets in. The caveats are as follows: (i) Since many previous simulations revealed that an efficient Kelvin-Helmholtz instability and the MRI are hard to resolve with their setup, the mechanism to generate the large-scale magnetic field must be clarified further. (ii) Because of the lack of a resolution study, it is hard to quantify the systematic errors in their findings, such as the electromagnetic luminosity and the post-merger mass ejection, due to the grid resolution. This point is crucial for comparing the simulation results with the observational data. Kiuchi and his collaborators reported new magnetized binary neutron star merger simulations with neutrino physics <cit.>. The pre-merger magnetic field is 3× 10^15  G. The employed grid resolution is Δ x=12.5  m from the inspiral to the first ≈ 30  ms after the merger, subsequently Δ x =50  m up to ≈ 50  ms after the merger, and finally Δ x=100  m until the end of the simulation, 175  ms after the merger (see the Methods section in Ref. <cit.> for details). They also performed the convergence test with Δ x=200  m. The neutron star is modeled with the DD2+Timmes equation of state <cit.>, which results in the formation of a long-lived remnant massive neutron star. The neutrino radiation transport is solved with the M1+GR leakage scheme. Their findings are summarized as follows: (i) As reported previously <cit.>, the low resolution with Δ x=200  m is unable to capture the efficient Kelvin-Helmholtz instability and the MRI (see Extended Data Figs. 1 and 2 in Ref. <cit.>). The high-resolution simulation with Δ x=12.5  m can sustain the MRI-driven turbulence. (ii) The neutrino viscosity and drag are likely to be irrelevant in binary neutron star mergers because of the efficient Kelvin-Helmholtz instability (see Extended Data Fig. 3 in Ref. <cit.>, which solved the dispersion relations (<ref>)–(<ref>) on top of the simulation data). (iii) Because the MRI-driven turbulence is responsible for generating the electromotive force (<ref>) and the period in the butterfly diagram agrees with the prediction of the αΩ dynamo theory, P_ theory=2π|(1/2) α_ϕϕ (dΩ/dln R) k_z|^-1/2, where k_z is the wavenumber corresponding to the pressure scale height (see also Eq. (<ref>) for α_ϕϕ), the αΩ dynamo can be interpreted as a mechanism for the large-scale magnetic field generation, as shown in Figs. <ref>–<ref> [The working hypothesis used to derive Eqs. (<ref>)–(<ref>) is directly verified in Ref. <cit.>.].
(iv) The pre-merger large-scale magnetic field is harmless for the large-scale dynamo because such a relic magnetic field stays buried deep inside the merger remnant core throughout the simulation, and the dynamo wave appears from the surface of the remnant core (see Extended Data Fig. 5 in Ref. <cit.>). (v) A relativistic Poynting-flux-dominated outflow with a luminosity of ∼ 10^51  erg s^-1 is launched by the large-scale magnetic field due to the αΩ dynamo. Also, the Lorentz force due to the large-scale field drives an enormous post-merger mass ejection of ≈ 0.1M_⊙, as shown in Fig. <ref>. It should be noted that the low-resolution simulation with Δ x=200  m underestimates the luminosity of the Poynting flux by two orders of magnitude and the post-merger ejecta mass by one order of magnitude at ≈ 100  ms after the merger, which corresponds to the “longest"-term simulation among the previous simulations conducted to date for the long-lived case. Therefore, the systematic error due to the grid resolution is astonishingly large, which is crucial for the comparison to observational data such as AT 2017gfo. The caveats are as follows: (i) It is necessary to confirm this picture with a simulation starting from a much weaker pre-merger magnetic field since, as claimed in Refs. <cit.>, the MRI may not be effective at least in the early post-merger phase because of the absence of a static, large-scale background field. (ii) Other dynamos, such as the Taylor-Spruit dynamo <cit.>, could be effective in generating the large-scale magnetic field, particularly deep inside the merger remnant core, because such a region is not subject to the MRI due to the positive radial gradient of the angular velocity. (iii) Since the magnetic Prandtl number determined by the numerical viscosity and resistivity is of order unity in their simulation, the large-scale dynamo properties may change in the high magnetic Prandtl number regime. Aguilera-Miret and his collaborators reported new magnetized binary neutron star merger simulations with the gradient sub-grid scale model <cit.>. The pre-merger magnetic field strength is 5× 10^11  G. They also performed a follow-up simulation with the “crustal" configuration proposed in Ref. <cit.>. The employed grid resolution is Δ x=60  m. The neutron star is modeled by the APR equation of state <cit.>. The remnant massive neutron star is long-lived. The pre-factors of the sub-grid model are (C_M,C_N,C_T)=(8,0,0), with a cut-off density of 2× 10^11  g cm^-3 below which the sub-grid terms are turned off. Consistent with their previous works <cit.>, the Kelvin-Helmholtz instability triggers the turbulent magnetic field amplification up to ∼ 10^50  erg until ∼ 5  ms after the merger. The energy spectrum analysis suggests that the isotropic turbulence results in comparable strengths of the poloidal and toroidal magnetic fields. After the Kelvin-Helmholtz instability phase, the turbulent resistivity is enhanced. As a result, the small-scale magnetic field is diffused, and a magnetic field with a coherence length of a few km develops. Because of the resultant coherent poloidal magnetic field, the magnetic winding further amplifies the toroidal field, and the magnetic field energy ends up at ∼ 10^51  erg at ∼ 110  ms after the merger. Although they observed a helicoidal structure of the magnetic field, they did not find jet launching until the end of the simulation, ≈ 110  ms after the merger.
The potentially MRI-unstable region inside the remnant massive neutron star has a highly non-axisymmetric intensity, implying that the prediction of λ_ MRI is a non-trivial task because the classical and widespread way of evaluating λ_ MRI assumes a homogeneous background field. Even starting with the “crustal" configuration, they found a similar saturation energy of ∼ 10^50  erg at the end of the Kelvin-Helmholtz instability phase, which may imply that the result in Ref. <cit.> could merely be caused by an insufficient resolution. The caveats are as follows: (i) There is still room for investigating the MRI because they did not explicitly show that the MRI does not emerge in their simulation, although the MRI diagnostics, such as the MRI quality factor or the Maxwell stress, indicate the emergence of the MRI. (ii) The role of the turbulent resistivity needs to be clarified further because of the inherently large numerical resistivity of their Local-Lax-Friedrich Riemann solver (see Figs. 12 and 13 in Ref. <cit.> for the magnetic reconnection problem with different Riemann solvers). (iii) The αΩ dynamo's role needs to be clarified, particularly the correlation between the electromotive force and the mean field, as demonstrated in Ref. <cit.>. Most proposed a new sub-grid model for the αΩ dynamo in the magnetized binary neutron star merger context <cit.>. By assuming that the dynamo effects grow relative to the resistive timescale in Ohm's law, he arrived at a tensorial relation between the electric field e^μ and the magnetic field b^μ in the fluid co-moving frame: e^μ = κ^μ_ν b^ν, where κ^μ_ν=-ηα^μ_ν with the resistivity η and the dynamo α tensor (see also Eq. (<ref>)). This equation is further simplified by assuming that κ^μ_ν=κ(δ^μ_ν+u^μ u_ν), where u^μ is the fluid four-velocity: e^μ=κ b^μ. The dynamo coefficient κ and the dynamo saturation are inspired by high-resolution magnetized binary neutron star merger simulations <cit.>: κ=κ_ HMNS max(0,Δ_ turb), Δ_ turb = 1 - σ/σ_ turb, σ_ turb = ξ (l^ HMNS_ MRI)^2 (Δ x/12.5  m)^2 σ^μνσ_μν, l^ HMNS_ MRI = max(0, a log_10(ρ/ρ_*) exp[-|b log_10(ρ/ρ_*)|^5/2])  m, where κ_ HMNS≈ 0.025–0.035 is the dynamo parameter inferred from the ultra-high resolution simulation <cit.>, σ is the magnetization parameter, ξ is a parameter, σ_μν is the shear tensor, and l^ HMNS_ MRI is the MRI wavelength inside the remnant massive neutron star. An important assumption here is that the αΩ dynamo will terminate once a fraction ξ of the turbulent kinetic energy is converted into the magnetic field energy. l^ HMNS_ MRI is fitted to the global simulation in Ref. <cit.> with a=22.31984, b=-0.425832, and ρ_*=1.966769× 10^9  g cm^-3 <cit.>. He left ξ as a free parameter and performed simulations with ξ=0.04, 0.4, and 4 [In principle, ξ should be smaller than unity.]. The pre-merger magnetic field is 10^15  G. The employed grid resolution is Δ x=200  m. The result is qualitatively similar to his previous work <cit.> (see Fig. <ref> and its explanation). Figure <ref> shows how the choice of ξ affects the luminosity of the Poynting flux, and it indicates that ξ=4 is closest to the values found in Ref. <cit.> (see Fig. <ref>). The caveat is that a further calibration of the sub-grid model for the αΩ dynamo is necessary because the result is sensitive to the choice of ξ.

§ SUMMARY AND PROSPECT

GRMHD simulations of binary neutron star mergers have rapidly progressed in the last sixteen years.
At the initial phase, it was unclear whether the Kelvin-Helmholtz instability could efficiently amplify the magnetic field at the merger, which was originally reported in the Newtonian Smoothed Particle Hydrodynamics simulation <cit.>. Currently, the numerical relativity community has a consensus on this picture: the Kelvin-Helmholtz instability develops strongly magnetized turbulence in a short timescale <5  ms at the merger. However, the physical saturation of the Kelvin-Helmholtz instability with B≳ 10^15-16  G needs to be investigated further. Also, the way in which the large-scale magnetic field inside the merger remnant builds up needs to be clarified more. A couple of simulations suggest if the large-scale magnetic field is established before the remnant collapses into the black hole, jet launching could be possible. However, the numerical relativity community keeps asking the question: Is it a relic of the large-scale pre-merger magnetic field? Or does a non-trivial physical process work to generate the large-scale magnetic field from the small-scale magnetic field? Ultimately, the community has to answer a question: does a merger simulation starting from a strong and large-scale pre-merger magnetic field with “standard" grid resolution lead to a physically equivalent outcome of a simulation starting from a weak and large-scale pre-merger magnetic field with “high" grid resolution? A recent demonstration of the MRI-driven αΩ dynamo inside the remnant massive neutron star could resolve a piece of the puzzle on this issue. The lesson from it is that we need an in-depth resolution study and a novel analysis to disentangle the large-scale field generated by the non-trivial process from the relic large-scale field. With them, it is possible to quantify how the assumed large-scale pre-merger field affects the simulation outcome. Otherwise, it is impossible to reject the possibility that the outcome is merely a consequence of the unrealistically large-scale pre-merger magnetic field. It should be also emphasized that the resolution study is essential to build a physical theoretical model, which should be compared to the observables such as AT 2017gfo and GRB170817A because a higher resolution simulation makes more than one order of magnitude difference in the quantities relevant to the electromagnetic counterpart modeling as demonstrated in Ref. <cit.>. Also, this work brings up a new question: What is a realistic time scale to build up the large-scale magnetic field via αΩ dynamo? Is there any other possibilities for the large-scale dynamo such as the Taylor-Spruit dynamo? Since Ref. <cit.> pointed out that the growth timescale for the large-scale magnetic field generation approximately agrees with the period of the dynamo cycle, i.e., the butterfly diagram. However, the mechanism to set the large-scale field strength just after the merger is an open question. A conservative estimate based on the merged poloidal magnetic flux of the pre-merger magnetic field with 10^12  G, i.e., highly magnetized end of the binary pulsars, gives the time scale of O(0.1)  s for the large-scale magnetic field to build up <cit.>. This implies that if a remnant massive neutron star survives longer than this time scale, it could be a central engine of short gamma-ray bursts. The other possibility is the large-scale dynamo inside a torus formed after a remnant massive neutron star collapses into a black hole <cit.>. 
Once the large-scale field is established inside the torus, the Blandford-Znajek mechanism could drive a relativistic jet from the black hole <cit.>. However, it is an open question whether or not the large-scale field strong enough to extract the black hole rotational energy efficiently is possible via the disk dynamo. The numerical relativity community will continue exploring these possibilities by more sophisticated simulations. The numerical relativity community does not have a consensus on the MRI inside the merger remnant massive neutron star. The gradient sub-grid scale model simulations starting from a realistic pre-merger magnetic field strength suggest that the remnant massive neutron star has a high non-axisymmetric intensity, implying the conventional way to estimate λ_ MRI may not be valid <cit.>, which many simulations starting from the large-scale pre-merger magnetic field rely on to quantify to what extent the simulations resolve the MRI (see Ref. <cit.> for example). However, it does not necessarily mean whether or not the MRI sets in inside the merger remnant, but it means we need to seek a robust way to quantify the MRI's emergence or non-emergence, particularly the MRI's non-linear phase. One potential proposal is to measure the Shakura-Sunyaev parameter or the Maxwell stress. Furthermore, the diagnostics proposed in Ref. <cit.> could reasonably estimate how the MRI-driven turbulence is sustained. It should be noted that in the non-linear phase of the MRI, the magnetic field is strongly turbulent. It must have a high non-axisymmetric intensity and no static, large-scale background. Also, evaluating the mean poloidal field in the MRI active and inactive region could be another way to quantify the emergence of the MRI as demonstrated in Ref. <cit.>. The numerical relativity community has a consensus that grid resolution is essential for GRMHD simulations of binary neutron star mergers. However, the effective resolution could be determined by combining the quality of the Riemann solver and cell-reconstruction scheme. Most of the existing NR codes employ the HLL(E) or its variant LLF Riemann solver with a higher-order cell reconstruction scheme, such as MP5, except for Ref. <cit.> which employs the HLLD Riemann solver. Quantifying the effective grid resolution with the different Riemann solvers and cell-reconstruction schemes is a task that needs to be pursued in the future. In summary, numerical modeling of binary neutron star mergers based on GRMHD simulations will continue playing a pivotal role in the multimessenger era. The numerical relativity community will keep making an effort to develop physical modeling for binary neutron star mergers, whose quality is quantitatively good enough to compare to observational data, not only the gravitational waves but also the electromagnetic signals. K.K. thanks to the Computational Relativistic Astrophysics members in AEI for a stimulating discussion. K.K. also thanks to Luciano Rezzolla, Bruno Giacomazzo, Carlos Palenzuela, Riccardo Ciolfi, Milton Ruiz, Elias Most, Ricard Aguilera-Miret, and Luciano Combi for their feedback to the draft. This work is in part supported by the Grant-in-Aid for Scientific Research (grant Nos. 23H01172) of Japan MEXT/JSPS, and by the HPCI System Research Project (Project ID: hp220174, hp220392, hp230204, hp230084, hp240039). 99 Aguilera-Miret:2020dhz R. Aguilera-Miret, D. Viganò, F. Carrasco, B. Miñano and C. Palenzuela, Phys. Rev. 
D 102 (2020) no.10, 103006 doi:10.1103/PhysRevD.102.103006 [arXiv:2009.06669 [gr-qc]]. Aguilera-Miret:2021fre R. Aguilera-Miret, D. Viganò and C. Palenzuela, Astrophys. J. Lett. 926 (2022) no.2, L31 doi:10.3847/2041-8213/ac50a7 [arXiv:2112.08406 [gr-qc]]. Aguilera-Miret:2023qih R. Aguilera-Miret, C. Palenzuela, F. Carrasco and D. Viganò, Phys. Rev. D 108 (2023) no.10, 103001 doi:10.1103/PhysRevD.108.103001 [arXiv:2307.04837 [astro-ph.HE]]. Akmal:1998cf A. Akmal, V. R. Pandharipande and D. G. Ravenhall, Phys. Rev. C 58 (1998), 1804-1828 doi:10.1103/PhysRevC.58.1804 [arXiv:nucl-th/9804027 [nucl-th]]. Amiram:2006zjz H. Amiram, L. Peter D. and L. Bram van, SIAM Rev. 25 (1983) no.1, 35-61 doi:10.1137/1025002 Anderson:2008zp M. Anderson, E. W. Hirschmann, L. Lehner, S. L. Liebling, P. M. Motl, D. Neilsen, C. Palenzuela and J. E. Tohline, Phys. Rev. Lett. 100 (2008), 191101 doi:10.1103/PhysRevLett.100.191101 [arXiv:0801.4387 [gr-qc]]. Balbus:1991ay S. A. Balbus and J. F. Hawley, Astrophys. J. 376 (1991), 214-233 doi:10.1086/170270 Balbus:1998ja S. A. Balbus and J. F. Hawley, Rev. Mod. Phys. 70 (1998), 1-53 doi:10.1103/RevModPhys.70.1 Bauswein:2013jpa A. Bauswein, T. W. Baumgarte and H. T. Janka, Phys. Rev. Lett. 111 (2013) no.13, 131101 doi:10.1103/PhysRevLett.111.131101 [arXiv:1307.5191 [astro-ph.SR]]. Bauswein:2017vtn A. Bauswein, O. Just, H. T. Janka and N. Stergioulas, Astrophys. J. Lett. 850 (2017) no.2, L34 doi:10.3847/2041-8213/aa9994 [arXiv:1710.06843 [astro-ph.HE]]. Blandford:1977ds R. D. Blandford and R. L. Znajek, Mon. Not. Roy. Astron. Soc. 179 (1977), 433-456 doi:10.1093/mnras/179.3.433 Brandenburg:2004jv A. Brandenburg and K. Subramanian, Phys. Rept. 417 (2005), 1-209 doi:10.1016/j.physrep.2005.06.005 [arXiv:astro-ph/0405052 [astro-ph]]. Breu:2016ufb C. Breu and L. Rezzolla, Mon. Not. Roy. Astron. Soc. 459 (2016) no.1, 646-656 doi:10.1093/mnras/stw575 [arXiv:1601.06083 [gr-qc]]. Carrasco:2019uzl F. Carrasco, D. Viganò and C. Palenzuela, Phys. Rev. D 101 (2020) no.6, 063003 doi:10.1103/PhysRevD.101.063003 [arXiv:1908.01419 [astro-ph.HE]]. Chabanov:2022twz M. Chabanov, S. D. Tootle, E. R. Most and L. Rezzolla, Astrophys. J. Lett. 945 (2023) no.1, L14 doi:10.3847/2041-8213/acbbc5 [arXiv:2211.13661 [astro-ph.HE]]. Christie:2019lim I. M. Christie, A. Lalakos, A. Tchekhovskoy, R. Fernández, F. Foucart, E. Quataert and D. Kasen, Mon. Not. Roy. Astron. Soc. 490 (2019) no.4, 4811-4825 doi:10.1093/mnras/stz2552 [arXiv:1907.02079 [astro-ph.HE]]. Ciolfi:2017uak R. Ciolfi, W. Kastaun, B. Giacomazzo, A. Endrizzi, D. M. Siegel and R. Perna, Phys. Rev. D 95 (2017) no.6, 063016 doi:10.1103/PhysRevD.95.063016 [arXiv:1701.08738 [astro-ph.HE]]. Ciolfi:2019fie R. Ciolfi, W. Kastaun, J. V. Kalinani and B. Giacomazzo, Phys. Rev. D 100 (2019) no.2, 023005 doi:10.1103/PhysRevD.100.023005 [arXiv:1904.10222 [astro-ph.HE]]. Ciolfi:2020hgg R. Ciolfi, Mon. Not. Roy. Astron. Soc. 495 (2020) no.1, L66-L70 doi:10.1093/mnrasl/slaa062 [arXiv:2001.10241 [astro-ph.HE]]. Ciolfi:2020cpf R. Ciolfi, Gen. Rel. Grav. 52 (2020) no.6, 59 doi:10.1007/s10714-020-02714-x [arXiv:2003.07572 [astro-ph.HE]]. Cipolletta:2019geh F. Cipolletta, J. V. Kalinani, B. Giacomazzo and R. Ciolfi, Class. Quant. Grav. 37 (2020) no.13, 135010 doi:10.1088/1361-6382/ab8be8 [arXiv:1912.04794 [astro-ph.HE]]. Combi:2022nhg L. Combi and D. M. Siegel, Astrophys. J. 944 (2023) no.1, 28 doi:10.3847/1538-4357/acac29 [arXiv:2206.03618 [astro-ph.HE]]. Combi:2023yav L. Combi and D. M. Siegel, Phys. Rev. Lett. 
131 (2023) no.23, 231402 doi:10.1103/PhysRevLett.131.231402 [arXiv:2303.12284 [astro-ph.HE]]. Colella:1984 P. Colella and P. R. Woodward, J. Comput. Phys. 54 (1984), 174 Cook:1993qr G. B. Cook, S. L. Shapiro and S. A. Teukolsky, Astrophys. J. 424 (1994), 823 doi:10.1086/173934 Dionysopoulou:2012zv K. Dionysopoulou, D. Alic, C. Palenzuela, L. Rezzolla and B. Giacomazzo, Phys. Rev. D 88 (2013), 044020 doi:10.1103/PhysRevD.88.044020 [arXiv:1208.3487 [gr-qc]]. Dionysopoulou:2015tda K. Dionysopoulou, D. Alic and L. Rezzolla, Phys. Rev. D 92 (2015) no.8, 084064 doi:10.1103/PhysRevD.92.084064 [arXiv:1502.02021 [gr-qc]]. Douchin:2001sv F. Douchin and P. Haensel, Astron. Astrophys. 380 (2001), 151 doi:10.1051/0004-6361:20011402 [arXiv:astro-ph/0111092 [astro-ph]]. Duez:2005sf M. D. Duez, Y. T. Liu, S. L. Shapiro and B. C. Stephens, Phys. Rev. D 72 (2005), 024028 doi:10.1103/PhysRevD.72.024028 [arXiv:astro-ph/0503420 [astro-ph]]. Duez:2005cj M. D. Duez, Y. T. Liu, S. L. Shapiro, M. Shibata and B. C. Stephens, Phys. Rev. Lett. 96 (2006), 031101 doi:10.1103/PhysRevLett.96.031101 [arXiv:astro-ph/0510653 [astro-ph]]. Duez:2006qe M. D. Duez, Y. T. Liu, S. L. Shapiro and M. Shibata, Phys. Rev. D 73 (2006), 104015 doi:10.1103/PhysRevD.73.104015 [arXiv:astro-ph/0605331 [astro-ph]]. Endrizzi:2016kkf A. Endrizzi, R. Ciolfi, B. Giacomazzo, W. Kastaun and T. Kawamura, Class. Quant. Grav. 33 (2016) no.16, 164001 doi:10.1088/0264-9381/33/16/164001 [arXiv:1604.03445 [astro-ph.HE]]. Etienne:2010ui Z. B. Etienne, Y. T. Liu and S. L. Shapiro, Phys. Rev. D 82 (2010), 084031 doi:10.1103/PhysRevD.82.084031 [arXiv:1007.2848 [astro-ph.HE]]. Etienne:2015cea Z. B. Etienne, V. Paschalidis, R. Haas, P. Mösta and S. L. Shapiro, Class. Quant. Grav. 32 (2015), 175009 doi:10.1088/0264-9381/32/17/175009 [arXiv:1501.07276 [astro-ph.HE]]. Fernandez:2018kax R. Fernández, A. Tchekhovskoy, E. Quataert, F. Foucart and D. Kasen, Mon. Not. Roy. Astron. Soc. 482 (2019) no.3, 3373-3393 doi:10.1093/mnras/sty2932 [arXiv:1808.00461 [astro-ph.HE]]. Foucart:2022bth F. Foucart, Liv. Rev. Comput. Astrophys. 9 (2023) no.1, 1 doi:10.1007/s41115-023-00016-y [arXiv:2209.02538 [astro-ph.HE]]. Giacomazzo:2007ti B. Giacomazzo and L. Rezzolla, Class. Quant. Grav. 24 (2007), S235-S258 doi:10.1088/0264-9381/24/12/S16 [arXiv:gr-qc/0701109 [gr-qc]]. Giacomazzo:2009mp B. Giacomazzo, L. Rezzolla and L. Baiotti, Mon. Not. Roy. Astron. Soc. 399 (2009), L164-L168 doi:10.1111/j.1745-3933.2009.00745.x [arXiv:0901.2722 [gr-qc]]. Giacomazzo:2010bx B. Giacomazzo, L. Rezzolla and L. Baiotti, Phys. Rev. D 83 (2011), 044014 doi:10.1103/PhysRevD.83.044014 [arXiv:1009.2468 [gr-qc]]. Giacomazzo:2013uua B. Giacomazzo and R. Perna, Astrophys. J. Lett. 771 (2013), L26 doi:10.1088/2041-8205/771/2/L26 [arXiv:1306.1608 [astro-ph.HE]]. Giacomazzo:2014qba B. Giacomazzo, J. Zrake, P. Duffell, A. I. MacFadyen and R. Perna, Astrophys. J. 809 (2015) no.1, 39 doi:10.1088/0004-637X/809/1/39 [arXiv:1410.0013 [astro-ph.HE]]. Glendenning:1991es N. K. Glendenning and S. A. Moszkowski, Phys. Rev. Lett. 67 (1991), 2414-2417 doi:10.1103/PhysRevLett.67.2414 Goldstein:2017mmi A. Goldstein, P. Veres, E. Burns, M. S. Briggs, R. Hamburg, D. Kocevski, C. A. Wilson-Hodge, R. D. Preece, S. Poolakkil and O. J. Roberts, et al. Astrophys. J. Lett. 848 (2017) no.2, L14 doi:10.3847/2041-8213/aa8f41 [arXiv:1710.05446 [astro-ph.HE]]. Gottlieb:2023sja O. Gottlieb, B. D. Metzger, E. Quataert, D. Issa, T. Martineau, F. Foucart, M. D. Duez, L. E. Kidder, H. P. Pfeiffer and M. A. Scheel, Astrophys. J. Lett. 
958 (2023) no.2, L33 doi:10.3847/2041-8213/ad096e [arXiv:2309.00038 [astro-ph.HE]]. Guilet:2016sqd J. Guilet, A. Bauswein, O. Just and H. T. Janka, Mon. Not. Roy. Astron. Soc. 471 (2017) no.2, 1879-1887 doi:10.1093/mnras/stx1739 [arXiv:1610.08532 [astro-ph.HE]]. deHaas:2022ytm S. de Haas, P. Bosch, P. Mösta, S. Curtis and N. Schut, Mon. Not. Roy. Astron. Soc. 527 (2023) no.2, 2240-2250 doi:10.1093/mnras/stad2931 [arXiv:2208.05330 [astro-ph.HE]]. Hawley:2011tq J. F. Hawley, X. Guan and J. H. Krolik, Astrophys. J. 738 (2011), 84 doi:10.1088/0004-637X/738/1/84 [arXiv:1103.5987 [astro-ph.HE]]. Hempel:2009mc M. Hempel and J. Schaffner-Bielich, Nucl. Phys. A 837 (2010), 210-254 doi:10.1016/j.nuclphysa.2010.02.010 [arXiv:0911.4073 [nucl-th]]. Hotokezaka:2011dh K. Hotokezaka, K. Kyutoku, H. Okawa, M. Shibata and K. Kiuchi, Phys. Rev. D 83 (2011), 124008 doi:10.1103/PhysRevD.83.124008 [arXiv:1105.4370 [astro-ph.HE]]. Hotokezaka:2012ze K. Hotokezaka, K. Kiuchi, K. Kyutoku, H. Okawa, Y. i. Sekiguchi, M. Shibata and K. Taniguchi, Phys. Rev. D 87 (2013), 024001 doi:10.1103/PhysRevD.87.024001 [arXiv:1212.0905 [astro-ph.HE]]. Kashyap:2021wzs R. Kashyap, A. Das, D. Radice, S. Padamata, A. Prakash, D. Logoteta, A. Perego, D. A. Godzieba, S. Bernuzzi and I. Bombaci, et al. Phys. Rev. D 105 (2022) no.10, 103022 doi:10.1103/PhysRevD.105.103022 [arXiv:2111.05183 [astro-ph.HE]]. Kawamura:2016nmk T. Kawamura, B. Giacomazzo, W. Kastaun, R. Ciolfi, A. Endrizzi, L. Baiotti and R. Perna, Phys. Rev. D 94 (2016) no.6, 064012 doi:10.1103/PhysRevD.94.064012 [arXiv:1607.01791 [astro-ph.HE]]. Kiuchi:2012qv K. Kiuchi, K. Kyutoku and M. Shibata, Phys. Rev. D 86 (2012), 064008 doi:10.1103/PhysRevD.86.064008 [arXiv:1207.6444 [astro-ph.HE]]. Kiuchi:2014hja K. Kiuchi, K. Kyutoku, Y. Sekiguchi, M. Shibata and T. Wada, Phys. Rev. D 90 (2014), 041502 doi:10.1103/PhysRevD.90.041502 [arXiv:1407.2660 [astro-ph.HE]]. Kiuchi:2015sga K. Kiuchi, P. Cerdá-Durán, K. Kyutoku, Y. Sekiguchi and M. Shibata, Phys. Rev. D 92 (2015) no.12, 124034 doi:10.1103/PhysRevD.92.124034 [arXiv:1509.09205 [astro-ph.HE]]. Kiuchi:2017zzg K. Kiuchi, K. Kyutoku, Y. Sekiguchi and M. Shibata, Phys. Rev. D 97 (2018) no.12, 124039 doi:10.1103/PhysRevD.97.124039 [arXiv:1710.01311 [astro-ph.HE]]. Kiuchi:2022ubj K. Kiuchi, L. E. Held, Y. Sekiguchi and M. Shibata, Phys. Rev. D 106 (2022) no.12, 124041 doi:10.1103/PhysRevD.106.124041 [arXiv:2205.04487 [astro-ph.HE]]. Kiuchi:2022nin K. Kiuchi, S. Fujibayashi, K. Hayashi, K. Kyutoku, Y. Sekiguchi and M. Shibata, Phys. Rev. Lett. 131 (2023) no.1, 011401 doi:10.1103/PhysRevLett.131.011401 [arXiv:2211.07637 [astro-ph.HE]]. Kiuchi:2023obe K. Kiuchi, A. Reboul-Salze, M. Shibata and Y. Sekiguchi, Nature Astron. 8 (2024) no.3, 298-307 doi:10.1038/s41550-024-02194-y [arXiv:2306.15721 [astro-ph.HE]]. Kolsch:2021lub M. Kölsch, T. Dietrich, M. Ujevic and B. Bruegmann, Phys. Rev. D 106 (2022) no.4, 044026 doi:10.1103/PhysRevD.106.044026 [arXiv:2112.11851 [gr-qc]]. Koppel:2019pys S. Köppel, L. Bovard and L. Rezzolla, Astrophys. J. Lett. 872 (2019) no.1, L16 doi:10.3847/2041-8213/ab0210 [arXiv:1901.09977 [gr-qc]]. Kurganov:2000ovy A. Kurganov and E. Tadmor, J. Comput. Phys. 160 (2000), 241-282 doi:10.1006/jcph.2000.6459 Lasota:1995eu J. P. Lasota, P. Haensel and M. A. Abramowicz, Astrophys. J. 456 (1996), 300 doi:10.1086/176650 Lattimer:1991nc J. M. Lattimer and F. D. Swesty, Nucl. Phys. A 535 (1991), 331-376 doi:10.1016/0375-9474(91)90452-C Lehner:2011aa L. Lehner, C. Palenzuela, S. L. Liebling, C. Thompson and C. Hanna, Phys. Rev. 
D 86 (2012), 104035 doi:10.1103/PhysRevD.86.104035 [arXiv:1112.2622 [astro-ph.HE]]. LIGOScientific:2017ync B. P. Abbott et al. [LIGO Scientific, Virgo, Fermi GBM, INTEGRAL, IceCube, AstroSat Cadmium Zinc Telluride Imager Team, IPN, Insight-Hxmt, ANTARES, Swift, AGILE Team, 1M2H Team, Dark Energy Camera GW-EM, DES, DLT40, GRAWITA, Fermi-LAT, ATCA, ASKAP, Las Cumbres Observatory Group, OzGrav, DWF (Deeper Wider Faster Program), AST3, CAASTRO, VINROUGE, MASTER, J-GEM, GROWTH, JAGWAR, CaltechNRAO, TTU-NRAO, NuSTAR, Pan-STARRS, MAXI Team, TZAC Consortium, KU, Nordic Optical Telescope, ePESSTO, GROND, Texas Tech University, SALT Group, TOROS, BOOTES, MWA, CALET, IKI-GW Follow-up, H.E.S.S., LOFAR, LWA, HAWC, Pierre Auger, ALMA, Euro VLBI Team, Pi of Sky, Chandra Team at McGill University, DFN, ATLAS Telescopes, High Time Resolution Universe Survey, RIMAS, RATIR and SKA South Africa/MeerKAT], Astrophys. J. Lett. 848 (2017) no.2, L12 doi:10.3847/2041-8213/aa91c9 [arXiv:1710.05833 [astro-ph.HE]]. LIGOScientific:2017vwq B. P. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. Lett. 119 (2017) no.16, 161101 doi:10.1103/PhysRevLett.119.161101 [arXiv:1710.05832 [gr-qc]]. Liu:2008xy Y. T. Liu, S. L. Shapiro, Z. B. Etienne and K. Taniguchi, Phys. Rev. D 78 (2008), 024012 doi:10.1103/PhysRevD.78.024012 [arXiv:0803.4193 [astro-ph]]. Lorimer:2008se D. R. Lorimer, Living Rev. Rel. 11 (2008), 8 doi:10.12942/lrr-2008-8 [arXiv:0811.0762 [astro-ph]]. Mignone:2008ii A. Mignone, M. Ugliano and G. Bodo, Mon. Not. Roy. Astron. Soc. 393 (2009), 1141 doi:10.1111/j.1365-2966.2008.14221.x [arXiv:0811.1483 [astro-ph]]. Mignone:2020qec A. Mignone and L. Del Zanna, J. Comput. Phys. 424 (2021), 109748 doi:10.1016/j.jcp.2020.109748 [arXiv:2004.10542 [physics.comp-ph]]. Moffatt:1978 H. K. Moffatt , "Magnetic field generation in electrically conducting fluids", Cambridge Monographs on Mechanics and Applied Mathematics, Cambridge: University Press, 1978 Mooley:2018qfh K. P. Mooley, A. T. Deller, O. Gottlieb, E. Nakar, G. Hallinan, S. Bourke, D. A. Frail, A. Horesh, A. Corsi and K. Hotokezaka, Nature 561 (2018) no.7723, 355-359 doi:10.1038/s41586-018-0486-3 [arXiv:1806.09693 [astro-ph.HE]]. Musolino:2023edi C. Musolino, C. Ecker and L. Rezzolla, Astrophys. J. 962 (2024) no.1, 61 doi:10.3847/1538-4357/ad1758 [arXiv:2307.03225 [gr-qc]]. Most:2019kfe E. R. Most, L. J. Papenfort and L. Rezzolla, Mon. Not. Roy. Astron. Soc. 490 (2019) no.3, 3588-3600 doi:10.1093/mnras/stz2809 [arXiv:1907.10328 [astro-ph.HE]]. Most:2020ami E. R. Most and A. A. Philippov, Astrophys. J. Lett. 893 (2020) no.1, L6 doi:10.3847/2041-8213/ab8196 [arXiv:2001.06037 [astro-ph.HE]]. Most:2022ojl E. R. Most and A. A. Philippov, Mon. Not. Roy. Astron. Soc. 515 (2022) no.2, 2710-2724 doi:10.1093/mnras/stac1909 [arXiv:2205.09643 [astro-ph.HE]]. Most:2023sft E. R. Most and E. Quataert, Astrophys. J. Lett. 947 (2023) no.1, L15 doi:10.3847/2041-8213/acca84 [arXiv:2303.08062 [astro-ph.HE]]. Most:2023sme E. R. Most, Phys. Rev. D 108 (2023) no.12, 12 doi:10.1103/PhysRevD.108.123012 [arXiv:2311.03333 [astro-ph.HE]]. Mosta:2013gwu P. Mösta, B. C. Mundim, J. A. Faber, R. Haas, S. C. Noble, T. Bode, F. Löffler, C. D. Ott, C. Reisswig and E. Schnetter, Class. Quant. Grav. 31 (2014), 015005 doi:10.1088/0264-9381/31/1/015005 [arXiv:1304.5544 [gr-qc]]. Mosta:2020hlh P. Mösta, D. Radice, R. Haas, E. Schnetter and S. Bernuzzi, Astrophys. J. Lett. 901 (2020), L37 doi:10.3847/2041-8213/abb6ef [arXiv:2003.06043 [astro-ph.HE]]. Mueller:1996pm H. Mueller and B. D. Serot, Nucl. Phys. 
A 606 (1996), 508-537 doi:10.1016/0375-9474(96)00187-X [arXiv:nucl-th/9603037 [nucl-th]]. Neilsen:2014hha D. Neilsen, S. L. Liebling, M. Anderson, L. Lehner, E. O'Connor and C. Palenzuela, Phys. Rev. D 89 (2014) no.10, 104029 doi:10.1103/PhysRevD.89.104029 [arXiv:1403.3680 [gr-qc]]. OConnor:2009iuz E. O'Connor and C. D. Ott, Class. Quant. Grav. 27 (2010), 114103 doi:10.1088/0264-9381/27/11/114103 [arXiv:0912.2393 [astro-ph.HE]]. Palenzuela:2012my C. Palenzuela, Mon. Not. Roy. Astron. Soc. 431 (2013), 1853-1865 doi:10.1093/mnras/stt311 [arXiv:1212.0130 [astro-ph.HE]]. Palenzuela:2013hu C. Palenzuela, L. Lehner, M. Ponce, S. L. Liebling, M. Anderson, D. Neilsen and P. Motl, Phys. Rev. Lett. 111 (2013) no.6, 061105 doi:10.1103/PhysRevLett.111.061105 [arXiv:1301.7074 [gr-qc]]. Palenzuela:2013kra C. Palenzuela, L. Lehner, S. L. Liebling, M. Ponce, M. Anderson, D. Neilsen and P. Motl, Phys. Rev. D 88 (2013) no.4, 043011 doi:10.1103/PhysRevD.88.043011 [arXiv:1307.7372 [gr-qc]]. Palenzuela:2015dqa C. Palenzuela, S. L. Liebling, D. Neilsen, L. Lehner, O. L. Caballero, E. O'Connor and M. Anderson, Phys. Rev. D 92 (2015) no.4, 044045 doi:10.1103/PhysRevD.92.044045 [arXiv:1505.01607 [gr-qc]]. Palenzuela:2018sly C. Palenzuela, B. Miñano, D. Viganò, A. Arbona, C. Bona-Casas, A. Rigo, M. Bezares, C. Bona and J. Massó, Class. Quant. Grav. 35 (2018) no.18, 185007 doi:10.1088/1361-6382/aad7f6 [arXiv:1806.04182 [physics.comp-ph]]. Palenzuela:2021gdo C. Palenzuela, R. Aguilera-Miret, F. Carrasco, R. Ciolfi, J. V. Kalinani, W. Kastaun, B. Miñano and D. Viganò, Phys. Rev. D 106 (2022) no.2, 023013 doi:10.1103/PhysRevD.106.023013 [arXiv:2112.08413 [gr-qc]]. Palenzuela:2022kqk C. Palenzuela, S. Liebling and B. Miñano, Phys. Rev. D 105 (2022) no.10, 103020 doi:10.1103/PhysRevD.105.103020 [arXiv:2204.02721 [gr-qc]]. Price:2006fi D. Price and S. Rosswog, Science 312 (2006), 719 doi:10.1126/science.1125201 [arXiv:astro-ph/0603845 [astro-ph]]. Radice:2016dwd D. Radice, F. Galeazzi, J. Lippuner, L. F. Roberts, C. D. Ott and L. Rezzolla, Mon. Not. Roy. Astron. Soc. 460 (2016) no.3, 3255-3271 doi:10.1093/mnras/stw1227 [arXiv:1601.02426 [astro-ph.HE]]. Radice:2020ids D. Radice, Symmetry 12 (2020) no.8, 1249 doi:10.3390/sym12081249 [arXiv:2005.09002 [astro-ph.HE]]. Rasio:1999ku F. A. Rasio and S. L. Shapiro, Class. Quant. Grav. 16 (1999), R1-R29 doi:10.1088/0264-9381/16/6/201 [arXiv:gr-qc/9902019 [gr-qc]]. Read:2008iy J. S. Read, B. D. Lackey, B. J. Owen and J. L. Friedman, Phys. Rev. D 79 (2009), 124032 doi:10.1103/PhysRevD.79.124032 [arXiv:0812.2163 [astro-ph]]. Rezzolla:2011da L. Rezzolla, B. Giacomazzo, L. Baiotti, J. Granot, C. Kouveliotou and M. A. Aloy, Astrophys. J. Lett. 732 (2011), L6 doi:10.1088/2041-8205/732/1/L6 [arXiv:1101.4298 [astro-ph.HE]]. Rhoades:1974 C. E. Rhoades and R. Ruffini, Phys. Rev. Lett., 32 (1974) 324 Ruiz:2016rai M. Ruiz, R. N. Lang, V. Paschalidis and S. L. Shapiro, Astrophys. J. Lett. 824 (2016) no.1, L6 doi:10.3847/2041-8205/824/1/L6 [arXiv:1604.02455 [astro-ph.HE]]. Ruiz:2017inq M. Ruiz and S. L. Shapiro, Phys. Rev. D 96 (2017) no.8, 084063 doi:10.1103/PhysRevD.96.084063 [arXiv:1709.00414 [astro-ph.HE]]. Ruiz:2017due M. Ruiz, S. L. Shapiro and A. Tsokaros, Phys. Rev. D 97 (2018) no.2, 021501 doi:10.1103/PhysRevD.97.021501 [arXiv:1711.00473 [astro-ph.HE]]. Ruiz:2019ezy M. Ruiz, A. Tsokaros, V. Paschalidis and S. L. Shapiro, Phys. Rev. D 99 (2019) no.8, 084032 doi:10.1103/PhysRevD.99.084032 [arXiv:1902.08636 [astro-ph.HE]]. Ruiz:2020via M. Ruiz, A. Tsokaros and S. L. Shapiro, Phys. Rev. 
D 101 (2020) no.6, 064042 doi:10.1103/PhysRevD.101.064042 [arXiv:2001.09153 [astro-ph.HE]]. Ruiz:2021qmm M. Ruiz, A. Tsokaros and S. L. Shapiro, Phys. Rev. D 104 (2021) no.12, 124049 doi:10.1103/PhysRevD.104.124049 [arXiv:2110.11968 [astro-ph.HE]]. Savchenko:2017ffs V. Savchenko, C. Ferrigno, E. Kuulkers, A. Bazzano, E. Bozzo, S. Brandt, J. Chenevez, T. J. L. Courvoisier, R. Diehl and A. Domingo, et al. Astrophys. J. Lett. 848 (2017) no.2, L15 doi:10.3847/2041-8213/aa8f94 [arXiv:1710.05449 [astro-ph.HE]]. Sekiguchi:2015dma Y. Sekiguchi, K. Kiuchi, K. Kyutoku and M. Shibata, Phys. Rev. D 91 (2015) no.6, 064059 doi:10.1103/PhysRevD.91.064059 [arXiv:1502.06660 [astro-ph.HE]]. Sekiguchi:2016bjd Y. Sekiguchi, K. Kiuchi, K. Kyutoku, M. Shibata and K. Taniguchi, Phys. Rev. D 93 (2016) no.12, 124046 doi:10.1103/PhysRevD.93.124046 [arXiv:1603.01918 [astro-ph.HE]]. Shakura:1973 N. I. Shakura and R. A. Sunyaev, Astron. Astrophys. 24 (1973), 337 Shen:1998gq H. Shen, H. Toki, K. Oyamatsu and K. Sumiyoshi, Nucl. Phys. A 637 (1998), 435-450 doi:10.1016/S0375-9474(98)00236-X [arXiv:nucl-th/9805035 [nucl-th]]. Siegel:2013nrw D. M. Siegel, R. Ciolfi, A. I. Harte and L. Rezzolla, Phys. Rev. D 87 (2013) no.12, 121302 doi:10.1103/PhysRevD.87.121302 [arXiv:1302.4368 [gr-qc]]. Siegel:2017jug D. M. Siegel and B. D. Metzger, Astrophys. J. 858 (2018) no.1, 52 doi:10.3847/1538-4357/aabaec [arXiv:1711.00868 [astro-ph.HE]]. Siegel:2017nub D. M. Siegel and B. D. Metzger, Phys. Rev. Lett. 119 (2017) no.23, 231102 doi:10.1103/PhysRevLett.119.231102 [arXiv:1705.05473 [astro-ph.HE]]. Shibata:2005gp M. Shibata and Y. i. Sekiguchi, Phys. Rev. D 72 (2005), 044014 doi:10.1103/PhysRevD.72.044014 [arXiv:astro-ph/0507383 [astro-ph]]. Shibata:2005mz M. Shibata, M. D. Duez, Y. T. Liu, S. L. Shapiro and B. C. Stephens, Phys. Rev. Lett. 96 (2006), 031102 doi:10.1103/PhysRevLett.96.031102 [arXiv:astro-ph/0511142 [astro-ph]]. Shibata:2006nm M. Shibata and K. Taniguchi, Phys. Rev. D 73 (2006), 064027 doi:10.1103/PhysRevD.73.064027 [arXiv:astro-ph/0603145 [astro-ph]]. Shibata:2011fj M. Shibata, Y. Suwa, K. Kiuchi and K. Ioka, Astrophys. J. Lett. 734 (2011), L36 doi:10.1088/2041-8205/734/2/L36 [arXiv:1105.3302 [astro-ph.HE]]. Shibata:2019wef M. Shibata and K. Hotokezaka, Ann. Rev. Nucl. Part. Sci. 69 (2019), 41-64 doi:10.1146/annurev-nucl-101918-023625 [arXiv:1908.02350 [astro-ph.HE]]. Shibata:2021bbj M. Shibata, S. Fujibayashi and Y. Sekiguchi, Phys. Rev. D 103 (2021) no.4, 043022 doi:10.1103/PhysRevD.103.043022 [arXiv:2102.01346 [astro-ph.HE]]. Shibata:2021xmo M. Shibata, S. Fujibayashi and Y. Sekiguchi, Phys. Rev. D 104 (2021) no.6, 063026 doi:10.1103/PhysRevD.104.063026 [arXiv:2109.08732 [astro-ph.HE]]. Spruit:2001tz H. C. Spruit, Astron. Astrophys. 381 (2002), 923 doi:10.1051/0004-6361:20011465 [arXiv:astro-ph/0108207 [astro-ph]]. Steiner:2012rk A. W. Steiner, M. Hempel and T. Fischer, Astrophys. J. 774 (2013), 17 doi:10.1088/0004-637X/774/1/17 [arXiv:1207.2184 [astro-ph.SR]]. Stephens:2006cn B. C. Stephens, M. D. Duez, Y. T. Liu, S. L. Shapiro and M. Shibata, Class. Quant. Grav. 24 (2007), S207-S220 doi:10.1088/0264-9381/24/12/S14 [arXiv:gr-qc/0610103 [gr-qc]]. Sun:2022vri L. Sun, M. Ruiz, S. L. Shapiro and A. Tsokaros, Phys. Rev. D 105 (2022) no.10, 104028 doi:10.1103/PhysRevD.105.104028 [arXiv:2202.12901 [astro-ph.HE]]. Suresh:1997 A. Suresh and H. T. Huynh, J. Comp. Phys. 136 (1997), 83 Timmes:2000 F. X. Timmes, and F. D. Swesty, Astrophys. J. Suppl. 126 (2000), Issue 2, 501-516 Togashi:2017mjp H. Togashi, K. Nakazato, Y. 
Takehara, S. Yamamuro, H. Suzuki and M. Takano, Nucl. Phys. A 961 (2017), 78-105 doi:10.1016/j.nuclphysa.2017.02.010 [arXiv:1702.05324 [nucl-th]]. Tootle:2021umi S. D. Tootle, L. J. Papenfort, E. R. Most and L. Rezzolla, Astrophys. J. Lett. 922 (2021) no.1, L19 doi:10.3847/2041-8213/ac350d [arXiv:2109.00940 [gr-qc]]. White:2015omx C. J. White, J. M. Stone and C. F. Gammie, Astrophys. J. Suppl. 225 (2016) no.2, 22 doi:10.3847/0067-0049/225/2/22 [arXiv:1511.00943 [astro-ph.HE]]. Zrake:2013mra J. Zrake and A. I. MacFadyen, Astrophys. J. Lett. 769 (2013), L29 doi:10.1088/2041-8205/769/2/L29 [arXiv:1303.1450 [astro-ph.HE]].
http://arxiv.org/abs/2405.09705v1
20240515210623
The Realization of a Gas Puff Imaging System on the Wendelstein 7-X Stellarator
[ "J. L. Terry", "A. von Stechow", "S. G. Baek", "S. B. Ballinger", "O. Grulke", "C. von Sehren", "R. Laube", "C. Killer", "F. Scharmer", "K. J. Brunner", "J. Knauer", "S. Bois", "the W7-X Team" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
terry@psfc.mit.edu Massachusetts Institute of Technology - Plasma Science and Fusion Center, Cambridge, MA 02139, USA Max-Planck-Institut für Plasmaphysik, Greifswald, Germany Massachusetts Institute of Technology - Plasma Science and Fusion Center, Cambridge, MA 02139, USA Massachusetts Institute of Technology - Plasma Science and Fusion Center, Cambridge, MA 02139, USA [Also at] Department of Physics, Technical University of Denmark, Lyngby, Denmark Max-Planck-Institut für Plasmaphysik, Greifswald, Germany Max-Planck-Institut für Plasmaphysik, Greifswald, Germany Max-Planck-Institut für Plasmaphysik, Greifswald, Germany Max-Planck-Institut für Plasmaphysik, Greifswald, Germany Max-Planck-Institut für Plasmaphysik, Greifswald, Germany Max-Planck-Institut für Plasmaphysik, Greifswald, Germany Max-Planck-Institut für Plasmaphysik, Greifswald, Germany Laboratoire de Physique des Plasmas, Ecole Polytechnique-CNRS-Univ Paris-Sud-UPMC, Palaiseau, France Max-Planck-Institut für Plasmaphysik, Greifswald, Germany A system for studying the spatio-temporal dynamics of fluctuations in the boundary of the W7-X plasma using the “Gas-Puff Imaging” (GPI) technique has been designed, constructed, installed, and operated. This GPI system addresses a number of challenges specific to long-pulse superconducting devices like W7-X, including the long distance between the plasma and the vacuum vessel wall, the long distance between the plasma and diagnostic ports, the range of last closed flux surface (LCFS) locations for different magnetic configurations in W7-X, and management of heat loads on the system's plasma-facing components. The system features a pair of “converging-diverging” nozzles for partially collimating the gas puffed locally ≈135 mm radially outboard of the plasma boundary, a pop-up turning mirror for viewing the gas puff emission from the side (which also acts as a shutter for the re-entrant vacuum window), and a high-throughput optical system that collects visible emission resulting from the interaction between the puffed gas and the plasma and directs it along a water-cooled re-entrant tube directly onto the 8 x 16 pixel detector array of the fast camera. The DEGAS 2 neutrals code was used to simulate the H_α (656 nm) and the HeI (587 nm) line emission expected from well-characterized gas-puffs of H_2 and He and excited within typical edge plasma profiles in W7-X, thereby predicting line brightnesses used to reduce the risks associated with system sensitivity and placement of the field of view. Operation of GPI on W7-X shows excellent signal to noise ratios (>100) over the field of view for minimally perturbing gas puffs. The GPI system provides detailed measurements of the 2-dimensional (radial and poloidal) dynamics of plasma fluctuations in the W7-X edge and scrape-off layer, and in and around the magnetic islands outside the LCFS that make up the island divertor configuration employed on W7-X. 
Gas-Puff Imaging on the W7-X Stellarator Delta Author May 20, 2024 ======================================== § INTRODUCTION The understanding of edge and scrape-off layer (SOL) transport in magnetically confined plasmas continues to be of crucial importance in the development of a viable fusion power plant. It affects critical aspects of energy and particle exhaust handling, e.g., divertor and first-wall lifetimes, and fuel recycling, as well as critical aspects of core plasma performance (via the pedestal) and fueling. It has long been recognized that turbulence is important to edge transport, and the past two decades have seen significant advances in our understanding of edge phenomena in tokamaks and stellarators. These advances have been made possible by the combination of advanced theoretical/computational models and detailed diagnostic information. In particular, a number of imaging diagnostic systems yielding 2-dimensional information have been developed for the visualization of turbulent dynamics, including Beam Emission Spectroscopy (BES) <cit.>, Electron Cyclotron Emission Imaging (ECEI) <cit.>, Microwave Imaging Reflectometry (MIR) <cit.>, and (most importantly in the context of this report) Gas-Puff-Imaging (GPI) <cit.>. GPI is a mature diagnostic technique which has been of particular importance for identifying the structure and dynamics of edge phenomena in tokamaks: Alcator C-Mod <cit.>, NSTX <cit.>, ASDEX-Upgrade <cit.>, TEXTOR <cit.>, EAST <cit.>, and TCV <cit.>, and in a reversed-field-pinch - RFX-Mod <cit.> - see review of GPI by Zweben, et al. <cit.>. This report describes the design, component characterization, and performance of a GPI system deployed on the Wendelstein 7-X (W7-X) stellarator <cit.>. GPI has proven to be valuable for understanding a number of key physics issues of the plasma boundary and the SOL in tokamaks, including but not limited to the following examples: the dynamics of highly intermittent fluctuations in the SOL, often referred to as filaments or blobs <cit.>, the dominant role of these filaments in the far-SOL, see e.g. <cit.>, the development of models for filament intermittency and filament dynamics <cit.>, dynamics during the L-H transition <cit.>, SOL dynamics during ELMs <cit.>, the role that filamentary radial transport plays in the tokamak density limit <cit.>, the nonlinear turbulent kinetic energy transfer from the background turbulence into sheared quasi-static flows <cit.>, zonal flows and geodesic acoustic modes <cit.>, and the characterization of coherent electromagnetic modes <cit.>. While filaments are typically present in the W7-X SOL, their transport and intermittency statistics are very different from what is found in tokamaks <cit.>. The presence of edge magnetic islands in W7-X appears to play a role in the filament dynamics, as does the field-line averaged curvature <cit.>. 2D imaging by GPI at the outboard island in W7-X will help to elucidate those dynamics and the effects of the islands on the filaments <cit.>. Regarding edge coherent modes in W7-X, a number have been observed <cit.>, and GPI will further characterize and investigate those modes. §.§ Key components of Gas-Puff Imaging The GPI technique is well-described in the GPI review publication <cit.>. We reiterate the essentials and key components of the technique here. A “gas puff" component provides a source of hydrogenic or helium atoms with a limited toroidal extent at the edge of the plasma. (A vertical “sheet" of atoms is the ideal.) 
As the atoms from the puff penetrate the edge plasma, line emission is excited by the local plasma electrons. Under conditions <cit.> typically met in the edge plasmas of stellarators and tokamaks, the line emission from H_α (when puffing H_2) or the 3^3D - 2^3P transition of HeI at 587 nm (when puffing He) will represent excitation by a unique n_e and T_e in the local plasma and therefore also the combined effects of fluctuations in those quantities. The line emission is collected from the region “illuminated" by the gas puff and conveyed, by the “light collection" component, to a fast-framing 2-dimensional detector, the “fast camera" component. The viewing geometry is important. Ideally, the viewing chords should be aligned with the magnetic field lines at the gas puff location since the fluctuations are primarily field-aligned and the most interesting viewing field is therefore perpendicular to them. Any toroidal extent of the emission from the gas cloud will limit the spatial resolution in the images compared to that obtainable using the ideal “sheet” illumination. While the GPI concept is relatively simple, the realization of an actual system on a magnetic-confinement fusion-relevant device like W7-X is quite challenging as will be discussed below. The CAD model of these GPI system components, as realized on W7-X, is shown in Figure <ref>. §.§ Challenges for GPI on W7-X W7-X is a large superconducting stellarator which provides plasma parameters and engineering demonstrations relevant to a future stellarator power plant. Its design was based on optimizing seven criteria, one of which is reduced neoclassical transport of the thermal plasma <cit.>. It is developing the capability to maintain plasma durations of thirty minutes, and plasmas of 8 minutes duration and over 1 GJ of input (and exhaust) energy have already been realized <cit.>. W7-X can accommodate a variety of magnetic configurations and utilizes the magnetic island divertor concept for heat and particle exhaust handling <cit.>. Elucidation of the islands' role in SOL transport is a primary goal for GPI investigations. These realities present significant challenges for implementing GPI on W7-X. As noted above, the “gas puff” component must be in-vessel and close to the plasma boundary. Ports with tangential views of the plasma that fulfill the requirement of field-aligned viewing chords are too small for the light collection requirements. Therefore, the “light collection" component must enter from one of the large horizontal ports and have parts relatively close to the plasma. These port flanges are roughly 2 m radially from the plasma edge in order to accommodate the coils and cryostat. Furthermore, the variety of magnetic configurations created in W7-X results in a range of last closed flux surface (LCFS) locations and island geometries that affect the decisions about the placements of the nozzle and field of view (FOV). We summarize the design challenges for GPI on W7-X below. We believe that each represents a challenge not faced by any current or previously existing GPI system. * The variety of available SOL and island locations in the W7-X configuration space requires a large distance (∼0.1 m) between the “gas puff” component and the field of view in most of those configurations. 
This, in turn, makes it highly desirable that the flux of atoms/molecules from the nozzle be somewhat collimated in the toroidal dimension in order to minimize the loss in spatial resolution due to smearing of field-aligned fluctuation features brought about by chordal integration through the gas cloud. * The variety of LCFS locations and island locations makes it desirable to have the capability to change the FOV. * Longer pulse/steady-state higher-power operation is planned for W7-X, and plasma radiation from those discharges will result in large heat loads on in-vessel diagnostics with plasma-facing components. These are required to be designed to withstand sustained radiative heat fluxes from the plasma boundary of up to 100 kW/m^2. For GPI, this means thoughtful heat load management for its in-vessel components, especially the vacuum window, plasma-facing components, and nozzle structure. * The heat load concerns precluded the use of in-vessel optical fibers that would transmit light gathered from imaging optics mounted (say) on the vessel wall to the ex-vessel “fast camera” component. This required that re-entrant tubes of ∼2 m radial extent be used to provide the view of the gas puff and transmit the image to the “fast camera”. In addition to the challenges specific to W7-X, there are challenges that must be overcome for any successful GPI realization, i.e.: * Puff enough gas and gather enough light on turbulence timescales (∼1 μ s) for good signal-to-noise in the images. * Image with a spatial resolution good enough to discern the turbulence spatial scales of greatest interest. This includes minimizing the toroidal spread of the gas puff at the FOV of the light collection component as well as choosing the spot size of a detector pixel in the object plane of the collecting optics, i.e. in the focal plane in front of the “gas puff” component/nozzles. * Do not puff gas in amounts that will significantly perturb either the global plasma or the n_e and T_e of the region whose turbulence characteristics are being measured. §.§ Design criteria for Gas-Puff Imaging on W7-X The challenges listed in Section <ref> formed the basis for the design criteria of the system that was ultimately realized. Those criteria are summarized in Table <ref>, along with the final design decisions. § REALIZATION OF GPI ON W7-X In this Section, we describe the details of the various GPI components as realized on W7-X and provide, in some cases, reasons for the design choices. §.§ The fast camera In order to maximize the chances of satisfying the crucial criterion of good signal-to-noise in the images with minimal plasma perturbation by the gas puff, we chose to image the collected light directly onto detectors with optimal sensitivity and optimal signal-to-noise at low light incidence. Avalanche photodiodes (APDs) fulfill these sensitivity criteria <cit.>. Dynamic range flexibility is provided by changing the high-voltage bias applied to APDs. A camera that utilizes arrays of APDs and provides on-board high-voltage bias control, adequate magnetic field shielding (up to 100 mT), onboard Peltier cooling of the APD arrays, and onboard 10 Gbit optical Ethernet data-transfer capability is commercially available from Fusion Instruments, Kft <cit.>. The Fusion Instruments APDCAM-10G camera was provided with a custom APD array of 8×16 pixels, utilizing four Hamamatsu S8550 4×8 pixel array chips <cit.> in the 8x16 configuration shown in Figure <ref>. 
All APD pixels are covered by plastic microlenses that increase the effective packing fraction from ∼50% to ∼90%, make the effective pixel size 2.3×2.3 mm, and extend the full detector area to a 20.8×38.3 mm rectangle. The noise on the 14-bit (16384 counts maximum) signals is ≃ 25-30 counts at the controlled detector temperature of 25^∘ C. The camera can be read out to the data archive at frame rates of up to 4 Msamples/s, although 2 Msamples/s has been the typical frame rate. A 50×50 mm-square bandpass interference filter is mounted 14 mm in front of the detector plane. Special high transmission (97-99%) filters <cit.> are used (for H_α: center-λ=657 nm with 5.2 nm passband, for He I: center-λ=587.4 nm with 4.4 nm passband). Figure <ref> also shows the five high-brightness LEDs and two 1-mm fibers that are located in the detector plane. These are used (during periods of in-vessel access) as light sources for back-illumination through the collection optics onto a target at the toroidal angle of the gas-puffing nozzles that defines the “FOV registration plane”. The next subsection describes how this back-illumination procedure achieves an accurate registration of the FOV in W7-X vessel coordinates. The two fibers can also be used to couple collected light to spectrometers if visible spectra from the FOV region are desired. §.§ The field of view The usefulness of GPI is limited to plasma regions where there is enough atomic line emission from the puffed gas to allow imaging at the appropriate timescales (∼1 μ s). In W7-X, this means GPI is limited to the SOL and regions just inside the LCFS, hence the second constraint listed in Table <ref> that the FOV center be within 0 mm and 60 mm radially out from the LCFS. Following the choice of detector in the fast camera (pixel spacing ≃2.3 mm) and a desired spatial resolution (∼5 mm) compatible with the cross-field size scale expected of the SOL filamentary structures (∼10 mm) <cit.>, the demagnification of the collection optics was chosen to be ∼0.5 (actual value is 0.510 - see Section <ref>). The size of the FOV in the object plane of the collection optics is thus 41 x 75 mm. The toroidal location of the GPI “FOV registration plane” is in front of the nozzle. The availability of horizontal ports and mounting choices along with the last three constraints listed in Table <ref> determined the toroidal location of the nozzle at 283.7^∘, i.e. 13.7^∘ clockwise from the “bean” cross-section in the 5th of W7-X's five-fold symmetric modules (M5). The cross-section of the “standard” magnetic field configuration at that toroidal angle is shown in Figure <ref>. The FOV-positioning decisions were made in this configuration because it is the one most frequently produced on W7-X. As can be seen in Figure <ref>, two GPI fields-of-view can be selected for any given experimental run-day by rotating the camera by 90^∘ about the optical axis, a feature built into the camera-mounting apparatus. In the “default” FOV orientation, the long dimension is tangential to the local flux surfaces and thus in the poloidal direction, yielding 8 radial columns of pixel views, each with 16 poloidal views. Alternatively, rotation of the fast camera about the optical axis into the “rotated” orientation provides 16 radial columns with 8 poloidal views each, covering a greater radial range and including views inside the LCFS. The camera rotations are reproducibly achieved in ∼30 minutes of work at the camera location. 
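The first-order field-of-view numbers quoted above follow directly from the detector geometry and the optical demagnification; the short Python sketch below is only a check of that arithmetic under the stated values (it is not part of the diagnostic software) and reproduces the in-plane pixel footprint and the FOV size for the two camera orientations.

# First-order FOV arithmetic from the numbers quoted in the text.
M = 0.510                      # demagnification of the collection optics
pixel_mm = 2.3                 # effective APD pixel pitch at the detector, mm
det_w, det_h = 20.8, 38.3      # full 8 x 16 detector area, mm

pixel_object = pixel_mm / M            # pixel footprint in the object plane
fov_w, fov_h = det_w / M, det_h / M    # FOV in the object plane

print(f"pixel footprint ~ {pixel_object:.1f} mm")   # ~4.5 mm
print(f"FOV ~ {fov_w:.0f} x {fov_h:.0f} mm")        # ~41 x 75 mm

# "default" orientation: long side poloidal -> 8 radial x 16 poloidal views
# "rotated" orientation: long side radial   -> 16 radial x 8 poloidal views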
The “default” FOV, shown in red in Figure <ref>, includes an island O-point in the “standard” configuration (one of the design criteria) and is entirely in the SOL. The choice of its radial location was also informed by (time-independent) modeling of emission resulting from the interaction of the SOL plasma with the gas puff (see Section <ref>). The FOV was placed so that the radial location of the peak line emission predicted by the modeling was just inside the outer edge of the FOV under “typical” SOL conditions. Accurate registration of the FOV is achieved by mounting a target onto the nozzle structure whose location is known in W7-X vessel coordinates from in-vessel metrology. This target defines the “FOV registration plane” which is orthogonal to the front face of the nozzle structure and is essentially in the ϕ=274^∘ (R,Z) plane. The geometry is illustrated in Figure <ref>. The fast-camera LEDs and fibers (see Figure <ref>) are back-illuminated through the collection optics and imaged onto the target by the optics. Since the pixel positions relative to the LEDs/fibers are known, the locations viewed by the camera pixels are accurately registered in W7-X coordinates, thereby defining the FOV in this plane. In the “standard” configuration, the radial/normal-to-flux-surface extent of the “default” FOV is from 5 mm to 44 mm outside of the LCFS. For this and the other common W7-X magnetic configurations, the FOV (relative to the LCFS) are listed in Table <ref>, where it can be seen that there is good coverage in all but the “high iota” and “high mirror” configurations, for which the emission in the outer views is weak because they are so far out in the SOL. §.§ The turning mirror, shutter, and vacuum window The gas puff emission is viewed from the horizontal port adjacent to the one holding the nozzle structure (Figure <ref>). The views are designed to be as close to B-field-aligned as is feasible. We set as a constraint (the first in Table <ref>) that the sight lines be within 15^∘ of the field lines local to the gas puff. A turning mirror is required to fulfill this requirement, which we decided to combine with the need to shutter the vacuum window of the re-entrant tube holding the collection optics. We mounted a polished stainless steel mirror (flatness of 1 λ at 630 nm and 60/40 scratch dig) on the back side of a hinged shutter plate and control the opening/closing of the shutter and mirror with a pneumatically actuated linear motion vacuum feed-through. The full-open angle of the shutter sets the location of the FOV, so it is critical that hard stops provide a very reproducible “open” condition. Repeated testing showed that the views in the “FOV registration plane” were reproducible to within ≤±1 mm. The desire to field-align the sight lines needed to be balanced with the need to minimize/eliminate interaction between the shutter and the plasma, including fast ions from the neutral beams. 
This compromise resulted in the following: 1) we placed the center of the open turning mirror at W7-X coordinates [R,ϕ,Z]=[6363 mm,4.900 rad,-210 mm]; 2) angles between the sight lines and the B-field local to the gas puff are therefore from 8 to 11^∘; 3) since we were free to choose the vertical location of the turning mirror, the sight lines and the local field lines piercing the “FOV registration plane” are at essentially the same angle (±2^∘) relative to horizontal planes; 4) the shutter is fully recessed into the port extension when closed; and 5) the part of the open shutter closest to the plasma is ∼115 mm from the LCFS of the largest of the most common magnetic configurations (i.e. the “low-iota” configuration). Reciprocating probe measurements have shown that the non-radiative power fluxes at locations >100 mm outside the LCFS surface are negligible <cit.>. Nonetheless, radiative heat fluxes on the plasma-facing shutter of up to 100 kW/m^2 had to be designed for, an issue discussed in Section <ref>. The 98 mm clear-aperture vacuum window in the re-entrant tube is a critical component for machine safety. The two main threats to the window integrity are radiative heating by the plasma and stray ECRH power. These risks were mitigated by recessing the window by ∼660 mm up the water-cooled re-entrant tube, whose inner diameter is 100 mm up to that window location. The fused silica window has a broadband anti-reflection coating and is rated for temperatures up to 200^∘C and for heating rates up to 25^∘C/min. <cit.>. §.§ The collection optics In addition to the turning mirror, three lenses in the re-entrant tube on the atmosphere side of the vacuum window make up the “collection optics”. They collect light from sight lines passing through the region in front of the nozzles, transmit it up the re-entrant tube and focus it onto the detectors. It was a primary design priority to collect as much light as was feasible, given the constraints of the size and availability of windows and lenses. This is accomplished using three 100 mm diameter plano-convex glass lenses with broadband anti-reflection coatings on all surfaces. The optical design is shown in Figure <ref> and is summarized here: the front lens, L1, is located just behind the vacuum window and renders light from the object plane parallel so that it passes down the tube. As illustrated in Figure <ref>, the object plane intersects the vertical “FOV registration plane” (normal to the nozzle face) at their respective center points. The “FOV registration plane” is essentially the ϕ=274^∘ (R,Z) plane, while the object plane is rotated by 20^∘ about a vertical axis and by 4.2^∘ about a horizontal axis with respect to that plane. In other words, the optical axis strikes the “FOV registration plane” at a downward angle of 4.2 deg, while the total angle between the optical axis and the surface normal of this plane is 20^∘. The focal length of L1 is 1475 mm, equal to the distance from L1 to the “FOV registration plane” along the optical axis. Lenses L2 & L3 are placed together ∼665 mm beyond L1 and focus the parallel light onto the detector plane with an effective focal length of 753 mm, yielding a demagnification of 0.510. The 20^∘ angle between the “object plane” and the “FoV registration plane” results in some differences in the optical properties across the FOV. The (geometrical) optics imaging properties of the system have been obtained using the ZEMAX-OpticStudio <cit.> commercial software. 
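As a rough cross-check of the light-collection figures given for these optics, the sketch below estimates the demagnification from the two effective focal lengths and the on-axis etendue from the aperture stop. It ignores vignetting and uses a small-angle approximation, so it is an estimate under those assumptions rather than the ZEMAX result described next.

import math

f_L1 = 1475.0      # mm, focal length of L1 (object side)
f_L23 = 753.0      # mm, effective focal length of the L2+L3 pair (image side)
M = f_L23 / f_L1   # demagnification ~0.510

pixel_mm = 2.3                  # effective pixel size at the detector, mm
spot_mm = pixel_mm / M          # ~4.5 mm pixel footprint in the object plane

d_stop = 96.0                   # mm, clear aperture of the L1 lens mount (aperture stop)
omega = math.pi * (d_stop / 2) ** 2 / f_L1 ** 2   # solid angle seen from the object plane, sr
etendue = spot_mm ** 2 * omega                    # mm^2 sr, on-axis pixel, no vignetting

print(f"M ~ {M:.3f}, spot ~ {spot_mm:.1f} mm, etendue ~ {etendue:.2e} mm^2 sr")
# -> roughly 7e-2 mm^2 sr, consistent with the on-axis etendue quoted below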
The modeled images of points in the detector plane back-imaged into the “FoV registration plane” have diameters (90% enclosed energy) ranging from ≈ 0.08 mm to ≈0.3 mm, with the largest spot sizes occurring at largest and smallest R_maj locations in the FOV. However, this imaging only increases the FWHM of back-imaged pixels from a perfectly-imaged 4.5 mm to 4.6 mm (minimum) and 4.7 mm (maximum) respectively. Thus, we will treat the optical resolution in the “FoV registration plane” as ≈4.65 mm. Thus 2.3 × 2.3 mm pixels collect essentially all of their incident light from 4.65 × 4.65 mm areas in the “FoV registration plane”. The aperture stop in the system is the 96 mm diameter clear aperture of the L1 lens mount. The etendue for an on-axis pixel is 6.5×10^-2 mm^2 ster. Light from off-axis points is vignetted because it is at a small angle with respect to the central ray on being rendered parallel by L1, yielding an etendue for the most off-axis pixels (the 4 corner pixels) of 4.7×10^-2 mm^2 ster. The lenses are secured in the re-entrant tube using a removable stiff frame of angled bars that is secured at each end. This can be seen in the cut-away rendering in Figure <ref>. §.§ The gas puffing component The gas puffing component of the system consists of the nozzles, the nozzles' housing, a capillary feed line, a gas vacuum feed-through, an injection valve, and a gas control system. The first four of these items are shown in Figure <ref>. The gas control system (not shown in Figure <ref>) is mounted on a panel and connects to the injection valve through a ∼1 m long flexible stainless steel (SSTL) hose. We wanted to locate the nozzle structure as close to the plasma as feasible while avoiding direct heating by plasma conduction and convection. As noted in the previous subsection, reciprocating probe measurements showed that conducted/convected power was quite small at locations ⪆ 100 mm into the outboard SOL. Considering the range of LCFS locations of the available W7-X magnetic configurations, we placed the front face of the nozzle housing ≈95 mm radially outward from the LCFS of the outermost of the more common configurations. This places the nozzles, which are recessed 2 mm into the housing, ≈135 mm from the LCFS in the “standard” configuration and ≈110 mm from the FOV center. The vertical center of the nozzles was set to align with the FOV center. Its toroidal location was set by mounting it onto the water-cooled port protection liner, thereby minimally affecting viewing access by other diagnostics utilizing that port. The in-vessel metrology activity mentioned in Section <ref> registered the location of the nozzle housing front face in W7-X plasma vessel coordinates at [R,ϕ,Z]=[6156.4 mm,4.7856 rad,-267.7 mm]. We surrounded the SSTL nozzle body in a graphite housing made from the same material that is used for the W7-X divertor targets. A short section of 1 mm I.D. capillary that couples to the nozzle body via a press fit exits radially out the back of the housing and connects with a 1.9 m length of 1 mm capillary that runs along the side of the port liner to a vacuum feedthrough mounted on the port extension close to the port flange. The puff valve is mounted as close as possible to the feedthrough as part of the effort to minimize the volume between the nozzles and the puff valve since that volume of gas (V_min=3.02 ml) must necessarily flow into W7-X. Thus the minimum gas load on W7-X is p_o× V_min, where p_o is the plenum pressure backing the puff valve. 
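Since the trapped volume between the puff valve and the nozzles sets the smallest possible puff, it is worth making the p_o × V_min estimate explicit; the sketch below does so for a few representative plenum pressures, which are example values rather than prescribed operating points.

V_MIN_L = 3.02e-3           # l, volume between puff valve and nozzles (3.02 ml)

def minimum_gas_load(p_plenum_mbar):
    """Smallest possible gas input per puff, in mbar*l: p_o * V_min."""
    return p_plenum_mbar * V_MIN_L

for p in (340.0, 1000.0, 2500.0):     # mbar, example backing pressures
    print(f"p_o = {p:6.0f} mbar -> >= {minimum_gas_load(p):.1f} mbar*l per puff")
# e.g. a 1 bar backing pressure already commits ~3 mbar*l to W7-X per puff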
The gas control system performs several functions in addition to opening the puff valve at programmed times and durations, which it can do up to four times during a single W7-X shot/program. The gas control hardware consists of a plenum volume, an absolute pressure gauge to measure the pressure in the plenum, a differential pressure gauge to measure the rate of change in the pressure occurring during each gas puff, valves to access or isolate various parts of the system, and a mechanical pump to evacuate the system into the W7-X exhaust line, when necessary. A schematic of the GPI gas control system is presented in Figure <ref>. The timing for the opening and closing of the valves, as well as the monitoring and digitizing of the system pressures, is done by a programmed Red Pitaya's FPGAs/CPU/digitizers. It is mounted on the gas control panel and is interacted with using a Python graphical user interface program running on a network computer. The plenum pressure is manipulated by the valves V5 and V4 connecting it with either the regulated gas bottle or the mechanical pump while feeding back on the pressure of the absolute pressure gauge, AG <cit.>. Measurements of the gas puff flow rate and total amount puffed are accomplished by digitizing the differential pressure from gauge DG <cit.> after closing the normally-open valve V3 for the duration of the gas puffs. With V3 closed, the time history of the pressure difference between the pre-puff pressure and the plenum pressure is measured, and the time derivative of that differential pressure times the plenum volume (V_plenum=0.802 l) provides the gas flow rate out of the plenum. The pressure range of the differential gauge is 0 to 39 mbar, with an estimated error of <1% for typical pressures. Thus the maximum measureable amount of gas input into a single W7-X plasma is limited to 31 mbar l, which is greater than the typical amount delivered in four puffs, ∼12 mbar l or ∼3 mbar l per puff. §.§.§ Nozzle design and gas puff flow rate measurements Having defined and registered the locations of the FOV and the nozzles, the radial distance between the nozzles and the center of the FOV is 110 mm, satisfying one of the design criteria listed in Table <ref>. However, that large distance made improved collimation of the gas cloud a high priority. Another factor in choosing the nozzle design was the requirement of minimizing the perturbation of the plasma by the gas puff, which is essentially a constraint on the gas flow rate. In this subsection, we will present 1) the details of the “converging-diverging” (C-D) nozzle design, which was driven by the collimation and perturbation constraints, 2) its fabrication, and 3) the measured flow rates from the nozzles and comparison to predictions for C-D nozzles. We characterize the degree of collimation using the half angle of the gas cloud cone as it expands from a single nozzle into the relative vacuum, defined here as α_1/2= tan^-1(HWHM/Δ), where HWHM is the half-width at half-maximum of the gas pressure distribution in a normal plane at a distance Δ from the nozzle. The types of nozzles we considered were: unshaped 1 mm diameter capillary tubes, de Laval nozzles providing, in theory, optimal collimation <cit.>, and C-D nozzles <cit.> with good collimation. The de Laval shape was too difficult to fabricate at the size estimated for the desired flow rate. C-D nozzles were difficult, but possible, to fabricate. Unshaped capillaries were a last resort since our only guidance was a measurement from Ref. 
<cit.> of α_1/2≈25^∘ for D_2 at 600 mbar backing pressure, and we desired better collimation than that. We chose to use two nozzles, each with a C-D shape, separated vertically by 35 mm. The two nozzles with that separation were necessary in order to properly “illuminate” the full poloidal FOV in the “default” camera orientation. For the C-D nozzle, one specifies the converging angle of the nozzle shape on the high gas pressure side, the diverging angle on the outflow side, and crucially the “throat” diameter where the cones meet. Under isentropic conditions and with a high enough pressure difference between the input and output sides, Mach 1 flow is achieved at the throat with a flow rate there that is given by <cit.> Ṅ_p = n^throat v^throat A^throat = C(γ) p_o/(k T_o)√(γ R T_o) A^throat where Ṅ_p is the gas flow rate [#/s], n^throat is the gas particle density (at the throat), v^throat is the Mach 1 speed, A^throat is the throat area, C(γ)=0.58 for H_2 (and =0.57 for He) is a constant that relates the gas density and temperature at the throat to the gas density and temperature of the backing reservoir and depends only on γ, the ratio of specific heats for the specific gas (1.41 for H_2 and 1.67 for He), p_o is the backing reservoir pressure in Pa, T_o is the reservoir gas temperature in K, k is Boltzmann’s constant, and R is the ideal gas constant for that gas, e.g. 4124 J/kg/K for H_2. Measurements of α_1/2 on a C-D nozzle with a 350 μm throat diameter <cit.> showed that α_1/2 decreased significantly as the backing pressure increased over the range of interest (∼0.4 to ∼1.3 bar). Since the flow rate is proportional to the backing pressure, we had to balance the improved collimation at the higher backing pressures with the risk of perturbing the W7-X plasma. According to Eq. <ref>, the throat diameter of the C-D nozzle design and the backing pressure determine the gas flow rate. The maximum acceptable gas flow rates were determined by modeling a hypothetical gas puff that perturbs the W7-X global plasma density by 10%. This modeling therefore informs the choice of throat diameter for the C-D nozzle shape. We modeled the total electron inventory in the W7-X plasma, N_e^tot, as being sustained by a recycling source, Φ_rec, that is proportional to the total inventory N_e^tot and balanced by a sink defined by an effective particle confinement time, τ_p. We defined another source, Φ_ext(t), with time-dependence similar to that predicted from finite-element analysis of gas transport occurring during a 50 ms opening of the puff valve to our plenum and accounting for flow through the 1.9 m long feed capillary to two C-D nozzles with throat diameters of roughly 100 μm. The time-profile of the external source was a ∼ 40 ms rise time, a 50 ms flattop, and a 0.5 s exponential decay. This external puff was assumed to fuel the plasma with an efficiency, ϵ_fuel, of 0.8, as will be discussed below. The model equation was thus: dN_e^tot/dt=Φ_rec + 2 ϵ_fuelΦ_ext(t) - N_e^tot/τ_p τ_p was taken to be 300 ms (using Fig. 8 from Ref. <cit.>). Taking N_e^tot=V_plasma C n_el, where V_plasma is the plasma volume (≈30 m^3), n_el is the line-integrated density, and C is the ratio between the volume-averaged and line-integrated density (≈0.8 m^-1), Φ_rec is evaluated using the measured n_el with Φ_ext=0. We note that using essentially the same model and actual W7-X discharges (produced in 2018) into which He gas was puffed using a divertor gas puff system, Ref.
<cit.> evaluated τ_p as (0.258±0.124 s) and the fueling efficiency of those divertor puffs to be in the range of 0.3 to 0.44. Using other 2018 discharges with n_el values of 2.2×10^19, 5.5×10^19, and 9×10^19 m^-2, we modeled the response to an external gas puff with the time-profile described above and the conservative guess for ϵ_fuel of 0.8. The peak flow rates that yielded a 10% increase in the n_el's of those three modeled discharges are listed in Table <ref>. Also shown there are the backing pressures that according to Eq. <ref> would provide those rates through two C-D nozzles, each with a 70 μm throat diameter. A 70 μm throat diameter was the smallest that could reasonably be fabricated. We found that density increases of ∼10% were produced for peak puff flow rates of H_2 molecules (or He atoms) out of each of two nozzles equal to 4×10^19, 14×10^19, and 30×10^19 #/s, respectively. According to Eq. <ref>, these flow rates of H_2 molecules will be realized for C-D nozzles with a 70 μm throat diameter at backing pressures of 34 kPa (0.34 bar), 117 kPa (1.17 bar), and 250 kPa (2.5 bar), respectively. Thus, we specified a 70 μm throat diameter. The half-angle for the diverging cone was specified as 25^∘, since that was seen to be in the optimal range in Ref. <cit.>. A 45^∘ half-angle was chosen for the converging cone. The two nozzles were cut into a trough milled down to 1 mm thickness in a SSTL blank with an overall thickness of 3 mm. Two vendors willing to attempt fabrication to those specifications were identified. One vendor used fine drills with tips cut at the desired angles; the other used a laser to sculpt the shape by varying the focal spot. Both fabrications were to be electro-polished to a surface finish with a Roughness Average Ra better than 0.4 μm. Examination of results with a microscope showed that the mechanical drilling produced far superior results. The cone walls were smooth, and the throat was well-defined, albeit with a diameter of ≈83 μm, larger than the specification. The walls of laser-drilled nozzles were rippled; the throat hole was ragged, and this fabrication was not used. The nozzle plate was then vacuum welded onto a back cover plate that holds a press-fit length of 1 mm I.D. capillary protruding out of the backside of the back plate. The assembly was vacuum leak-tested by pumping on the capillary after temporarily blocking the nozzle holes. An exploded view of the nozzle structure is shown in Figure <ref> and an assembled rendering is shown in Figure <ref>. [Figure: Flow rate measurements of four gases as a function of the measured plenum pressure. The dashed lines are the calculated flow rates using Eq. <ref> and assuming a 93 μm diameter throat for each of the two nozzles. The solid circles are from puffs into W7-X, while the +'s and 's are from lab measurements.] The flow rates of H_2 and He, the gases of interest for GPI, as well as N_2 and Ar were measured multiple times over 4 years using the gas control system. The 1.9 m length of feed capillary was present in the system for these measurements. The results are shown in Figure <ref>. For one set of H_2, He, and N_2 measurements, the puffs were into the W7-X vacuum vessel with the gate valves to the pumps closed and with the vessel pressure measured by a calibrated ASDEX-Upgrade Baratron gauge. Comparisons of the total amount of gas puffed during each sequence at each plenum pressure for each gas showed that (Δ p_GPI× V_plenum) and (Δ p_W7-X× V_W7-X) were the same to within 5%.
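To make the dashed curves in the figure above easy to reproduce, the following short Python sketch (ours, not the authors' code) evaluates Eq. <ref> with the constants quoted in the text; the 93 μm throat diameter is the effective value used in the figure caption, and a reservoir temperature of 293 K is assumed here.

import math

# Illustrative sketch of the choked-flow (Mach 1) estimate, Eq. <ref>:
#   N_dot = C(gamma) * p_o/(k*T_o) * sqrt(gamma*R*T_o) * A_throat
K_B = 1.380649e-23            # Boltzmann constant [J/K]

def choked_flow_rate(p_o_pa, d_throat_m, gas="H2", T_o=293.0):
    """Particle flow rate [#/s] through one converging-diverging nozzle at Mach 1."""
    props = {  # C(gamma), gamma, specific gas constant R [J/kg/K]
        "H2": (0.58, 1.41, 4124.0),
        "He": (0.57, 1.67, 2077.0),
    }
    C, gamma, R = props[gas]
    A_throat = math.pi * (d_throat_m / 2.0) ** 2
    return C * (p_o_pa / (K_B * T_o)) * math.sqrt(gamma * R * T_o) * A_throat

# Two nozzles, H2, at the mid-range backing pressure of 1.17 bar, with the
# 93-um effective throat diameter used for the dashed curves in the figure:
per_nozzle = choked_flow_rate(117e3, 93e-6, "H2")
print(f"H2, 1.17 bar, 93 um throat: {per_nozzle:.2e} #/s per nozzle "
      f"({2 * per_nozzle:.2e} #/s for both nozzles)")

For H_2 at 1.17 bar this gives ≈1.5×10^20 #/s per nozzle, in line with the flow rates discussed above; evaluating the same expression with the as-fabricated ≈83 μm throat gives values roughly 20% lower, which is the level of disagreement with the measurements noted in the list that follows. Note also that the in-vessel cross-check described in the preceding paragraph, comparing (Δ p_GPI× V_plenum) with (Δ p_W7-X× V_W7-X), is independent of Eq. <ref>.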
This provided independent confirmation of the accuracy of the GPI differential pressure measurements. Regarding the flow rate measurements, we note the following: * The flow rates are linear with backing pressure as predicted by Eq. <ref>. * The flow rates are ≈25% larger than what is predicted by Eq. <ref> using the measured throat diameter. We do not know the reason(s) for this. Note that, in the figure, the 93 μm throat diameter was chosen in order to provide a good match to the experimental flow rates. * The scaling among the 4 different gases is approximately what is predicted by Eq. <ref>. It is good for H_2 and He, but somewhat worse for N_2 and Ar relative to H_2 and He. §.§.§ Measurements of gas cloud extent As noted in the preceding subsection, we had reason to believe that C-D shaped nozzles would provide some degree of collimation of the outflowing gas, and that it should improve with increasing plenum pressure <cit.>. We chose to measure this for the nozzles as fabricated. In this subsection, we present the measurements of the spatial distributions of the gas emerging from the nozzles. These distributions were measured using the vacuum chamber and the computer-controlled positioning system capable of scanning a probe in a 2D plane of the VINETA II linear plasma device <cit.> at IPP. By mounting a custom miniaturized hot filament ionization gauge onto VINETA's positioning system, we measured the spatial distributions of the local pressure within the puffed gas cloud as functions of gas backing pressure. The vacuum chamber of VINETA II is roughly cylindrical, ∼1.6 m long and ∼0.97 m in diameter, with a typical base pressure of ∼2×10^-4 Pa. The custom ionization gauge is based on a design from Ref. <cit.>. It consists of three tips protruding ∼3 mm from channels in a long 5 mm diameter ceramic cylinder. One tip is a U-shaped loop of tungsten wire that is heated to thermionic emission and acts as an electron source (the cathode at V=0, i.e., ground) for a second probe (the anode) that is biased (at +90 V) to draw ∼1 mA of electron current. The third probe (the collector) is biased (at -60 V) to collect ions of the gas local to the probe tips that have been ionized by the cathode-to-anode electrons. The anode and collector are separated by ≈1 mm. The collector current is amplified and digitized. The electrical circuit is shown in Figure <ref>. The measured ion currents are small (0 to 0.25 μA), non-linear (but monotonic) functions of the gas density at the probe, and dependent upon the gas species. Calibrations of the ion current response to the pressure of each puffed-gas species were performed by introducing increasing amounts of gas into VINETA II via a needle valve and logging the absolute pressure measured by a Baratron capacitive pressure gauge along with the measured ion current. The calibration range covered the pressure range encountered in the measurements on the GPI puffs into VINETA II (from about 0.005 to 0.5 Pa), and calibrations were performed for each gas species and after each vacuum break. An example of a calibration curve for H_2 is shown in the inset of Figure <ref>. The miniaturized ion gauge probe was scanned spatially within planes that were 54 and 102 mm in front of and normal to the 2-nozzle plate. Additionally, time histories at a single location 54 mm directly in front of the top nozzle for puffs with different valve-open durations were measured, shown in Figure <ref>. The rise time is ≈ 40 ms, and the decay time is ≈ 350 ms. 
The long decay time is consistent with our finite-element modeling of the gas transport through the system, and indicative of the gas in the volume between the valve and the nozzles (V_min) draining out of the nozzles after the valve is closed. Also evident in Figure <ref> are the “flattop” durations that roughly match the valve-open durations ≥0.1 s. 2D time-resolved pressure measurements were made for He puffs at eight backing pressures in the 54 mm plane and at 3 backing pressures in the 102 mm plane, and for H_2 puffs at two backing pressures in the 54 mm plane. For each gas puff, the probe responded to the rapid direct pressure rise within the gas cloud as well as the less swift “base pressure” rise in VINETA II. The latter was therefore measured with the probe well away from the nozzle during the puff and subtracted from the overall signal at each measurement location within the gas cloud. Full 2D scans showed left-right (toroidal) and up-down (vertical) symmetry of the gas distribution around the center of the nozzle structure. An example of a pressure distribution along a cut in the toroidal dimension at the height of the top nozzle is shown in Figure <ref>. The distribution is well described by a Lorentzian whose HWHM is equal to α_1/2^ϕ. The pressure distribution along a cut in the vertical dimension centered at the peak of the pressure in the toroidal dimension is shown in Figure <ref>. It is well described by two Lorentzian distributions separated by 35 mm with equal HWHMs (=α_1/2^Z). The results for the α_1/2^ϕ evaluations vs. backing pressure are summarized in Figure <ref>. The measurements were evaluated during the puff “flattop”, during which time the α_1/2s are smallest. We plot only α_1/2^ϕ since they are the ones that matter for constraining the spatial resolution, but note that the α_1/2^Z values were very close to the α_1/2^ϕ values. We note the following: * The degree of collimation improves with increasing backing pressure, as was also observed in <cit.>. The minimum α_1/2 appears to be ≈12^∘. * Also plotted are the half angles measured for the gas distribution from a single unshaped capillary with a 1 mm I.D. It is apparent that our C-D nozzles improve the collimation by roughly a factor of 2. * The half angles in the plane 54 mm from the nozzles are essentially the same as those in the plane at 102 mm. Recall that the center of the FOV is in the plane 110 mm away from the front surface of the nozzle plate. §.§ Heat-load management We placed the diagnostic's plasma-facing components in locations where heat loading due to plasma conduction and convection was likely to be small. However, radiative loads on those components cannot be avoided, and, as noted in the introduction, all in-vessel components on W7-X must be designed to withstand long radiative emission from the plasma boundary of up to 100 kW/m^2. W7-X has infrastructure for extensive water cooling that we used for active cooling of GPI's re-entrant tube. In addition, the nozzle structure was mounted to the water-cooled port liner assembly. Since the nozzle housing is made from the same graphite that is used for the W7-X divertor tiles and the nozzles are SSTL, we depended upon heat conduction to the port liner and radiative loss to keep the peak finite-element modeled temperature of the nozzle structure ≤700 ^∘C when subjected to a 100 kW/m^2 steady-state power flux emanating from the plasma. Active cooling of the re-entrant tube was provided by 12 mm I.D. 
tubes that are welded to the outside of the re-entrant tube and carry flowing water. There are six channels distributed around the larger diameter tube that holds the optics, and four channels distributed around the 100 mm I.D. tube on which the turning mirror/shutter is mounted. These two cooling circuits are independently fed by the W7-X cooling water at pressures of ∼5 bar and a temperature of ∼20 ^∘C. Tests and modeling of the six-channel circuit showed that the realized flow rates (∼1.5 m/s) in the different channels are similar, with a full exchange of water every 2-3 s. The cooling tubes run lengthwise and are combined in a collector ring around the vacuum window position on the larger tube and in a “cold foot” on the smaller tube to which the mirror/shutter attaches. The collector ring provides additional local cooling to the window, which is a critical machine safety component. Modeling shows that a 100 kW/m^2 steady-state power flux emanating from the plasma and intercepted by the re-entrant tube in the actual geometry increases the water temperature in the circuit by ∼5 ^∘C for an assumed flow rate of 2 m/s. The main design difficulty arises from the fact that the component intercepting the most heat flux, i.e., the movable shutter/turning mirror, could not be actively cooled. The design solution for keeping the shutter/mirror within an acceptable temperature bound was to use flexible “straps” that improve the thermal conduction between the movable shutter and the water-cooled “cold foot” described above. The flexible “straps” are made of stacks of 425 very thin (25 μm) Cu foils <cit.>. A labeled photo of the shutter/mirror component is shown in Figure <ref>. The “front” ends of each of the two straps are coupled to a Cu plate sandwiched between the SSTL mirror on the inside of the shutter and the SSTL plasma-facing side of the shutter. Sigraflex pads <cit.> (thin, compressible, high-vacuum-compatible graphite pads with good thermal conduction) are used at each coupling interface. The measured thermal conductance from the sandwiched Cu plate surface to the cold foot was ≈1.0 W/K. With the straps, the finite-element modeled response of the shutter temperature to plasma boundary emission of 100 kW/m^2 yields the temperature distribution on the shutter assembly shown in Figure <ref>, where the shield covering the straps (not shown in Figure <ref>) reaches a peak steady-state value of ≈700 ^∘C, and the plasma-facing surface of the shutter reaches ≈500 ^∘C in the modeling. The straps reduce the steady-state temperatures by 20-30% at high power loading but result in a much greater reduction through heat conduction at lower loads where the radiative loss from the components is small. In order to assess the actual temperatures on the shutter and on the vacuum window, thermocouples were placed on the front face of the shutter and on the quartz window surface just inside the window edge. These temperatures are monitored at all times during W7-X operation. § MODELING OF THE EXPECTED LINE EMISSION AND DETECTED SIGNAL It was important to model the expected interaction between the W7-X edge plasma and the gas puff to reduce the risk that we would not collect enough light to perform the imaging and to guide the radial placement of the FOV. The modeling tools that were used were the 3D Monte-Carlo neutrals code DEGAS 2 <cit.> and the 1D kinetic neutrals code KN1D <cit.>. Both codes compute steady-state solutions.
DEGAS 2 uses user-input profiles of T_e and n_e with Monte-Carlo neutrals (H_2 or He) launched in a user-defined geometry and calculates, among other things, the 3D emissivities of H_α or He I - 587 nm line radiation. A viewing geometry is also input, and chord brightnesses in that geometry are computed from the emissivities. Even though DEGAS 2 is 3D for the neutrals and the emissivities, it uses an axisymmetric target plasma. To adapt this constraint to the non-axisymmetric W7-X geometry, we assumed that the relevant GPI region was small enough that it could be approximated by a greatly expanded axisymmetric Alcator C-Mod equilibrium, whose equilibrium files were readily available. The LCFS of the expanded equilibrium was placed at a major radius R of 6.015 m and the gas puff was centered vertically at the midplane. Thus, the plasma there was up-down symmetric and not tilted as in the W7-X case (see Figures <ref> and <ref>). We specified the nozzle spatial locations relative to the nominal LCFS of the equilibrium and the T_e and n_e profiles relative to the same LCFS. We specified the viewing geometry by maintaining the W7-X viewing angles relative to the field lines local to the design FoV at the gas cloud as well as the path lengths for integration of the emission through the gas cloud as they would be on W7-X. The T_e and n_e profiles were approximated using measurements from the W7-X reciprocating probe system in the “standard” magnetic configuration <cit.> made on discharges produced during the previous experimental campaign, OP 1.2. Simulations of H_2 and He gas puffs with cloud half-angles of 18^∘ and flow rates of 4.3 × 10^19 #/s were performed. The resulting chord brightnesses in the simulated viewing geometry are shown in Figure <ref> for H_α and Figure <ref> for the He I (587 nm) line. The 1D kinetic neutrals code, KN1D, was run for hydrogen, where it computes the neutral transport, as well as neutrals-plasma interactions (dissociation, ionization, etc.) with stationary 1D radial plasma profiles, and yields, among other things, radial profiles of H, H_2, Lyman_α emission, and H_α emission. It considers both the atoms and the molecules. It is run with a boundary condition specifying the radially-inward neutral particle flux at a boundary far from the plasma. It was also run for He, yielding radial profiles of He I atoms and He I 587 nm line emission. We used the same T_e and n_e profiles that were used for the DEGAS 2 simulations. Since KN1D is a 1D simulation and does not consider the cloud shape and viewing geometry, we were primarily interested only in the radial profile of the line emissivities, and those profiles are compared with the brightness profiles from the DEGAS 2 simulations in Figures <ref> and <ref>. The KN1D emissivity profiles have been scaled to roughly match the DEGAS 2 brightness profiles. We primarily learned two important things from these simulations: * The predicted brightnesses over the FOV are in the range ∼0.2 to ∼2.5 mW/cm^2/ster for puff flow rates of ≈4 × 10^19 #/s into “standard” configuration plasmas. Recall (Table <ref>) that modeling indicated that this flow rate would produce a 10% density perturbation of a low-density W7-X plasma. * Locating the FOV in its design location places the maximum in the predicted brightness profile near the outside edge of the “default” orientation FOV (see Figure <ref>).
Using this brightness range, we estimated the expected signal levels on the detectors in order to ensure that the system is sensitive enough to perform its imaging function. The estimated photon flux onto an on-axis pixel is Γ_on-axis (ph/s) = B^ph R_m T_win T_lenses T_filt A_detΩ_image ≈ 1.19×10^12 × B^mW (for H_α) ≈ 1.06×10^12 × B^mW (for He I 587 nm), where B^ph is the brightness in units of [photons/s/unit area/ster], B^mW is the brightness in units of [mW/cm^2/ster]; R_m is the reflectivity of the turning mirror; the Ts are the transmissions of the window, lenses, and filter respectively, and A_detΩ_image is the on-axis etendue. The estimate (supplied by the manufacturer) of the APDCAM detector sensitivity at a mid-range bias-voltage of 360 V was: 7×10^6 ph/s corresponds to 1 digitization count. Thus an H_α brightness of 1 mW/cm^2/ster with a bias-voltage of 360 V would result in a 170,000 count signal level. Since this is 10.4× greater than the 14-bit full-scale level, we were assured that the system sensitivity was more than adequate and that a neutral density filter may be required in the system to reduce the light levels. This is addressed in subsection <ref>. § PERFORMANCE OF THE W7-X GPI SYSTEM The GPI system was installed on W7-X during the 2021-2022 time period between experimental campaigns OP 1.2 and OP 2.1 and commissioned just prior to the start of OP 2.1. Before describing the performance details of the key GPI components and the images obtained, we note that GPI measurements were made on over 80% of the viable W7-X shots/programs during the OP 2.1 campaign (November 2022 through March 2023), i.e. ∼1800 gas puffs distributed over ∼800 discharges. We address the machine safety issues first. Operationally, we chose to open the shutter at the beginning of a run day, i.e. not cycling it (closed-open-closed) for each program. The maximum temperature increase registered by the shutter thermocouple due to a single plasma program was 55 ^∘C in response to a 113 sec long P_in=5 MW plasma with a 100% radiation fraction. An initial rapid decay from that temperature increase, followed by a much longer decay time, indicates that the longer decay time is more representative of the bulk temperature of the shutter. Thus, the maximum bulk temperature rise for the shutter is probably significantly lower than 55 ^∘C. In any case, heat loads from OP 2.1 plasmas were not a problem for the shutter/mirror structure, implying as well that the radiative fluxes from the OP 2.1 plasmas are much less than 100 kW/m^2. This includes the plasma program with 1.3 GJ of input power, a W7-X record to date <cit.>. Furthermore, the maximum temperature rise of the vacuum window from any W7-X plasma during the campaign, as measured by the window thermocouple, was 1.4 ^∘C; the window, therefore, also appears to be robust to operational thermal loading. Since a key heating system for W7-X is the Electron Cyclotron Resonance Heating (ECH) system (providing up to 7.5 MW) and since the GPI re-entrant tube is in a port adjacent to an ECH launcher port, we were also concerned about a heat load on the fast camera from stray ECH radiation that propagates up the re-entrant tube to the camera. We were not confident in the results from attempts to model this possible heat load on the camera. Thus, during OP 2.1, we measured the ECH heat flux through the aperture immediately in front of the camera using an ECH bolometer provided by the ECH group <cit.>.
With the GPI shutter open and using the same 113-second P^ECH_in=5 MW plasma discharge that resulted in the 55 ^∘C temperature increase on the shutter thermocouple, the stray ECH power flux passing through the aperture just in front of the camera was found to be ≈ 70 W/m^2, a level much smaller than one that would damage the camera. In the two following subsections, we review the performance of the GPI system in three crucial areas: the perturbation of the gas puff on the global W7-X plasma, the signal-to-noise in the images, and the spatial resolution of the images. §.§ The gas puff We quantify the perturbation to the W7-X plasma from the GPI gas puff by examining the line-integrated density obtained from the W7-X single channel dispersion interferometer system <cit.>. This measurement does not pass through the localized puff-plasma interaction region. The perturbation analysis is complicated by W7-X's excellent density feedback control that uses the line integrated density, n_el(t), as the sensor. The results of the analysis are summarized in Figure <ref>, where the maximum change in n_el(t) due to the puff is plotted vs. the total amount of (H_2) gas puffed for both 0.05 s and 0.1 s valve-open durations. The red circles are the results of a plenum pressure scan accomplished during the commissioning phase of the OP 2.1 campaign before the density control feedback system was operational. These perturbations in the range of puff amounts that were typically used during the campaign, indicated by the blue shaded region, are ⪅0.12× 10^18 m^-2. Recalling the perturbation modeling discussed in subsection <ref>, we note that this perturbation is <6% of the lowest density plasmas (n_el≈2× 10^19 m^-2) typically run in W7-X and implies a fueling efficiency, ϵ_fuel, of roughly 0.4. The vast majority of OP 2.1 GPI measurements were done with the density control feedback on. A sub-sample of those for which a perturbation assessment could be made is shown by the black circles in Figure <ref>. Most of those feedback-on points show a maximum n_el(t) perturbation <0.4×10^18 m^-2, indicating that the system successfully corrects for the GPI injection by reducing the fueling flow rate, with the correction occurring within roughly 0.3 s. The feedback-on points above that value are from plasmas very early in the campaign when the control system was being tuned up. We do not know the reason for the two feedback-on outliers at 5 mbar l. They are from the same run day, and the feedback response looks uncharacteristically slow. In any case, the global perturbations are small, typically ≪3%, and this is a primary reason why the GPI measurements were made on such a large percentage of W7-X shots/programs. §.§ Light signals, images, and spatial resolution It was made clear in previous Sections that getting images with excellent signal-to-noise ratio (SNR) was a primary priority. The DEGAS 2 modeling indicated that this would be the case for expected gas puff flow rates, and this proved to be true. Almost all of the OP 2.1 results were for H_2 puffs into H-majority plasmas, although for two run days, helium was puffed into He-majority plasmas. Since the puff perturbations were small, we chose to maintain good spatial resolution by keeping the backing pressures/flow rates in mid-range in order to keep the toroidal spread of the gas cloud small (i.e., with α_1/2≈14^∘ - see Figure <ref>). Under these conditions, we had to decrease the light flux onto the detectors to keep them from saturating. 
This was typically accomplished using a 5% neutral density filter in front of the H_α (or He - 587 nm) interference filter. This also allowed us to maintain high signal levels by changing the APD gains in response to changes in light levels due to different plasma conditions. The gain curve from one of the detector pixels is shown in Figure <ref>; a dynamic range of ≈6 is evident. Typically the high voltage for each of the four regions on the APD array was set independently throughout an experiment day so that signals from each region would be as high as possible without incurring any saturation of the 14-bit (16384 count) digitization range. A typical single pixel signal time history is shown in Figure <ref>. The electronic noise on the digitized signals is 25-30 counts. During the “physics-measurement" portion of the time history (the cyan-shaded region), the average signal level in the Figure <ref> example is ∼10000 counts (about 60% of full-scale), yielding an SNR of 370. For plasmas in the “standard” configuration, typical SNRs vary from ∼50 to ∼500 and can vary significantly over a single image. Nonetheless, these typical values are certainly enough to provide very low-noise images. The “hash” on the signal during the puff duration is from emission fluctuations due to plasma fluctuations and is exactly what GPI is supposed to measure. Brightness levels averaged over the FOV (in the “standard” configuration) are typically in the range of 1 to 2.5 mW/cm^2/ster, with an example of the spatial variation of the time-averaged brightness shown in Figure <ref>. The absolute brightness calibration comes from a full system calibration on the installed diagnostic using an absolute continuum source placed in the FOV during in-vessel access in 2023 and is not based on the estimates used when evaluating Eq. <ref>. The long dimension of the FOV in this camera orientation is essentially poloidal, while the short dimension is radial, i.e. normal to the LCFS. The poloidal distribution of the average brightness is roughly centered in the FOV and varies by about a factor of 2 for a constant radial coordinate. The radial profiles have brightness maxima somewhat inside the outermost edge of the FOV and vary by about a factor of three across the FOV at a given poloidal coordinate. Thus, the emission “coverage” throughout the FOV is quite good for the “standard” configuration, as was desired in the design. We have investigated the quality of our DEGAS 2 and KN1D modeling predictions by comparing them with actual measured profiles even though the predictions were based on n_e and T_e profiles from 2018 W7-X plasmas. The comparisons (made after scaling the measured brightnesses by the ratio of the simulated to measured flow rates) are shown in Figures <ref> and <ref>, with the H_α profile measurement coming from the time-average shown in Figure <ref>. The predictions for H_2 puffs are quite good, given that the predictions were made using profiles from a different plasma. The prediction for the He puff is 3× to 4× larger than what was measured although the profile shape is reasonable. We do not know the reason for this overestimation; it cannot be blamed on the different n_e and T_e profiles. Of course, the radial distributions of the emission change depending on the magnetic configuration, the position of its LCFS relative to the FOV (see Table <ref>) and the local plasma parameters. 
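The arithmetic behind these signal levels can be sketched as follows (a rough consistency check, not the analysis code): the photon-flux conversion factor and detector sensitivity are the values quoted in the modeling section above, the 5% transmission represents the neutral density filter, and the noise level is the 25-30 counts quoted here. Because the APD bias/gain of each region was adjusted during operation, only order-of-magnitude agreement with the quoted ∼10^4-count signals and SNRs of ∼50 to ∼500 should be expected.

# Rough consistency check (illustrative only): expected count levels and SNR for
# typical FOV-averaged H_alpha brightnesses, using the on-axis sensitivity estimate
# from the modeling section and the 5% neutral density filter.
PH_PER_S_PER_UNIT_BRIGHTNESS = 1.19e12   # ph/s per (mW/cm^2/ster), on-axis, H_alpha
PH_PER_COUNT = 7.0e6                     # APDCAM: ph/s per digitization count (360 V bias)
ND_TRANSMISSION = 0.05                   # 5% neutral density filter
NOISE_COUNTS = 27.0                      # electronic noise, ~25-30 counts
FULL_SCALE = 2**14                       # 14-bit digitization range

for brightness in (1.0, 1.5, 2.5):       # mW/cm^2/ster
    counts = brightness * PH_PER_S_PER_UNIT_BRIGHTNESS * ND_TRANSMISSION / PH_PER_COUNT
    note = " (gain would be reduced to avoid saturation)" if counts > FULL_SCALE else ""
    print(f"B = {brightness:3.1f} mW/cm^2/ster -> ~{counts:6.0f} counts, "
          f"SNR ~ {counts / NOISE_COUNTS:4.0f}{note}")

At mid-range brightness this lands near the ∼10,000-count signal level of the example shown above.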
There is sufficient emission in all of the common configurations except “high iota”, where emission from the outer part of the FOV is much reduced and necessitates the removal of the 5% neutral density filter. It is evident in Figure <ref> that there is a non-zero light signal before the gas puff. That is due to intrinsic H_α emission along the line of sight. It is best for imaging local fluctuations that this emission be small compared to the emission due to the gas puff, as it is in the example shown in the Figure. We have examined under which conditions the intrinsic emission may not be sufficiently small. The intrinsic emission is approximately linear with the plasma density but is still small compared to the puff emission as long as the plasma radiative fraction is less than about 60% and the line-averaged density is less than ∼ 8×10^19 m^-2. Above that radiative fraction, the plasmas tend to be detached from the divertor targets, and the puff emission can be small since the radiation front moves radially inwards, out of the FOV. Under those circumstances, the intrinsic emission can be 10 to 50% of the puff emission and the images will be somewhat compromised. To investigate those plasmas, it is best to puff He gas into H_2 majority plasmas, thereby avoiding the intrinsic emission issue completely. However, in the large majority of OP 2.1 W7-X plasmas, this intrinsic emission was very small and did not compromise the images. We now examine the issue of spatial resolution in the images, estimating the wavenumber range, the minimum distance between two features that can be resolved, and the minimum size of the image of a very thin field-aligned emission filament when viewed in the actual geometry, which we define as the “instrumental resolution”. Ideally, this latter quantity would be the size of the region viewed by a single pixel, but as will be shown, there is smearing of filament images due to the toroidal extent of the gas cloud. With the camera in the “default” orientation, 16 pixel views are distributed poloidally. The view centers are separated by Δ_pol≈4.5 mm, resulting in a k_pol range of 2π/(2L) = 0.45 cm^-1⪅ k_pol⪅6.9 cm^-1=2π/(2Δ_pol), where the lower limit is based on the assumption that we can discern one-half of a wavelength using the full poloidal length, L, of the FOV. The pixel-view spacing in the radial dimension is non-uniform, but has an average spacing of 5.3 mm and an estimated k_rad range of 0.84 cm^-1⪅ k_rad⪅ 5.9 cm^-1. To resolve two features of equal brightness as distinct, at least three contiguous pixel views must be examined, with the brightness of the center view being less than that of its neighbors. In our case, this spatial resolution criterion gives a poloidal (radial) resolution of 2Δ_pol≈9 mm (2Δ_rad≈11 mm) respectively, as long as the “instrumental resolution" allows it, as we will examine next. If the gas cloud were a thin sheet in the “FOV registration plane”, then the sizes, as well as [R_maj,Z] locations, assigned to features in the images would be “exact”, limited only by the size of the area viewed by each camera pixel and the accuracy of the spatial calibration. However, the toroidal spread of the gas cloud and the fact that our sight lines are not quite field-aligned within the gas cloud lead to the smearing of field-aligned emission structures in the images. 
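The detector-limited wavenumber ranges and the two-feature resolution criterion quoted above follow directly from the view spacing; the short sketch below (illustrative only) reproduces the poloidal numbers for the “default” orientation, with small differences from the quoted values due to rounding. The non-uniform radial view spacing is represented here only by its 5.3 mm average.

import math

# Illustrative only: detector-limited wavenumber range and two-feature resolution
# from the pixel-view spacing in the "default" camera orientation.
def k_range_cm(n_views, spacing_mm):
    """Return (k_min, k_max) in cm^-1 for n_views views separated by spacing_mm."""
    spacing_cm = spacing_mm / 10.0
    extent_cm = n_views * spacing_cm             # full length of the viewed strip
    k_min = 2.0 * math.pi / (2.0 * extent_cm)    # half a wavelength over the full extent
    k_max = 2.0 * math.pi / (2.0 * spacing_cm)   # half a wavelength over one spacing
    return k_min, k_max

k_lo, k_hi = k_range_cm(16, 4.5)                 # 16 poloidal views, 4.5 mm apart
print(f"poloidal: {k_lo:.2f} cm^-1 < k_pol < {k_hi:.1f} cm^-1")
# Two equally bright features need at least three contiguous views to be resolved:
print(f"two-feature resolution: poloidal ~ {2 * 4.5:.0f} mm, radial ~ {2 * 5.3:.0f} mm")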
Our analysis of this “instrumental” smearing effect relies on our knowledge of the gas puff's double-Lorentzian distribution in planes normal to the nozzle face and knowledge of the sight lines over which the light is collected. Since we are most interested in imaging turbulent structures that are largely field-aligned with k_∥≪ k_⊥, we model field-aligned filaments. We assume the filament emissivities are proportional to the local gas pressure in the plane parallel to the nozzle face, i.e. the planes in which the distributions were measured. This ignores the reality that those planes are not tangent to the local flux surfaces (there is an ≈ 9^∘ angle between them), and we presume that this introduces only second-order errors in the analysis. Each modeled filament is assigned a puncture point in the “FOV registration plane”. We examine brightnesses of a large number of sight lines that image to the detector plane, constructing a high-resolution image of the “FOV registration plane”. An example using a set of ten simulated filaments, each with a 2.4 mm diameter circular cross-section, is shown in Figure <ref>. It shows the “instrumental broadening”, with the images of the thin filaments strongly elongated dominantly in the major radial (R_maj) dimension, leading to significant radial smearing, especially in the top half of the FOV. Also shown in the Figure are the centers of the views of the actual detector pixels (red dots) along with the viewed area of a single pixel (the white rectangle). It is clear that the modeled smearing can be larger than the radial size of the area viewed by a single pixel. Combining the extent of this smearing with that of the areas viewed by actual pixels enables us to estimate (below) the “instrumental resolution”, which must always be greater than or equal to the area viewed by a single pixel. We summarize the key findings from our spatial resolution analyses and modeling: * The detector-limited wavenumber resolution is 0.45 cm^-1 < k_pol < 6.9 cm^-1 and 0.84 cm^-1 < k_rad < 5.9 cm^-1 (with the camera in the “default” orientation). * To resolve two features of equal brightness as distinct in the poloidal (radial) dimension, they must be separated by at least 9 (11) mm (in the “default” orientation). However, the “instrumental resolution” in the top portion of the FOV is such that the 11 mm spatial resolution in the radial dimension is marginal there - see the following. * Modeled images of field-aligned structures show elongation dominantly in the R_maj dimension. The FWHMs along the long axes of these elongated projections range from ≈6 to 10 mm. The presence of such elongated structures in the images may complicate any cross-correlation analysis if they are moving in a direction other than one parallel to either the major or minor axis. * In the bottom half of the FOV, the radial and poloidal FWHM of the modeled images range from ≈2.5 to 4.7 mm and ≈ 2 to 3 mm, respectively, and thus are essentially the same or smaller than the 4.65 mm dimension of the area viewed by a single pixel in that part of the FOV. * Modeled images of field-aligned structures in the top half of the FOV show mostly radial elongation with radial FWHM of ≈4 to 9 mm (in the “default” orientation). * Combining the optical resolution with the modeled smearing yields values of “instrumental resolution” in the range ≈ 4.6 to 5.5 mm in the poloidal dimension and ≈ 6 to 10 mm in the radial dimension (in the “default” orientation). The resolutions are better closer to the nozzle, i.e.
further out into the SOL. * From the modeling, a field-aligned structure piercing the “FOV registration plane” at [R_maj,Z] is imaged with a brightness centroid within the pixel viewing that [R_maj,Z] point in the “FOV registration plane”. In other words, the [R_maj,Z] location assigned to a feature in the images will have pierced that viewed region at the toroidal angle of the “FOV registration plane”. §.§ Example of 2D dynamics measured by GPI We now provide an example illustrating the measurement capabilities of the GPI system for characterizing 2D dynamics of plasma fluctuations in the SOL island region. We investigate a quasi-coherent mode (QCM) present in some W7-X discharges <cit.>. In this instance, the OP 2.1 shot/program is 20221214.013. We analyze images from a GPI puff at 7.0 s when P_ECH=2.0 MW, n_el=4.5×10^19 m^-2, and I_p=2.5 kA. A QCM with a center frequency of ≈18 kHz is present in some pixel views but definitely absent in others. The spatial arrangement of the magnitudes of the signals' power spectral densities (PSD) in the mode is shown in Figure <ref>(a). The mode is present in two radially separated regions. The W7-X magnetic configuration is “standard” so the view of the SOL and island is similar to the one shown in Figure <ref>, but the radial width of the island is significantly larger in this specific case. The island location and size are illustrated in Figure <ref>(a), where the white solid circle indicates the location of the island O-point, and the white dashed lines show the contour for a 2 km connection length. It is reasonable to conclude that the island plays a key role in the mode location. Examining the dynamics of fluctuations both visually in the videos and analytically, we find the motion is very dominantly in the poloidal direction, which allows accurate evaluations of poloidal wavenumbers and phase velocities using Fourier analysis in the poloidal dimension and time. Frequency-normalized (k_pol|f) Fourier spectrograms, two of which are shown in Figures <ref>(b & c), provide a wealth of information. There is a layer of strong radial shear in the poloidal phase velocities, with the propagation direction changing sign from downward (in the two innermost columns of views) to upward (in the radially outboard columns of views where the QCM reappears). Multiple shear layers within the radial extent of the GPI FOV are not uncommon. Focusing on the QCM, we find it propagating poloidally in the innermost column of views (ρ=r_view-r_LCFS = 16.4 mm) at ≈ -2.6 km/s (in the lab frame), while in the ρ= 43.8 mm column of views the phase velocity is ≈ 3.5 km/s. The poloidal wavenumbers of the mode are k_pol≈±0.4 cm^-1, i.e. approximately GPI's minimum resolvable k_pol. The mode is moving poloidally along with the broadband fluctuations. The frequency-FWHM of the mode is ≈7 kHz, thus Δ_FWHM/f_0≈0.4. Finally, we note that there is another QCM present in these data - the ∼2 kHz mode present in most W7-X plasmas <cit.>. It is apparent as the strong very low-frequency feature at k_pol≈0 in the Figure <ref> spectrograms. This brief analysis illustrates some of the phenomena being investigated by GPI on W7-X. § SUMMARY In this report, we describe the considerations that went into the design of a new GPI system for the W7-X stellarator. We also describe in detail the different components that make up the W7-X GPI system, and finally, the performance of the realized system. 
Most important is the result that the system operated reliably during the OP 2.1 experimental campaign, acquiring excellent images for a large fraction of W7-X OP 2.1 plasmas. Detailed analyses of the images will provide an assortment of new and valuable scientific information. We reiterate the key performance results: * Excellent SNR in the 2D (radial and poloidal) images acquired at 2 Msamples/s * Minimal perturbation to the W7-X plasma by the gas puff * Spatial coverage over a ≈40 × 75 mm FOV with a spatial resolution of ≈ 5 mm poloidally and ≈8 mm radially * A FOV that is typically in the SOL and includes the O-point of an outboard magnetic island in some W7-X configurations and an X-point in another configuration. It is also important to emphasize some of the unique features of this GPI system: * A “converging-diverging” nozzle design and fabrication that provides significant collimation of the gas cloud and Mach 1 flow at the nozzle “throat”. * Confirmed collimation of the gas cloud that facilitated placing the nozzle at a safe 110 mm from the center of the FOV. To the best of our knowledge, this separation is the largest of any GPI system to date. * A re-entrant tube of ∼2 m length holding a shutter, a turning mirror, a vacuum window, and collection optics that traverses the radial extent of W7-X's superconducting field coils and their cryostat. * A realized design that features robustness to heat loading on plasma-facing components in a long-pulse/steady-state fusion-relevant device. This includes water cooling of the re-entrant tube. Also implemented are a pair of flexible Cu straps that conduct heat from the movable shutter to a water-cooled “cold foot”. We cite the flexible straps as a possible solution for similar future situations on fusion devices. * Direct imaging of the light collected via the turning mirror and lenses onto the fast camera sensor. By building in the capability to rotate the camera about the optical axis of the collection optics, we have registered the locations of two fields-of-view and can switch between them as desired. The “default” FOV has the long dimension of the view oriented poloidally, while the long dimension of the “rotated” FOV is radial, i.e. normal to the LCFS. The authors thank Dr. Timo Schröder for crucial contributions to the data acquisition process, the entire W7-X staff at IPP Greifswald, and engineer Rui Vieira at the MIT-PSFC. This work was supported in part by the US Department of Energy, Fusion Energy Sciences, Award DE-SC0014251. This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 — EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. The authors have no conflicts to disclose.
http://arxiv.org/abs/2405.10074v1
20240516131126
Planning and Optimizing Transit Lines
[ "Marie Schmidt", "Anita Schöbel" ]
math.OC
[ "math.OC" ]
Planning and Optimizing Transit Lines Marie Schmidt, Anita Schöbel ================================================================================================== Abstract For all line-based transit systems like bus, metro and tram, the routes of the lines and the frequencies at which they are operated largely determine the operational performance of the system. However, as transit line planning happens early in the planning process, it is not straightforward to predict the effects of line planning decisions on relevant performance indicators. This challenge has, in more than 40 years of research on transit line planning, led to many different models. In this chapter, we concentrate on models for transit line planning including transit line planning under uncertainty. We pay particular attention to the interplay of passenger routes, frequency and capacity, and specify three different levels of aggregation at which these can be modeled. Transit line planning has been studied in different communities under different names. The problem can be decomposed into the components line generation, line selection, and frequency setting. We include publications that regard one of these individual steps as well as publications that combine two or all of them. We do not restrict to models built with a certain solution approach in mind, but do have a focus on models expressed in the language of mathematical programming. Keywords transit, line planning, public transport, passenger, model, integer programming § INTRODUCTION What is transit line planning? Colloquially speaking, transit line planning is to decide which public transport lines are to be established when the stops or stations and possible direct connections between them are already known, and demand between these can be estimated. More precisely, the decisions concern the number of lines to be established, their routes and the frequencies with which they should be operated. The result is a line plan together with the frequencies of the lines. The lines themselves can be depicted on a map, as, e.g., the well-known map of the London underground. As an example, a map of the line plan of the city of Berlin (including U-Bahn and S-Bahn lines operational in 2009) is shown in Figure <ref>. Relevance of transit line planning. Designing a network of transit lines and deciding upon their frequencies is a core process in transit planning. The decision on which locations are connected by direct lines can have a big impact on mobility patterns in the considered region. In schedule-based transportation, transit line planning lays the groundwork for the subsequent steps such as timetabling, vehicle- and crew scheduling, delay management and others.
As such, transit line planning is considered the basis for the public transport supply and the decisions taken at this planning stage as fundamental for most performance indicators (such as efficiency, emissions, costs). In particular in a metropolitan area, a transit network coordinates different modes such as buses, light rail, trams, or metros. Nevertheless, line planning is a relevant problem also for railway applications. We hence also refer to some papers which are formulated in the railway context. Different names - clarification. Transit line planning has mostly been researched in transport engineering and in operations research. This explains a hodgepodge of subproblems and variations with different names and notations. In order to give a systematic view, we split transit line planning into three subproblems (see Figure <ref>) showing the overlap of TNDP and line planning. The three subproblems of transit line planning are: * Line generation: a set of candidate lines is constructed. The result is the line pool. * Line selection: From a given line pool, the lines to be established are selected. The result is the line plan. * Frequency setting: Frequencies are assigned to the selected lines. The lines together with the frequencies are called line concept. The transit (route) network design problem (TNDP) (also called (TRNDP)) mostly researched in transport engineering, covers the subproblems line generation and line selection. A line pool as intermediate result is not needed in the TNDP. Frequency setting is left as separate step after the lines have been decided. The problem that integrates line generation, line selection, and frequency setting is known as transit network design and frequency setting (TNDFS) in the transport engineering community, see, e.g., <cit.>. The term line planning, mostly used in operations research, integrates the subproblems line selection and frequency setting. As input, most researchers assume a line pool as given. There are also publications in which the lines are constructed from scratch within the line planning problem, e.g., <cit.>. Such models integrating line generation, line selection and frequency setting are in operations research called line planning with the full pool, since all possible lines can be selected in this setting. Line planning with the full pool and transit network design and frequency setting (TNDFS) are hence synonyms for the integration of all three subproblems. In this paper we use the notation transit line planning whenever line generation, line selection, frequency setting or a combination of them is involved. Development of transit line planning. The first paper on transit line planning we are aware of is <cit.>, published a century ago. Although dealing with the same type of problem, there has been little overlap between the literature in the transport engineering and operations research communities in the early years. Nevertheless, both communities mention only single papers from the 60s, while research on the topic started in the 70s and evolved to numerous papers in the 90s and in the first decade of this century. Since then, research on transit line planning has continued and the overlap between the communities has continuously increased. Several surveys exist, most of them from more than a decade ago: The survey <cit.> is mainly on line planning (line selection and frequency setting) while the surveys <cit.> are mainly about the transit network design problem (TNDP). 
The recent survey by <cit.> concentrates on developments in TNDP and TNDFS since 2009. In this chapter we concentrate on modeling aspects of transit line planning including transit line planning under uncertainty. Moreover, we specify three aggregation levels and classify approaches with respect to them. In contrast to vehicle and crew scheduling, algorithms for transit line planning are still not commonly used in commercial software systems, as mentioned, e.g., in <cit.>. This is about to change. In the open-source research library LinTim, see <cit.>, many algorithms and data sets for transit line planning are available and the method of <cit.> is currently further developed for use in the professional Heurès software of <cit.>. Embedding of transit line planning in the planning horizon. The planning process in transit is normally divided into several stages where the output of a stage serves as input to the subsequent stage, see Figure <ref> which can be found in similar form in many papers from the 1980s to now, see, e.g., <cit.>. The first stage within the planning horizon, infrastructure location decisions, decides about stops and their direct connections. It is followed by transit line planning, which is the focus of this chapter. In schedule-based operations, the next planning stages are timetabling, specifying the exact departure and arrival times for the transit lines, and vehicle scheduling, assigning vehicles to trips. In a line-pure schedule, each vehicle is assigned to one line which it serves back and forth the whole day. Crew scheduling is the next stage. In headway-based transportation (also referred to as schedule-free transportation), operations are not meant to adhere to fixed timetables, but only the headway of a line is known, i.e., the time between two subsequent departures. Passengers hence go to a stop and wait there for their bus to arrive. Other planning steps include crew rostering <cit.> or fare setting <cit.>. Solution approaches for transit line planning. Transit line planning invites a variety of solution approaches, spanning from construction heuristics, metaheuristics, and mixed-integer programming approaches to game theoretic approaches. Since this chapter is mainly about models, we recommend <cit.> to learn more about transit line planning heuristics, including early contributions. The review papers <cit.> give a good overview of transit line planning with metaheuristics. For a game theoretic approach we refer to <cit.>. There is also a stream of research aiming to derive optimal frequencies for transit lines analytically by continuum approximation, starting with the research by <cit.> on a single line and later extended to spatially diversified demand, compare, e.g., <cit.>. This class of models with parametric descriptions of network structure and demand is described in Section <ref>. Remainder of this chapter. Section <ref> describes the basic notation, introduces a first line planning model and discusses modeling of the fundamental elements: line pool and frequencies. The section also covers performance indicators for transit line planning. Passengers play an important role in line planning. Their routes relate the three key concepts of frequency, capacity, and demand. This relation, and the many modeling approaches for routing passengers within transit line planning, are discussed and classified in Section <ref>. Section <ref> describes how uncertainty can be incorporated by robust and stochastic transit line planning models.
We treat uncertainty of demand, of driving times and link failures. Extensions and related problems are discussed in Section <ref>. This includes the skip-stop problem, seasonal demand, the integration with other planning stages and parametric transit line planning as well as a sketch of other interesting related problems. We conclude the chapter in Section <ref>. § BASIC MODELING CONSIDERATIONS FOR TRANSIT LINE PLANNING In the following we first introduce the basic notation needed and specify the variables and the general goals of transit line planning in Section <ref>. With this notation it is already possible to present the basic feasibility constraints and a first model as a building block for possible extensions in Section <ref>. Modeling the line pool is treated in Section <ref> and different ways of modeling the frequencies are shown in Section <ref>. §.§ Basic notation, variables and general goals Many different versions of the transit line planning problem have been formulated in the literature. So there is not a single 'transit line planning model', but there are many models for each of the subproblems line generation, line selection and frequency setting or for their combinations. They differ in the level of accuracy with which they model important aspects like demand or cost, but also in the objective function and in their constraints. To formally define a line, we first need a model for the underlying infrastructure. The public transportation network PTN=(V,A) consists of nodes V which represent stops or stations and of links (also called arcs) A which represent direct (i.e., non-stop) connections between stops. Link labels in the PTN represent distance which can be measured as physical distance in meters or as travel time in minutes. A line corresponds to a path in the public transport network PTN. In the basic transit line planning models, it is assumed that the line stops at every node of the path. The line generation problem constructs a line pool ℒ of candidate lines. The goal of the line selection problem is to select lines to be operated from the line pool ℒ. We refer to the subset of selected lines ℒ' ⊂ℒ as line plan. Sometimes, binary indicator variables y_l ∈{0,1}, l ∈ℒ are used to model which lines are selected. The frequency f_l of a line l denotes how often the line is operated per time period T (often: per hour) and is usually required to be a natural number. The frequency of a line has a crucial impact on cost (km driven, number of vehicles needed, number of crew members needed) and service level (expected waiting time, capacity per hour). For a given line plan ℒ', the frequency setting problem determines frequencies to operate the already selected lines l∈ℒ'. Let (f_l)_l ∈ℒ' be the vector which contains the frequencies of all lines l ∈ℒ'. A line plan ℒ' together with its vector of frequencies, (ℒ', (f_l)_l ∈ℒ'), is called a line concept. The line planning problem (see Figure <ref>) asks for a line concept, i.e., for both the lines and their frequencies. These two subproblems can be modeled in an integrated way by using the frequency vector (f_l)_l ∈ℒ for all lines l ∈ℒ. f_l=0 means that line l ∈ℒ is not selected, i.e., ℒ':={l ∈ℒ: f_l >0}. The variables y_l may not be needed in this integrated setting, but can be obtained by using y_l=1 if and only if f_l >0. For evaluating a line concept, performance indicators can be split into two groups: * demand-centered indicators can be coverage, travel time, directness, or the number of connections per hour.
Their general tendency is that 'more is better', i.e., increasing lines, frequency, or capacity normally has a positive effect on these indicators. In order to evaluate demand-related indicators, we assume that for every pair (s,t) with s,t ∈ V it is known how many passengers _s,t wish to travel from station s to station t. The corresponding matrix =( _st)_(s,t) ∈ V × V is called OD-matrix. ={(s,t)∈ V × V : _s,t >0} denotes the set of origin-destination (OD) pairs. * supply-centered indicators include emissions and different types of cost, e.g., related to distance driven, or vehicles or crew needed. Here, the heuristic principle 'less is better' holds, as every additional line or frequency increases cost and emissions. For computing supply-centered indicators we assume that costs or emissions can be estimated. This includes costs/emissions per kilometer traveled, dependent on the vehicle type used, and costs per hour traveled reflecting, e.g., cost for the driver or conductor. All transit line planning models need to find a balance between these two groups of indicators and integrate them at least in a basic way. It is important to note that the evaluation of a line plan can only be a (rough) estimate of the actual realization of cost and quality: As long as only the line plan and the frequencies of the lines are known, the cost and the travel times cannot be computed exactly, but only estimated, since travel time depends on the timetable and cost depends on the vehicle and crew schedules. This gives room for different performance indicators described in more detail in Section <ref>. §.§ A basic line planning model We start with one of the simplest integer linear programming models from the literature, compare, e.g., <cit.>. It solves the line selection and frequency setting problems integratedly (with a given line pool) and hence belongs to the class of line planning models. Basic feasibility constraint. The following basic feasibility constraints are part of most transit line planning models. Let be the given line pool. As decision variables we use the frequencies f_l for l∈. We require them to be non-negative and integer. A frequency of f_l=0 means that line l is not operated. The basic feasibility constraints are L_ ≤∑_l ∈: l∋ a f_l ≤ U_ ∀∈ A f_l ∈_0 ∀ l∈. The constraints (<ref>) bound the cumulative frequency of each link a∈ A from above and below. Upper bounds support the supply-centered indicators. Small upper bounds U_a typically lead to less expensive line concepts. The upper bounds are often also due to the capacity of the underlying infrastructure. In particular in rail-based systems where safety headways need to be respected, there is a strict upper bound on the number of trips that can cross the arc ∈ A in one period. But there may also be reasons to constrain the frequencies in road-based transport, e.g., noise protection, or avoiding road damage by overuse. The lower bounds are due to the demand-centered indicators. They impose a minimum cumulative frequency per link, motivated by service level or by capacity considerations. Finding frequencies that fulfill constraints (<ref>) and (<ref>) already is strongly NP-hard, see <cit.>. In the reduction, the maximum frequencies of the constructed instance are set to 1; thus, the proof implies strong NP-hardness of the line selection problem as well. Basic cost model. The basic feasibility constraints can be extended to the following line planning model which is often used as a reference or to showcase new developments.
Let _s,t specify the number of travelers per period between s and t, s,t ∈ V in the PTN. In the cost model, the lower bounds L_ for each link are determined as follows: Each OD-pair _s,t is routed along a shortest path P_s,t in the PTN. For every link , d_:=∑_(s,t): ∈ P_s,t_s,t is the number of travelers (the traffic load) on link . Let C be the (constant) capacity of the vehicles. Then, L_:= ⌈d_/C⌉ is the minimum number of vehicles needed per period along link to ensure that all passengers can travel on a shortest path P_s,t. Let c_l be the cost of line l ∈. It is often assumed that it is composed by a constant fixed cost and costs related to the length of the line and the time needed for a complete round-trip. Then the basic cost model min((,f)) := ∑_l ∈ c_l · f_l L_ ≤ ∑_l ∈: l∋ a f_l ≤ U_ ∀∈ A f_l ∈ _0 ∀ l∈ℒ finds a line concept with minimal cost in which all passengers can travel on their shortest paths in the PTN. The model originally stems from <cit.> and has been used in many other papers, e.g., <cit.>. §.§ Line pool generation The basic cost model (<ref>) is formulated with a given line pool as input. However, the choice of a suitable line pool to use as input for the line selection problem is non-trivial. In fact, the choice of a good line pool has a significant impact on the quality of the line concepts, as shown in <cit.> There are, to the extent of our knowledge, only few papers that study line pool generation in isolation. <cit.> show that the line generation problem is (strongly) NP-hard if a set with a bounded number of lines satisfying constraints (<ref>) is searched. There is a number of publications that study line pool generation together with the subsequent transit line planning steps, i.e., the (TNDP), in a sequential manner. In a first step, promising lines are created as paths in a PTN, in a second step, lines are selected. The result of the first step then is a line pool. This has been done already in the early papers by <cit.>, but is still used in more recent contributions, e.g., <cit.>. In numerous contributions that use metaheuristics for solving the TNDP, one or several line plans are proposed, evaluated, and subsequently improved in each iteration, see <cit.> for an overview. The idea of generating promising lines during the optimization process is also gaining popularity in the operations research community, for example, by column generation in <cit.>. <cit.> prove complexity results and state exact algorithms for the TNDP on specific graph structures. Most problem formulations for line generation work under the assumption that a public transport system, and in particular the line pool, needs to be constructed from scratch. A notable exception is <cit.>, where preexisting lines can be replaced by new lines with same start and end terminals under the constraint that line length may not deviate by more than a given factor from the shortest path from start to end terminal. Having generated all lines that fulfill these properties, they aim at choosing a subset of them that maximizes the number of additional direct travelers. This is modeled as a linear integer program. <cit.> argue that existing, manually constructed line plans often contain knowledge of transport planners that should not be discarded lightly, and propose to use this expertise to construct a line pool, from which lines are selected by optimization methods. <cit.> propose to generate line pools by combining automated steps and steps that include manual choices. 
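Independent of how the line pool is generated, the interplay of demand, traffic loads and the basic cost model introduced above can be made concrete in a few lines of code. The following Python sketch is only an illustration: the network, line pool, OD-matrix and vehicle capacity are small hypothetical examples, and the check mirrors the lower-bound constraints of the cost model rather than solving the integer program itself.

```python
import heapq
from math import ceil
from collections import defaultdict

# Hypothetical PTN: undirected links labeled with driving times (minutes).
links = {("v1", "v2"): 4, ("v2", "v3"): 5, ("v2", "v4"): 3, ("v3", "v4"): 6}
# Hypothetical line pool: each line is a path given by its stop sequence.
lines = {"l1": ["v1", "v2", "v3"], "l2": ["v1", "v2", "v4"], "l3": ["v3", "v4"]}
# Hypothetical OD-matrix (passengers per period) and vehicle capacity C.
od = {("v1", "v3"): 90, ("v1", "v4"): 40, ("v3", "v4"): 70}
C = 60

adj = defaultdict(list)
for (u, v), t in links.items():
    adj[u].append((v, t))
    adj[v].append((u, t))

def shortest_path(src, dst):
    """Dijkstra on the PTN; returns the stop sequence of a shortest path."""
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, t in adj[u]:
            if d + t < dist.get(v, float("inf")):
                dist[v], prev[v] = d + t, u
                heapq.heappush(heap, (d + t, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

def path_links(stop_sequence):
    """Links used by a stop sequence, as unordered station pairs."""
    return [frozenset(e) for e in zip(stop_sequence, stop_sequence[1:])]

# Traffic load d_a: route every OD-pair on a shortest PTN path.
load = defaultdict(int)
for (s, t), demand in od.items():
    for e in path_links(shortest_path(s, t)):
        load[e] += demand

# Lower frequency bound L_a = ceil(d_a / C) for every loaded link.
L = {e: ceil(d / C) for e, d in load.items()}

# Check the lower-bound part of the feasibility constraint for candidate frequencies.
freq = {"l1": 2, "l2": 1, "l3": 2}
for e, lower in L.items():
    cum = sum(f for l, f in freq.items() if e in path_links(lines[l]))
    print(sorted(e), "load:", load[e], "L_a:", lower, "cum. frequency:", cum,
          "->", "ok" if cum >= lower else "violated")
```

An integer programming model would, on top of such a check, choose the frequencies so as to minimize the total cost ∑_l c_l · f_l subject to these lower bounds (and possible upper bounds U_a).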
§.§ Modeling frequencies The basic feasibility constraints (<ref>), (<ref>) model frequencies by an integer variable f_l that represents the number of trips (or line runs) per period. <cit.> argue that to improve memorability and practicality of a line concept, all frequencies should be multiples of a (given) system frequency i≥ 2. This can be modeled by the additional constraints f_l=i·α_l  ∀ l∈ α_l ∈_0  ∀ l ∈ . An alternative approach to model frequencies is to use indicator variables z_l^ϕ that take the value 1 if line l has frequency ϕ, see, e.g., <cit.>. Let Φ be the set of all allowed frequencies. We can then replace f_l by ∑_ϕ∈Φϕ· z_l^ϕ. Constraints ∑_ϕ∈Φ z_l^ϕ≤ 1 ∀ l ∈ ensure that at most one frequency ϕ∈Φ is assigned to each line l. There are several lines of reasoning behind the choice of the set Φ. When the line plan to be constructed is supposed to serve as a basis for a regular timetable, i.e., a timetable in which the headways between two consecutive trips of the same line are exactly h_l=T/f_l, one can argue (compare, e.g., <cit.>) that candidate frequencies should be divisors of T, so that headways and thus scheduled time points are integer. On the other hand, only lines with frequencies that are powers of the same base can be scheduled effectively to minimize transfer times while preserving regularity. E.g., for a planning period of T=60 minutes, transfers between two lines l and l' with frequencies f_l=2 and f_l'=4, respectively, can be easily scheduled such that short transfers are possible twice an hour. For line l with f_l=2 and l̃ with f_l̃=3, on the other hand, we can achieve at most one short transfer if we want to preserve regularity of the schedule. <cit.> propose to strengthen line planning models by introducing the concept of frequency configurations, i.e., by enumerating which combinations of line frequencies can be chosen to fulfill constraints (<ref>). They show that adding such inequalities to transit line planning models can improve their integer linear formulation and speed up computation times. §.§ Transit line planning performance indicators A multitude of demand- and supply-centered performance indicators and combinations thereof are used to evaluate the quality of line concepts and line plans, and (to a lesser extent) also line pools in the literature. All indicators can be used as a constraint (both budget and service level constraints are common) or as (part of) the objective function of an optimization problem. §.§.§ Supply-centered indicators A simple and common way to consider cost in transit line planning is to introduce a cost parameter _l that models the operating cost for one trip of line l and is often assumed to be proportional to the length of line l. Using frequency variables f_l for a line pool , the total cost of a line concept can then be computed as ∑_l∈_l f_l, as it is done in (<ref>). These costs are referred to as variable costs, while costs _l that are related to the introduction of a new line are referred to as fixed costs. For modeling fixed costs we need the indicator variables y_l and receive ∑_l∈_ly_l. When different types of vehicles are considered, these costs may depend on the vehicle type as well, compare, e.g., <cit.>. In models that do not use a given line pool but construct lines as part of the optimization step, fixed or variable line costs can be modeled by adding up the costs c_a of all PTN arcs contained in line l, i.e., c_l := ∑_a ∈ l c_a, compare, e.g., <cit.>.
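To make this cost bookkeeping explicit, the following hedged sketch aggregates variable costs, fixed costs and a rough vehicle estimate for a hypothetical line concept; the per-line vehicle estimate anticipates the line-pure assumption discussed in the next paragraphs and rounds up per line, which is only one of several possible conventions.

```python
from math import ceil

T = 60  # planning period in minutes
# Hypothetical input: arc costs per trip, line routes as arc lists, round trip
# times, fixed costs per established line, and a candidate frequency vector.
arc_cost = {("v1", "v2"): 8.0, ("v2", "v3"): 10.0, ("v2", "v4"): 6.0}
line_arcs = {"l1": [("v1", "v2"), ("v2", "v3")], "l2": [("v1", "v2"), ("v2", "v4")]}
round_trip_time = {"l1": 55, "l2": 40}   # minutes per complete round trip
fixed_cost = {"l1": 100.0, "l2": 100.0}
freq = {"l1": 3, "l2": 2}

# Variable cost per trip of line l: sum of the arc costs along its route.
trip_cost = {l: sum(arc_cost[a] for a in arcs) for l, arcs in line_arcs.items()}

variable_cost = sum(trip_cost[l] * f for l, f in freq.items())
fixed_total = sum(fixed_cost[l] for l, f in freq.items() if f > 0)

# Rough vehicle estimate under line-pure schedules, rounded up per line.
vehicles = sum(ceil(round_trip_time[l] * f / T) for l, f in freq.items() if f > 0)

print("variable cost:       ", variable_cost)
print("fixed cost:          ", fixed_total)
print("estimated # vehicles:", vehicles)
```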
For a fixed vehicle type in a non-mountainous region, energy costs are proportional to the kilometers driven <cit.>. Emissions also play an increasing role in transit planning, see, e.g., <cit.>. They also depend on the vehicle type and on the distance covered and can thus be modeled in the same way as (energy) costs, compare, e.g., <cit.>. <cit.> remark that the use of electric vehicles may require additional infrastructure such as charging stations and include the cost of locating these into the objective function. In absence of more intricate cost models, variable costs are also used to estimate vehicle acquisition or maintenance cost, since these roughly correlate with the distance driven. In order to correctly model these cost types, an estimation of the number of vehicles to operate the line concept is needed. This is normally done under the assumption of line-pure vehicle schedules. In this case, the number of vehicles needed can be estimated as ∑_l ∈ time_l · f_l/T , where time_l denotes the time needed to complete a trip and T is the period. <cit.> argue that the numbers of vehicles needed should already be considered during the construction of a line pool: If the distance covered by a line is such that the line needs (slightly less than) a multiple of the planning period length to complete a trip forth and back, it can immediately be employed again after finalizing one trip (in a line-pure vehicle schedule), leading to a higher utilization of the available rolling stock. In the context of a subline frequency setting problem <cit.> explicitly determine the number of vehicles for every subline based on the length of the sublines and their frequencies. The number of vehicles needed can then be added to the objective function. Also <cit.> and <cit.> (in a railway context) compute the cost for operating the vehicles based on estimated vehicle numbers. All above-described costs (and emissions) are additive at the line level, i.e., each line or line-frequency combination is attributed individually with a cost and/or an emission value. The overall cost (or emissions) of a line concept is then computed as the sum of the costs per line. §.§.§ Demand-centered indicators Indicators for connectivity and directness from network science can be used to analyze line plans in transit, compare, e.g., <cit.>. However, optimization models in transit line planning use more specific indicators which are described in the following. Passengers' travel time is one of the key service quality measures used in transit line planning, and is also used in order to distribute passengers in the network (compare Section <ref>). We distinguish between the following components of travel time: * Driving time is the time spent in a driving vehicle. Usually it is computed based on the kilometers driven and on the speed of the vehicle. * We refer to the time that a passenger spends in a standing vehicle at a stop where she does not board or alight as waiting time. * In-vehicle time aggregates driving time and waiting time. * The transfer time is the time between arriving at a station in one vehicle, and departing from it in a different one. In contrast to driving and waiting time, the transfer time depends crucially on the timetable. * Adaption time is the time passing between the moment that a passenger desires to depart, and the time at which she actually departs. Adaption time is particularly relevant in transit systems that operate with a high frequency where passengers do not consult a timetable prior to departure.
Including adaption time into the travel time can, however, also be seen as a means to penalize infrequent connections between highly demanded OD-pairs, as they lead to high adaption times <cit.>. * Access time is the time for walking to the first or from the last station. It can be computed if the demand is given in districts or zones also considering the choice of the first and last station into the route choice decisions, compare <cit.>. It is often neglected in transit line planning. Many line planning papers use travel time or its components as objective function or as service level constraint. Some models minimize a weighted sum of the components, as it has been shown that the value of different time components is different in the passengers' perception, see, e.g., <cit.>. When the sum over all passenger travel times is used as an objective, this is normally combined with requiring that all passengers must be transported, because not transporting a passenger would lead to a reduction in travel time. Besides travel time, coverage is a second important demand-related performance indicator in transit line planning. The definition of coverage boils down to the question of how many passengers of all OD-pairs are going to travel by transit. Many transit line planning models implicitly assume that a system split (see, e.g., <cit.>) is precomputed. That is, the OD passenger numbers that are taken as input do not represent the full amount of people traveling from origin to destination, but the fraction of them that wants to travel by public transport. Based on such a system split, many line planning models require full coverage, i.e., that all passengers of all OD-pairs are transported. When travel time is not considered in the objective function, such models require that the routes provided for the passengers do not deviate too much from shortest routes, i.e., a service level constraint with respect to travel time is introduced to avoid that passengers with large detours, who would in reality probably not take transit, are counted as covered. Direct traveler models consider a passenger covered if there is a line that connects her origin to her destination without a large detour, see, e.g. <cit.>, while in <cit.> passengers may transfer at most once. In contrast to such approaches, in coverage by transit the existence of other modes (e.g. the private car) is recognized. Only if the travel option using transit is good enough compared to the other existing mode, an OD-pair is counted as covered, see, e.g., <cit.> This is often framed as integrating mode choice and will be explained in further detail in Section <ref>. In some models, instead of coverage the related indicator revenue (as a function of coverage) is used. E.g., <cit.> consider revenue from ticket prices, which are proportional to the distance between the passengers' origins and destinations. <cit.> additionally include a subsidy per transported passenger into the revenue estimation. An advantage of using revenue is that it can easily be aggregated with the supply-aggregated indicator cost to obtain profit as an objective function. Also comfort-oriented indicators are occasionally used. To ensure passenger satisfaction, besides a criterion on the travel time, <cit.> require that the buses are not filled beyond a certain level. <cit.> include deteriorating comfort in the case of overcrowding into their simulation-based evaluation of line plans and corresponding passenger routes. 
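As a small illustration of a demand-centered indicator, the following sketch counts direct travelers: an OD-pair is counted as covered if some line contains both its origin and destination and the ride along that line does not exceed a shortest PTN path by more than a detour factor. Network, line pool, demand and the detour factor are hypothetical; actual direct traveler models in the literature differ in their detour criteria.

```python
# Hypothetical PTN distances (minutes), line pool and OD-demand.
links = {("v1", "v2"): 4, ("v2", "v3"): 5, ("v2", "v4"): 3, ("v3", "v4"): 6}
lines = {"l1": ["v1", "v2", "v3"], "l2": ["v1", "v2", "v4"]}
od = {("v1", "v3"): 90, ("v1", "v4"): 40, ("v3", "v4"): 70}
DETOUR = 1.25  # a direct ride may be at most 25% longer than a shortest path

nodes = sorted({u for e in links for u in e})
INF = float("inf")
dist = {(u, v): (0 if u == v else INF) for u in nodes for v in nodes}
for (u, v), t in links.items():
    dist[u, v] = dist[v, u] = t
for k in nodes:                     # Floyd-Warshall all-pairs shortest paths
    for i in nodes:
        for j in nodes:
            dist[i, j] = min(dist[i, j], dist[i, k] + dist[k, j])

def ride_length(line, s, t):
    """Riding time on `line` between s and t, or None if the line misses one of them."""
    if s not in line or t not in line:
        return None
    i, j = sorted((line.index(s), line.index(t)))
    return sum(links.get((line[k], line[k + 1]), links.get((line[k + 1], line[k])))
               for k in range(i, j))

covered = 0
for (s, t), demand in od.items():
    rides = [r for r in (ride_length(l, s, t) for l in lines.values()) if r is not None]
    if rides and min(rides) <= DETOUR * dist[s, t]:
        covered += demand           # OD-pair has a direct line without large detour

print("direct travelers:", covered, "of", sum(od.values()))
```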
In contrast to most supply-centered indicators that can be attributed to individual lines, most of the demand-centered indicators can be decomposed per passenger, but cannot be decomposed per line. Therefore, it is difficult or impossible to apply them to assess individual lines. A possible approach to evaluate individual lines from a demand perspective is to consider the marginal benefit that the addition of a line has on an existing line concept, compare <cit.>. § MODELING PASSENGERS AND CAPACITY In the previous section we have shown how lines and their frequencies are defined and can be modeled. Looking at the demand-centered performance indicators, it became clear that the routes for the passengers are crucial for defining coverage and travel time indicators. Moreover, the routes of the passengers are important to evaluate if the capacity provided by a line is sufficient to transport all of them. We first describe three levels of detail in which passenger routes can be modeled and aggregated: link-based, line-based and trip-based. We then show for each of these aggregation levels how the capacity and the passenger routes are connected in Section <ref> and how travel time can be estimated in Section <ref>. We finally classify the various models for determining the passenger routes in Section <ref> and end with some remarks on the mode choice. §.§ Modeling passenger routes in three levels of detail We start by describing three levels of aggregation which are common. For all of them, let be the OD-matrix, be the set of OD-pairs with _s,t > 0 for s,t ∈ V and let Q_l be the capacity of vehicles operating on line l ∈. Passenger routes on link level. Here, passenger routes _st are determined in the PTN, that is, a route is a sequence of links in the PTN. The link level is the least detailed level. It neglects information about the lines the passengers use. Passenger routes on line level. Here, we determine the routes of the passengers in the PTN together with the lines to be used. This can be efficiently done in the Change & Go network CGN, originally introduced in <cit.>. See Figure <ref> for a small public transport network and its corresponding Change & Go network. The change-and-go network (CGN) =(,) consists of nodes which represent station-line pairs for each line l and all the stations v it passes: ={ (v,l): v ∈ V, l ∈ v ∈ l } The arcs set consists of transfer (=change) arcs and driving (=go) arcs ={ (,l):=((v,l),(u,l)): =(v,u) ∈ A, v,u ∈ l}∪{((u,l_1),(u,l_2)): u ∈ l_1 ∩ l_2} which represent driving activities of (,l) of line l on arc ∈ A and possible transfer activities for passengers between lines l_1,l_2 in station u. For routing an OD-pair (s,t), the CGN is extended by adding (s,0) and (t,0) as nodes and connecting (s,0) to all nodes (s,l) ∈ and (t,0) to all (t,l) ∈. The arcs in the CGN are labeled by an estimate of the travel time for the respective driving or transfer activity that allows a shortest path computation for determining passenger routes. Note that the CGN can only be built if a line pool is available. A route _s,t from (s,0) to (t,0) in the CGN represents a path which not only contains the sequence of stations but also the lines to be used. There exist a number of similar network models, compare, e.g., <cit.>. Some papers use line-based aggregation levels without the CGN, e.g. <cit.>. Passenger routes on trip level. A line l with a frequency of let's say f_l=4 consists of four trips per period. Let T_l be the trips belonging to line l ∈. 
To estimate travel time and capacity even more accurately than on the line level, we want to specify not only the routes but also the trips which are used on the passengers' journeys. This is possible if line planning is integrated with scheduling decisions. In this case, passengers can be assigned to the specific trip to be used and not only to the line. The exact trips a passenger uses can be computed as a route _s,t in the so-called event-activity network (EAN) which contains events for each arrival or departure of every trip at every station, together with their accurate arrival and departure times. Transit line models including timetabling and passenger routing on trip level exist, e.g., the single corridor frequency assignment model by <cit.>, or the model integrating line planning, timetabling, and passenger routing by <cit.>, but they are computationally hardly tractable. §.§ Modeling capacity While line generation models do usually not consider capacity considerations, providing sufficient capacity is normally considered in line planning and in frequency setting. The two decisions that impact the overall capacity of line l ∈ per period are the capacity Q_l of vehicles serving the line and its frequency f_l. In many models, the vehicle capacity Q_l is considered to be given and identical for all lines. Some models allow to choose among different line types with vehicle capacity being one distinguishing property of different lines (besides, e.g., speed), or different vehicle sizes can be associated to different lines, see, e.g., <cit.>. The main difference between models for transit line planning comes from the level of capacity aggregation used, which should be aligned with the level of detail used for the passenger routes. The joint determination of line frequencies and vehicle capacities is common in frequency setting, as both, increasing frequencies and increasing capacities, are means to adjust line capacity to passenger demand. While in link-based and line-based approaches, all vehicles of the same line have the same capacity <cit.> it may make sense in trip-based models to assign vehicles of different sizes to the same line, to cater for fluctuations in demand, compare <cit.>. In the following we formulate the relation between frequency, capacity and demand for the three aggregation levels link-based, line-based and trip-based. Link-based capacity aggregation. For the link-based aggregation level we use the links of the PTN defined in Notation <ref>. In this level of detail we can aggregate the frequencies of all lines for each specific link and require that the capacity on the link is sufficient for the passengers that use this link, independent of the specific lines they take. Constraint ∑_l ∈: a ∈ l f_l · Q_l ≥∑_(s,t) ∈: ∈_s,t_s,t=d_a ∈ A makes sure that the demand is covered on every link ∈ A of the PTN when solving the line planning problem. The cost model (<ref>) is an example which uses the link-based aggregation. Line-based capacity aggregation. The link-based aggregation neglects which lines passengers use. Even if the capacity on a link ∈ A is large enough to transport the demand d_a it might be possible that not all passengers can use their preferred lines. 
To ensure that the capacity per line is sufficient to transport all passengers we use routes _s,t in the CGN and receive f_l · Q_l ≥∑_(s,t) ∈: (,l) ∈_s,t_s,t=:d_,l ∀∈ A, l ∈ making sure that the number of passengers using line l on arc ∈ A does not exceed the line capacity f_l· Q_l on every arc ∈ A of the line, compare, e.g., <cit.>. In other words, it is ensured that over the whole period T, capacity on each line l is sufficient to transport the passengers assigned to the line. (<ref>) also ensures that no demand can use a line l ∈ which is not selected, i.e., l ∉. Trip-based capacity aggregation. When passenger routes are detailed on trip level, capacity can be modeled on a trip level as well. That is, for each arc ∈ A and each trip operated on it we count the passengers that are assigned to this trip and bound their number by the number of places in the vehicle. With T being the set of all trips, we receive Q_l ≥∑_(s,t) ∈: (,l,) ∈_s,t_s,t=:d_,l, ∀ l ∈, ∈ T_l, ∈ A. The following example illustrates why trip-based capacity modeling may be desirable. Illustration. We illustrate and compare the three aggregation levels in a small example. Consider the network with three stations, s_1,s_2, and s_3 and three lines ={l_1,l_2,l_3} depicted in Figure <ref>. Assume that 120 persons wish to travel from s_1 to s_3. We want to evaluate a solution in which the lines l_1 and l_3 operate with a frequency of f_l_1=f_l_3=1 and capacities of Q_l_1=Q_l_3=60 while line l_2 runs with double the frequency, f_l_2=2, but only uses smaller buses with half the capacity, i.e., Q_l_2=30. * In the link-based aggregation level we determine the traffic loads of both links _1=(s_1,s_2), _2=(s_2,s_3) ∈ A as d__1=d__2=120. We receive ∑_l ∈: _1 ∈ l f_l Q_l = 60 + 60 ≥ 120=d__1, ∑_l ∈: _2 ∈ l f_l Q_l = 60 + 30 + 30 ≥ 120=d__2, so on a link-based aggregation the solution is feasible with respect to (<ref>) although we may expect overcrowded buses on line l_3 since most passengers would prefer a direct connection over a connection with a transfer. * The line-based aggregation would either route all passengers along line l_3, yielding an infeasible solution since the capacity of l_3 is limited to 60 persons, or, with a capacity-aware routing model, split them: 60 persons would use the path _1=((s_1,l_3),(s_2,l_3),(s_3,l_3)) and 60 persons would be assigned to path _2=((s_1,l_1),(s_2,l_1),(s_2,l_2),(s_3,l_2)). With these two paths, the capacity constraints (<ref>) are satisfied, since f_l_1 Q_l_1 = 60 ≥ 60= d__1,l_1, f_l_2 Q_l_2 = 60 ≥ 60= d__2,l_2, f_l_3 Q_l_3 = 60 ≥ 60= max{ d__1,l_3, d__2,l_3} and the solution would be classified as feasible. However, the 60 passengers arriving at station s_2 will all try to use the same bus of line l_2 (namely the first one that arrives), leading to an overcrowded bus, while the next bus of line l_2 would not be used at all. * The trip-based model assigning passengers to the individual trips is the only one of these three which reflects capacity correctly in this situation: For the trips of lines l_1 and l_3 we see that their capacities are sufficient to transport all passengers assigned to them in one trip. However, there are two trips (let's say _1 and _2) belonging to line l_2. All 60 passengers arriving at station s_2 with line l_1 will board the first of these two trips, say _1. Then we have Q_l_2=30 < 60=d__2,l_2,_1, i.e., (<ref>) is not satisfied.
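The illustration can also be reproduced computationally. The following sketch encodes the two passenger paths of the capacity-aware line-based routing and checks the link-based, line-based and trip-based capacity constraints; the assumption that all 60 transferring passengers board the first trip of l_2 is hard-coded, exactly as in the verbal argument above.

```python
# Data of the illustration: three stations, three lines, 120 passengers s1 -> s3.
Q = {"l1": 60, "l2": 30, "l3": 60}                       # vehicle capacities
f = {"l1": 1, "l2": 2, "l3": 1}                          # frequencies per period
line_arcs = {"l1": [("s1", "s2")], "l2": [("s2", "s3")],
             "l3": [("s1", "s2"), ("s2", "s3")]}

# Capacity-aware line-based routing: 60 passengers ride l3 directly,
# 60 passengers ride l1 and transfer to l2 at s2.
flows = [  # (passengers, list of (arc, line) pairs used)
    (60, [(("s1", "s2"), "l3"), (("s2", "s3"), "l3")]),
    (60, [(("s1", "s2"), "l1"), (("s2", "s3"), "l2")]),
]

# Link-based check: cumulative line capacity per link vs. traffic load.
for arc in [("s1", "s2"), ("s2", "s3")]:
    cap = sum(f[l] * Q[l] for l, arcs in line_arcs.items() if arc in arcs)
    load = sum(p for p, path in flows for a, _ in path if a == arc)
    print("link", arc, ": capacity", cap, ">=", load, "?", cap >= load)

# Line-based check: passengers per (arc, line) vs. f_l * Q_l.
for l, arcs in line_arcs.items():
    for arc in arcs:
        d = sum(p for p, path in flows if (arc, l) in path)
        print("line", l, arc, ": capacity", f[l] * Q[l], ">=", d, "?", f[l] * Q[l] >= d)

# Trip-based check: all 60 transferring passengers board the *first* trip of l2
# at s2 (assumption of the illustration); the second trip of l2 remains empty.
trip_load = {("l2", 1): 60, ("l2", 2): 0}
for (l, trip), d in trip_load.items():
    print("trip", trip, "of", l, ": capacity", Q[l], ">=", d, "?", Q[l] >= d)
```

Only the last check reports a violation, which is exactly the observation made in the illustration.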
§.§ Estimating passengers' travel time Estimating travel time is key to most demand distribution models (compare Section <ref>): the travel time of a certain route determines how many passengers take it, and the number of passengers on a route in turn determines whether capacity constraints are satisfied (compare Section <ref>). Travel time and its components are also relevant performance indicators to evaluate a line concept, and, as such, have been described in Section <ref>. This section examines how travel time can be estimated based on the level of detail used. Estimating travel time in case of trip-based aggregation. An exact estimate of planned travel time can be given in presence of a timetable. If there are no delays, the departure time of the first boarded vehicle at the origin and the arrival time of the last vehicle of a route at the destination can be read from the timetable. Sometimes, even denied boarding is modeled and accounted for in the transfer and adaption time, compare <cit.>. If the desired departure time of a passenger were known (normally, this is not the case in line planning), adaption time could also be computed exactly. Otherwise, it must be estimated, e.g., as in (<ref>). Estimating travel time in case of line-based aggregation. When passenger routes are modeled line-based, driving time can be estimated based on line speed and distance and assigned to the arcs of the CGN. In a few models for headway-based transportation, road congestion is explicitly modeled. E.g., <cit.> allow passengers to choose between different modes, where both bus and car traffic contribute to crowded streets. Sometimes, waiting time is included in the in-vehicle time as a constant supplement to the driving times; <cit.> assume that waiting time depends linearly on the number of passengers already in the vehicle (modeled as the passengers assigned to the line on that link divided by the frequency). Usually, however, the vehicles' stopping times are considered marginal and hence not included in the travel time. In absence of a timetable, transfer times cannot be predicted exactly, but need to be estimated. To assign transfer time as a label to the CGN, the transfer time can be approximated either by using a fixed penalty term <cit.> or by using the frequency of the departing line. An approximation for the transfer time in the line-based model is the average expected transfer time T/(2 f_l). The same formula can be used for the adaption time. When using binary indicator variables, this is equivalent to ∑_ϕ∈Φ T/(2ϕ) · z_l^ϕ, compare, e.g., <cit.>. <cit.> follow the same approach when the frequency is high (more than 6 per hour), but argue that for low frequencies passengers will mind the timetable to avoid adaption time and transfers will be synchronized. Therefore, for lower frequencies they assume a transfer and adaption time of 5 minutes. Interestingly, a similar result also holds in the absence of regularity constraints: <cit.> prove that the expected adaption/waiting time to board one of k periodically scheduled vehicle departures is T/(k+1), which even holds if the departures are un-synchronized (i.e., irregular). Estimating travel time in case of link-based aggregation. Routes that are modeled on the link-based aggregation level are paths in the PTN and hence do not encode which lines are taken.
Most models (see, e.g., <cit.>) assume constant line speed so that driving times do not depend on the chosen lines and can hence be estimated without knowing the lines used by the passengers. Transfer times are mostly neglected in the link-based aggregation, see, e.g., <cit.>. A possibility is to distinguish between direct routes and routes with at least one transfer and penalize the latter ones <cit.>. In some direct traveler models, routes in the PTN are associated with the fastest line (or just any line) that covers the route, and a passenger's travel time is given by the travel time of this line between the passenger's origin and destination, compare, e.g., <cit.>. In link-based aggregation, adaption time is normally not included. §.§ Modeling passengers' route choice To determine travel time or to evaluate the capacity constraints (<ref>), (<ref>), or (<ref>) in the different aggregation levels, passenger routes need to be determined. To describe the many route choice models to do this, we use the structure shown in the decision tree in Figure <ref>. We first distinguish between approaches that distribute passengers over the network prior to the optimization step and models that aim to integrate the route choice of the passengers into the optimization step. Passenger routing prior to transit line planning (node 1) In order to distribute OD-demand prior to the construction of a line concept, shortest-path algorithms or other assignment models on the PTN can be used, see, e.g., <cit.>. The predetermined passenger distribution can be used to compute traffic loads on each link of the PTN, see (<ref>). This is a common approach, e.g., used in <cit.>. Other models use precomputed PTN-paths for the OD-pairs, which need to be enabled by the line concept so that the passengers can travel, see <cit.>. Passenger routing during transit line planning (node 2) We now turn to models that do not consider the passenger routes as given, but acknowledge that they depend on the transit line planning decisions to be made in the optimization. We distinguish between optimization models where passengers are assigned to routes by a central decision maker (node 2.1) and models that assume that passengers choose a route by solving their own optimization problems (e.g., a shortest route with respect to travel time). Route assignment by planner (node 2.1) We first review models that aim at a system optimum, i.e., the sum of travel times (or any other performance indicator) is to be minimized by a central decision maker who makes both decisions, on the supply side (lines and frequencies) and on the demand side (passenger routes). The decision maker optimizes her objective function, which is not necessarily fully aligned with the passengers' individual objective functions. That is, such models do not forbid assigning individual passengers to routes that are suboptimal from the perspective of the individual passenger, if this benefits the overall objective function. Since in this case transit line planning reduces to a single level problem, it can often be approached by integer programming. Other publications explicitly choose passenger routes during the optimization process and will be reviewed below. For doing so, some rely on a precomputed candidate set of routes for each OD-pair, while others use flow constraints. Flow-based passenger modeling (node 2.1.1) Flow-based models describe the passenger routes as multi-commodity flows between origins and destinations. This approach can be applied on all aggregation levels for passenger routes (compare Section <ref>).
That is, in the description that follows, the term network can refer to the PTN (e.g., in <cit.>), the CGN (see, e.g., <cit.>) or the EAN (see, e.g., <cit.>). Such models use flow variables x_a^(s,t) that indicate how many passengers of an OD-pair (s,t) travel on an arc a of the network. Flow constraints are modeled as θ x^(s,t)=b^(s,t) ∀ (s,t)∈, where θ represents the node-arc incidence matrix of the network and the right-hand side b^(s,t) encodes the number of passengers _s,t of OD-pair (s,t) at its origin and destination nodes. Models that use passenger routing with flow constraints typically minimize total (or, equivalently, average) travel time. Note that passengers may be assigned to routes with long travel time if this benefits the objective function of the model. In order to avoid this, one can impose bounds on the maximal travel time for each OD-pair, depending on the geographical distance of origin and destination or on the length of a shortest route, see, e.g., <cit.>. Path-based passenger modeling (node 2.1.2) As an alternative to flow-based models, some models precompute choice sets of routes on link or line level for each OD-pair and assign its passengers to one or several of these routes during the optimization step. In most models, only routes of a certain quality, e.g., routes that do not exceed the length of a shortest path by too much, are included in the choice set. From a modeling perspective, this approach has the advantage that passengers cannot be assigned to 'bad' routes for the sake of the social optimum, even if the objective function does not minimize travel time but, e.g., costs. On the other hand, route choice is more constrained in these models, in particular if the choice set is small. To overcome this problem, <cit.> use a column generation approach to iteratively generate suitable OD-paths in the PTN. A challenge is that, in particular in dense line pools, the number of routes with good travel time for an OD-pair can be very large. In the CGN this may happen even if there is only one path from s to t in the PTN. Therefore, preprocessing the choice set is a promising approach. E.g., <cit.> restrict the choice set to routes with only one transfer and set tight bounds on the travel time. In contrast, <cit.> consider an even bigger choice set where each CGN path corresponds to multiple elements, one for each possible frequency combination of the lines used on the CGN path. This allows travel times, including frequency-dependent adaption and transfer times, to be precomputed. Passenger route choice (node 2.2) We now review approaches that explicitly model route choice of passengers. Such models are often represented as bilevel optimization models with transit line planning decisions (route generation, selection, and frequency setting) on the upper level and passenger routing on the lower level. Solution approaches for these kinds of models can be roughly categorized as follows: The first class of approaches computes transit line planning decisions and routing simultaneously, e.g., by solving a mixed-integer program that encodes both transit network design decisions and decisions on passenger routes in variables of the same problem. The other class of approaches follows the hierarchy suggested by the bilevel representation and takes transit line planning decisions first, and routing decisions second. In many heuristics and metaheuristics this process is iterated, interpreting the computation of a routing for a given line concept as an evaluation mechanism and using it to guide the solution process.
The first approach has the advantage that (if the resulting models can be solved to optimality) a global optimum is found, while the second approach can get stuck in local optima. On the other hand, the routing models of the first approach need to be simple enough to be used within a mixed-integer linear program, while the stepwise treatment of line planning decisions and routing in the second approach has the advantage that line planning decisions are already fixed when a routing is computed, so that more sophisticated routing models can be used. We distinguish further by the type of problem considered on the lower level. Independent route choice (node 2.2.1) We first turn our attention to models where the routing problem of each passenger can be formulated as an optimization problem based on the upper level decisions, but independent of the choices of other passengers. Here, we distinguish between models where the solutions to the lower level optimization problems for an individual OD-pair are routes, and models where lower level solutions are strategies. Passengers choose single paths (node 2.2.1.1) These models are motivated by the consideration that, as soon as the decisions on line routes and frequencies (and possibly a timetable) are taken, transfer times and adaption times can be computed as described in Section <ref>, and finding a travel-time minimal line-based (trip-based) route from an origin to a destination corresponds to solving a shortest path problem in the CGN (or the EAN). If no capacity constraints are considered, and if the upper level objective is the sum of the individual OD-pair objective functions of the lower level optimization problems, in a social optimum all passengers are assigned to individually optimal paths, and thus, the bilevel problem can be transformed to a single level problem with route assignment by planner as described in node 2.1, see <cit.>. However, in general this does not hold. <cit.> remark that in the presence of capacity constraints, the passenger routes estimated by flow constraints may not follow shortest paths. They argue that this may lead to capacity violations in reality, when passengers behave rationally and choose shortest routes instead of following the routes predicted by the model. Therefore, they propose to model transit line planning as a bilevel program, where the operator's objective constitutes the upper level and passengers' route choice (using line-based aggregation) constitutes the lower level. By introducing additional shortest-route-indicator variables, or by dualizing the lower level problem, the bilevel problem can be transferred to a mixed integer program, which is, however, hard to solve. A similar approach is followed in the route-selection based frequency setting model by <cit.>, where the authors use an optimal value function to assign passengers to the shortest path that is enabled by the line concept. Passengers choose hyperpaths (strategies) (node 2.2.1.2) The above-described passenger routing models have in common that they use (adaption and) transfer time estimates (either frequency-based, fixed penalties, or no transfer time) in order to compute travel-time minimal routes and then assign passengers to these routes. Unfortunately, the travel time of a route may deviate considerably from the estimates made. For this reason, some authors argue that assignment of OD-pairs to routes is not the best way to estimate passenger route choice in line planning.
The approach that they propose is motivated by a routing paradigm for headway-based transportation, described first as the common lines problem by <cit.>: When several lines could be boarded to go from origin to destination (possibly with transfers), in absence of a timetable a passenger would not decide prior to departure which line to take, but make her decision depend on, e.g., which line arrives first. That is, instead of choosing a route prior to departure, passengers choose a strategy based on the line plan, i.e., a set of rules that the user will apply in an online fashion to travel through the system, see <cit.>. This strategy can be represented as an acyclic directed graph or hyperpath in a suitably chosen network (compare, e.g., <cit.>), where each node with more than one outgoing arc represents a decision on how a passenger will continue her route. Based on line frequencies, probabilities for each OD-path in the network can be determined which allow to compute the expected travel time of a strategy by solving a linear program, see <cit.>. This route choice approach has been integrated as lower level problem in a bilevel frequency setting approach <cit.>, which has been transformed into a mixed-integer program in <cit.>. <cit.> use the computation of optimal strategies for the passengers as fitness function in a metaheuristic for the TNDFS. Interdependent route choice (node 2.2.2) Lastly, there are models where the route choice of individual passengers does not only depend on the line plan, but is also impacted by the route choice of other passengers. <cit.> use a bilevel model where on the lower level they optimize all the passenger routes based on strategies simultaneously. There is also a number of publications that formulate transit line planning with equilibrium constraints on the lower level. To give two examples, <cit.> formulate a joint frequency and bus size assignment problem as bilevel optimization problem. They apply a metaheuristic where, on the lower level, they use a transit assignment model proposed by <cit.>, which leads to equilibrium constraints for passenger flow modeling. A similar model is employed in <cit.> for joint stop location and frequency setting. Within a genetic algorithm, <cit.> use an equilibrium model where passengers can choose among three ways of traveling; by car, by public transport in conventional vehicles. <cit.> propose a model for transit line planning with the full pool where passengers choose their (line-based) routes based on a weighted sum of travel time and distributed operational cost, i.e., when a passenger chooses a previously unused line or frequencies are increased, this is more costly than choosing a route that uses already established lines and their frequencies. They sequentially apply this routing strategy in a best-response approach to find a line concept. A similar approach is applied in <cit.> for line generation. <cit.> use agent-based simulation to model lower-level route choice in a frequency assignment problem. §.§ Mode choice While many line planning models assume that the system split is precomputed and that passenger demand is inelastic in the sense that the OD-matrix is given and fixed and represents all passengers who travel by transit, there are models that recognize that passengers are sensitive to the quality of transit and aim to include mode choice in transit line planning. 
Some papers propose an all-or-nothing assignment: If and only if the offered connection from origin to destination is better than an alternative mode, passengers choose it. This includes the models which maximize coverage by transit as performance indicator: Here, a passenger uses transit if the travel time is smaller than a (maybe fictive) travel time in an alternative mode, e.g., the private car, see, <cit.>. A link-wise all-or-nothing assignment is used in <cit.> studying frequency assignment under pandemic-imposed distancing measures. They allow to split passenger flows into passengers traveling on the line segment, and passengers bridging the distance covered by the line segment by other means. In order to estimate the number of passengers using public transit, also logit models based on travel time have been used. <cit.> use a piecewise linearization of the logit function. Also <cit.> use a piecewise linearization of the logit function for route choice within a very detailed modeled transit line planning problem. <cit.> formulate mode choice for three alternative modes (bus, car, and walking) using the logit model, for a frequency setting that integrates not only decision on bus capacity, but also on fare, congestion toll, fare collection system, and bus boarding policy on a linear network. They do not linearize their formulation but solve it directly using sequential quadratic programming. <cit.> consider three options to provide transport along a link: public transport with regular vehicles, public transport with green vehicles and transport by car, and employ an equilibrium model for route and mode choice. <cit.> use a logit model for locating a single line and solve it with a greedy approach. Others argue that besides travel time, the number of connections per period plays a relevant role in attracting passengers to travel by public transit. <cit.> argue that, in a reasonable line plan, in-vehicle time and transfer time on routes between OD-pairs is negligible, and thus, the utility of using public transit depends only on the number of reasonable routes offered between origin and destination per hour. Also, in their direct traveler model, <cit.> assume that the percentage of passengers of an OD-pair choosing public transit depends on the frequency of the connecting line offered. Assuming that public transit demand depends only on the quality of the best available route bundle, <cit.> are able to precompute the OD-demand. § TRANSIT LINE PLANNING UNDER UNCERTAINTY The goal of uncertain optimization in transit line planning is to find a line concept which is still good under variation of input parameters. We discuss the following sources of uncertainty. * Uncertainty of the demand, i.e., how many passengers wish to travel (see Section <ref>), * unforeseen link failures in the PTN (see Section <ref>), or * uncertainty of the driving times which may vary due to congestion, weather conditions, or unforeseen delays (see Section <ref>). Each realization of an uncertain parameter is called a scenario and the set of all scenarios is called the uncertainty set . Sometimes, a probability distribution on is known. Further aspects of uncertainty can be considered, e.g., <cit.> model competition between railway operators which do not reveal their real incentives. Optimality concepts. Uncertain optimization can be treated by robust and stochastic optimization (see <cit.>). They differ in the optimality concepts they follow. 
One-stage robust optimization (e.g., <cit.>) hedges against the worst case. It hence aims for a solution which is feasible for every scenario in the uncertainty set and performs best in the worst case over all scenarios. The concept is also called strict robustness. Two-stage robust optimization splits decisions into here-and-now decisions of the first stage which have to be made without knowing the scenario and wait-and-see decisions in a second stage. The latter can be taken after the scenario is revealed. The goal is to find here-and-now decisions that can be recovered to a good solution even in the worst case over all scenarios. These optimality concepts include adjustable robustness (e.g., <cit.>) and recovery robustness (e.g., <cit.>). One-stage stochastic optimization <cit.> assume that a probability distribution on the scenario set is known. This allows to minimize the expected value of the objective function. Two-stage stochastic optimization <cit.> considers a second stage in which recourse variables reflecting wait-and-see decisions can be chosen after the scenario has been revealed. Several goals may be considered, among them the minimization of the expected value. Uncertainty sets. We introduce uncertainty sets which have been used in transit network design. For illustration, consider a problem in which two parameters, (l_1,l_2) are uncertain. These may represent uncertain driving times on two links a_1, a_2. * In the deterministic case the values of l_1 and l_2 are known and the uncertainty set contains only one point, e.g., ={(8,15)}. * A discrete uncertainty set consists of a finite set of scenarios. These could be driving times under three different traffic situations, e.g., ={ (10,20), (9,17), (8,15) }. Probabilities may be assigned to each of the scenarios. * If parameter ranges can be specified independently for each parameter, we obtain box-wise uncertainty. If the driving time l_1 on link a_1 is in [8,10] and l_2 ∈ [15,20] we receive := [8,10] × [15,20]. A typical distribution on (reflecting measurement errors) may be the multi-normal distribution. * Γ-uncertainty introduced by <cit.> takes into account that in the case of box-uncertainty it may be unlikely that all uncertain parameters take their extreme values simultaneously. In the example, it might be unlikely, that both driving times take their worst case at the same time. This is reflected by a budget constraint with budget Γ, leading, e.g., to ={(l_1,l_2): 8 ≤ l_1 ≤ 10, 15 ≤ l_2 ≤ 20, l_1 + l_2 ≤ 25 =:Γ}. Uncertain optimization generates challenging optimization problems, and many of them may be practically relevant. Nevertheless, research on transit line planning under uncertainties is still scarce. In the following we sketch how some of the introduced optimality concepts have been applied to transit line planning. §.§ Transit line planning under demand uncertainty Most publications assume that transit demand, as given in the OD-matrix, is given and fixed, or that its dependence on the supply provided is deterministic (compare Section <ref>). However, since exact predictions about future demand are hard to make, the OD-matrix should be considered as an uncertain parameter. In the papers sketched below, the goal is to design a line concept which still can transport all passengers even if demand increases. Box-wise uncertainty _box={: l_s,t≤_s,t≤ u_s,t ∀ (s,t) ∈} for given lower and upper bounds l_s,t,u_s,t for all OD-pairs (s,t) and Γ-based uncertainty _Γ={∈_box: ∑_(s,t) ∈_s,t≤Γ} have been used. 
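The following sketch indicates how such uncertainty sets can be handled computationally: it checks whether a demand scenario belongs to the box-wise or the Γ-based set and computes the worst-case traffic load on a single link over the Γ-based set for fixed passenger routes, starting from the lower bounds and spending the remaining budget only on OD-pairs that use the link. All bounds, the budget and the routing assumption are hypothetical.

```python
# Hypothetical lower/upper demand bounds per OD-pair and a budget Gamma.
lower = {("s1", "s3"): 80, ("s1", "s2"): 30, ("s2", "s3"): 40}
upper = {("s1", "s3"): 140, ("s1", "s2"): 60, ("s2", "s3"): 90}
GAMMA = 230

def in_box(scenario):
    """Membership in the box-wise uncertainty set."""
    return all(lower[p] <= scenario[p] <= upper[p] for p in lower)

def in_gamma(scenario):
    """Membership in the Gamma-based uncertainty set."""
    return in_box(scenario) and sum(scenario.values()) <= GAMMA

def worst_case_link_load(pairs_on_link):
    """Worst-case load on a link over the Gamma-based set for fixed routes:
    start at the lower bounds and spend the remaining budget only on OD-pairs
    that use the link (every unit of budget adds one unit of load there)."""
    budget = GAMMA - sum(lower.values())
    if budget < 0:
        raise ValueError("the Gamma-based uncertainty set is empty")
    load = sum(lower[p] for p in pairs_on_link)
    for p in pairs_on_link:
        extra = min(upper[p] - lower[p], budget)
        load, budget = load + extra, budget - extra
    return load

scenario = {("s1", "s3"): 100, ("s1", "s2"): 50, ("s2", "s3"): 70}
print(in_box(scenario), in_gamma(scenario))        # True True (total 220 <= 230)

# Fixed routing assumption: the two OD-pairs starting at s1 both use link (s1, s2).
print("worst-case load on (s1, s2):", worst_case_link_load({("s1", "s3"), ("s1", "s2")}))
```

Under the box-wise set alone, the worst-case load on this link would be 140+60=200; the budget Γ reduces it to 190, illustrating how Γ-uncertainty excludes overly pessimistic scenarios.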
One-stage robust optimization. <cit.> consider a strictly robust approach to transit line planning in which they assume Γ-uncertainty. They search a line concept and passenger routes that minimize the travel time of the passengers including a penalty for overcrowded lines in the worst case of all OD-matrices in _Γ. They use a line-based aggregation and model both, route assignment by the planner and route choice by the passengers. One-stage stochastic optimization. <cit.> study a single-stage stochastic version of the subline frequency setting problem, i.e., the problem to select lines and frequencies to run on a corridor. In a deterministic version of the problem, frequencies have to be chosen high enough to allow all passengers to board the next-arriving vehicle towards their destination, however, in the stochastic case this constraint may be violated and capacity violations lead to a penalty. They solve the problem by scenario sampling from a known distribution of the demand. Two-stage robust optimization. Box-uncertainty has been considered in <cit.> and in <cit.>. Both groups of authors minimize a sum of costs and travel time in the worst-case over all possible scenarios in the line-based aggregation level. They use two-stage models where lines and frequencies are determined in the first stage before knowing the scenario, while passenger routes are adjusted to the realized scenario in the second stage. In <cit.> two models are discussed. In the first one, passenger routes are assigned by a planner under hard capacity limits of the lines. In the second model, line capacity violations are allowed, but lead to discomfort for the passengers who choose their routes interdependently according to an equilibrium model. In <cit.>, passenger routes are assigned by the planner. Here, additionally to the adjustment of the passenger routes, the authors allow to also adjust the routes of the lines on a limited number of links. Their objective function also includes the travel time and costs in the first stage. In both papers, the problems where passenger routes are assigned by the planner can be solved by using the worst case in which all OD-pairs take their maximum values. For the model with equilibrium routing presented in <cit.>, the authors observe a paradox: it may happen that maximum demand does not constitute the worst-case scenario. Two-stage stochastic optimization. <cit.> propose two-level stochastic optimization models to transit line planning with uncertain demand with box uncertainty. They minimize the expected cost and travel times assuming a known probability distribution of the passengers' demand which is made tractable by sampling approaches. As in their robust optimization approaches described above, the first stage concerns the decisions on lines and their frequencies and the second stage allows to adjust the passenger routes within a line-based level of detail when the scenario is known. They analyze the situation where passengers are assigned to routes by the planner as well as routing passengers interdependently with equilibrium flows assuming capacity violation penalties. Additionally, the second stage does not only adjust the routing variables but also allows to add flexible service (such as dial-a-ride) on higher costs for passengers that cannot be transported by the transit lines. §.§ Transit line planning under the risk of link failure If a link in the PTN fails, some passengers might not be able to travel any more or they might suffer detours. 
The goal in transit line planning is to find a line concept which keeps the number of badly affected passengers low even if links fail. The uncertainty set is discrete in this setting since there is only a finite number of links to fail. If only the failure of single links is considered we have =A. We first discuss how the consequences of link failures can be evaluated. Link failures are a typical disruption in classic network design problems and have also been investigated in transit line planning. While many papers evaluate networks from a pure topology point of view (e.g., <cit.>), <cit.> were one of the first who included the demand in a line-based evaluation. For the failure of a link a they propose two models: one in which passengers need to avoid the link, i.e., they compute new (shortest) paths with an increased travel time. The second model assumes bridging the failed link by a bus service which also increases the travel times. A full-scan of all edges is performed and the maximum of them is used as robustness index. Also <cit.> suggests a full-scan of all links, but on a link-based level. In case of a failure, they distinguish between passengers without any path, passengers which need a detour and undisturbed passengers. One-stage robust or stochastic optimization. If passenger routes cannot be adjusted, failure of a link will destroy some passenger paths (unless the link would not be used at all). The worst case would be attained at the link with the highest traffic load. However, in real life passengers would adjust their routes which makes one-stage optimization concepts for link failures useless from a practical point of view and might be the reason that they have not been considered for link failures in transit line planning. Two-stage robust optimization. In a two-stage approach, the lines and their frequencies are set in the first stage while the routes of the passengers can be adjusted in the second stage when it is known which link has failed. This concept has been followed in <cit.>, who minimize the passengers' traveling times under link failure in the worst case. The authors analyze travel time in the link-based aggregation level assigning passengers to routes by the planner. The same setting is used in <cit.> but with coverage by transit in presence of an alternative mode as performance indicator, i.e., they aim to find a line concept that maximizes the coverage of passengers in the worst case. Since the uncertainty set with :=A is small, the authors can formulate an integer linear program which contains paths for all OD-pairs and all scenarios. They show that it is equivalent to a two-player non-cooperative zero-sum game in which the first player is the transit line planner which can choose from different line concepts in his strategy set while the second player is an evil adversary who selects a link to fail. The authors prove that a saddle point of the game (if it exists) corresponds to a (strictly) robust solution of the two-stage robust transit line planning problem. Two-stage stochastic optimization. Given that probabilities for the link failures are available, <cit.> maximize the expected coverage of the line concept. Passenger routes are determined (by the planner) in the second stage in the link-based aggregation level, and may be adjusted to the link which has failed. Analogous to their approach for minimizing the worst case, and due to the rather small scenario set =A the resulting problem can be formulated as linear integer program. 
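The full-scan evaluations described at the beginning of this subsection can be sketched as follows: every link of the PTN is removed in turn, each OD-pair is re-routed on a shortest path in the reduced network, and the passengers without any path, the passengers needing a detour and the additional passenger-minutes are recorded. The network and demand are hypothetical, and the sketch works on the link-based aggregation level only.

```python
import heapq
from collections import defaultdict

# Hypothetical PTN (undirected, labels are driving times) and OD-demand.
links = {("v1", "v2"): 4, ("v2", "v3"): 5, ("v2", "v4"): 3, ("v3", "v4"): 6}
od = {("v1", "v3"): 90, ("v1", "v4"): 40, ("v3", "v4"): 70}

def distances(source, removed=None):
    """Dijkstra distances from `source` in the PTN without the link `removed`."""
    adj = defaultdict(list)
    for (u, v), t in links.items():
        if frozenset((u, v)) != removed:
            adj[u].append((v, t))
            adj[v].append((u, t))
    dist, heap = {source: 0}, [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, t in adj[u]:
            if d + t < dist.get(v, float("inf")):
                dist[v] = d + t
                heapq.heappush(heap, (d + t, v))
    return dist

origins = {s for s, _ in od}
base = {s: distances(s) for s in origins}

# Full scan: evaluate the failure of every single link.
for a in links:
    new = {s: distances(s, removed=frozenset(a)) for s in origins}
    no_path = detour = extra_minutes = 0
    for (s, t), demand in od.items():
        if t not in new[s]:
            no_path += demand                         # passengers without any path
        elif new[s][t] > base[s][t]:
            detour += demand                          # passengers needing a detour
            extra_minutes += demand * (new[s][t] - base[s][t])
    print("failure of", a, "| no path:", no_path, "| detour:", detour,
          "| extra passenger-minutes:", extra_minutes)
```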
§.§ Transit line planning under driving time uncertainty Uncertainty in the driving times may, e.g., be due to varying traffic, or due to unforeseen delays. Hedging against travel time uncertainty has been considered in timetabling in numerous papers, see <cit.> for a survey. Transit lines have an impact on the timetable and hence on its robustness. In order to evaluate a line concept with respect to the traveling times of the passengers in schedule-based transit, a simulation approach can be used that considers not only the transit lines but also the timetable and the vehicle schedules as follows: * Based on the line concept, design a timetable and a vehicle schedule (as in Figure <ref>). * Generate a set of (typical) driving times, including unforeseen delays. * Route the passengers in the trip-based aggregation level for every scenario in . * Aggregate their traveling times to, e.g., the expected or the worst case traveling time. LinTim (see <cit.>) is a tool that can be used for such an evaluation as done in <cit.>. A systematic evaluation of a large set of possible disruptions has been suggested in <cit.>. For Step 2, the authors generate delays on each single train run, a travel time increase on each link, and a temporary blocking of each station. For each of these scenarios the additional delay encountered by the passengers is computed and summarized in a robustness value. The evaluation has been used within a machine learning approach for improving the robustness of timetables in <cit.>. In the following we turn to analytical models for dealing with travel time uncertainty. Two-stage robust optimization <cit.> use a model for the TNDP where passengers are assigned to routes by the planner in the link-based level of detail and optimize a weighted sum of coverage by transit, costs and passengers' traveling times. As uncertainty, a finite set of disruptions caused, e.g., by maintenance work is considered in a two-stage program. Transit lines are constructed in the first stage while the passenger routes are adjusted in the second stage (in the paper this is called recovery action). The aim is to find the best set of transit lines in the worst case. Two-stage stochastic optimization <cit.> assume a log-normal distribution of driving times between stops. They sample their values from the scenario set within the optimization. The goal is to minimize the expected travel time of the passengers. Their routes are determined by the planner on trip-level together with a scenario-based schedule and under consideration of crowding in the second stage. The mean and the standard deviation needed are assumed to be given, but analyzed within a sensitivity analysis. <cit.> modify the deterministic model of <cit.> by making the driving times stochastic. They use a finite scenario set . The aim is to minimize the expected value of coverage by transit as performance indicator, where the lines are determined in the first stage and the routes of the passenger are adjusted by the planner in the second stage within the link-based aggregation level. Due to the finiteness of the uncertainty set the problem can be formulated as a linear integer program. The results are compared with the Expected Value model and two different Value-at-Risk formulations. There are also other approaches to increase robustness of line plans under travel time uncertainty. 
Some authors argue that an equal distribution of passengers through the network may decrease the probability for delays since crowded arcs tend to be more sensible to delays and since a disruption will affect less passengers in the worst case if there are no overcrowded arcs. This has been done in <cit.> where constraints on the passenger routes make sure that demand is forced to disperse on the whole network. Instead of distributing the passenger routes, <cit.> aim at dispersing the lines more equally on the network within a game-theoretic model: here, each line is interpreted as a player, and the players compete for the limited infrastructure when choosing their frequencies. Both approaches work on the link-based aggregation level. <cit.> add buffer times to the travel times in the transit line planning model to account for potential delays In <cit.>, a transit travel time reliability constraint is introduced within a stochastic model requiring that the probability that the travel time for an OD-pair is larger than a certain threshold is small. § EXTENSIONS AND RELATED PROBLEMS §.§ The skip-stop problem So far, lines have been defined as paths in the public transportation network PTN, usually by their sequence of links. This determines also the stops of a line. In the skip-stop problem the lines are given and fixed. The decision to be made is to determine their stops. More precisely, for every line the skip-stop problem decides, which of the stops along its path should be visited by the line and which of them can be skipped. Formally, assume that a line l passes links e_1=(u,v) and e_2=(v,w), i.e., the path for l includes the sequence l=(…, (u,v), (v,w), … ). If the stop v is skipped, the two consecutive links (u,v) and (v,w) are merged into one link (u,w) and we receive l'=(…, (u,w), … ) instead. Note that the PTN needs to be transitive when merging edges which is a realistic assumption since the bus/train can just pass a stop/station without stopping there. For the model, assume a line l which is operated every p minutes. The idea is to split the line into two lines which are operated alternately, let's say into a red line and into a green line. The red line serves only red stations V^⊆ V and the green line serves only green stations V^⊆ V. The set of green stations and the set of red stations does not need to be disjoint. It is required that every station is served by at least one of the two lines, i.e., that V^∪ V^= V. For every station that is not visited by both lines, driving time is saved because the line does not need to stop. The saved time can be used to increase frequency such that more passengers can be transported. This is a reason to skip as many stops as possible since capacity is a crucial issue in crowded metropolitan areas. Skip stop problems in this context have been developed, e.g., in Asia <cit.> or in South America <cit.>. The saved time can also be used as buffer to increase robustness <cit.>. For passengers traveling between red stations or between green stations, the travel time decreases, too. However, passengers who wish to travel between a red and a green station need an additional transfer. Hence, skipping too many stations makes it inconvenient for the passengers that now do not have direct connections any more and might even need to travel a short piece into the wrong direction. 
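The link-merging operation behind the skip-stop model can be made concrete with a few lines of Python. The sketch below is purely illustrative (the stop sequences, the coverage check V = V^red ∪ V^green, and the counting of OD-pairs that need an additional transfer are not taken from the cited literature).

```python
def skip_stop(line_stops, red_stops, green_stops):
    """Split a line (sequence of stops) into a red and a green service."""
    # Every stop must be served by at least one of the two services.
    assert set(line_stops) == set(red_stops) | set(green_stops), "every stop must be served"

    # Skipping a stop merges its two incident links into one.
    red_line = [s for s in line_stops if s in set(red_stops)]
    green_line = [s for s in line_stops if s in set(green_stops)]

    # OD-pairs between a red-only and a green-only stop need an extra transfer.
    red_only = set(red_stops) - set(green_stops)
    green_only = set(green_stops) - set(red_stops)
    transfer_pairs = [(a, b) for a in red_only for b in green_only]
    return red_line, green_line, transfer_pairs

line = ["A", "B", "C", "D", "E"]
red = ["A", "B", "D", "E"]      # skips C
green = ["A", "C", "E"]         # skips B and D
print(skip_stop(line, red, green))
```

In the example, passengers between the red-only stops B, D and the green-only stop C are exactly those who lose their direct connection.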
In the literature, the term skip-stop is used and analyzed from many different perspectives and there exist several approaches of solving the skip-stop problem considering various objective functions that have to be minimized. <cit.> analyzes the differences between the skip-stop operation and the standard operation with respect to the passengers time savings. Within a parametric approach <cit.> considers a continuum approximation approach focusing on the density of the stations visited by both lines. Evenly distributed demands and identical stopping-times are assumed for analyzing the influence of different parameters of the model, especially the length of the lines. A multicriteria-optimization model considering waiting times, travel times and driving times of the trains is developed by <cit.>. In a suburban area, <cit.> aim at a solution with one local line visiting all stations and one express line skipping some of the stations. In addition to minimizing the total passenger traveling time and the riding times of the vehicles, the total number of used trains (for a given number of passengers) is taken into account. Combining the choice of a stopping pattern with the computation of the concrete timetable has been investigated in <cit.>, and robust models hedging against demand uncertainty for the integrated problem are developed in <cit.>. These papers use rail operations as examples. Demand uncertainty has also been investigated for skip-stop bus operations. Here, <cit.> minimizes the costs taking into account the uncertain boarding times of the passengers, while <cit.> develop robust strategies for skipping stops if a bus is behind schedule in the deadheading problem. A quite different application for skip-stop models is presented by <cit.> who propose a skip-stop model that includes social distancing constraints and enables to plan better schedules while capacities of the vehicles are drastically reduced (due to the COVID-19 pandemic). §.§ Transit network design with seasonal demand Which line concept is to be considered optimal highly depends on the demand. Transit line planning for uncertain demand has already been addressed in Section <ref>. However, demand changes are typical in public transport: at night, there is less demand than during the day, in peak-hours we have high-demand periods. We call these different demand periods seasonal demand. Let us assume that we have K different demand seasons, each of them characterized by an OD-matrix _j, j=1,…,K. When designing transit lines, the two extremes for dealing with seasonal demand are the following: * One might find an optimal line concept (^j, (f_l)_l ∈^j) for each of the OD-matrices j=1,…,K. This would serve each of the demand seasons best, but the resulting line concepts might be very different from each other. * The other extreme is to find only one line concept (, (f_l)_l ∈) which should suit all different demand seasons. This is easy to memorize but most likely too expensive or too crowded. Let us denote f^j:=(f_l)_l ∈^j the vector containing the frequencies of line concept ^j. Seasonal demand and the above two options have already been mentioned in in <cit.>. The authors suggest to find one line concept (^r,f^r) minimizing the expected travel time over all seasons. They call it robust since they additionally require that it should on average over all seasons be better than any of the optimal line concepts (^j, f^j). 
With F_j(, f) being the travel time of (,f) evaluated for _j they require for all seasons i that ∑_j=1^k α_j F_j(^r, f^r) ≤∑_j=1^k α_j F_j(^i,f^i) where α_j is the length of the season with OD-matrix _j. Based on the observations that demand of the metrobüs in Istanbul is extremely unsteady (see <cit.>) <cit.> are the first to define a multi-period transit line planning problem: they search for a line concept for each of the periods and couple them by resource constraints which make sure that the resources flow from one period to the next. The model is stated in general form starting from the cost model (<ref>) as basis. In <cit.>, the authors take a passenger-oriented point of view and evaluate line concepts under various OD matrices within a case study. They suggest to find a line concept for the morning peak hour (since this is the busiest demand season) and for other seasons to update the frequencies while keeping the same line plan. The model in <cit.> goes a step further. Here, a line concept (^j, f^j) is determined for each of the demand seasons, where coupling constraints ensure sufficient similarity of the different line concepts: ( (^j, f^j),(^k, f^k)) ≤α ∀ k,l=1,…,K Different (dis)similarity measures are discussed and tested on the basic cost model (<ref>): Similarity of line concepts can be evaluated by looking at the differences of the frequency vectors (defined on ), i.e., (^j, f^j),(^k, f^k)):=f^j-f^k which is interesting also for the case in which only the frequencies change and the lines stay the same for all seasons. A simple measure is to investigate only the number of lines that are added or that change. The most promising approach starts by defining a dissimilarity d(l_1,l_2) between each pair of lines l_1,l_2 first and then use these numbers within a transportation problem (similar to the Wasserstein-distance) to compute the similarity of the line concept, see <cit.>. §.§ Integrating transit line planning with other planning stages The main stages for public transport planning are the design of the infrastructure, followed by transit line planning, timetabling, vehicle- and crew scheduling as depicted in Figure <ref>. Most papers concentrate on one of these stages. Proceeding through the stages sequentially, one after another, we receive a solution which consists of lines, frequencies, a timetable, and vehicle- and crew schedules. It can be evaluated using similar performance indicators as described in Section <ref>, but the decisions made in later planning stages allow to make more precise estimates, e.g., travel times can be evaluated trip-based instead of line based and costs for vehicle and crew can be evaluated based on actual schedules. However, proceeding sequentially stage after stage is just a greedy approach and does in general not find the best possible solution. This has been recognized by many authors. Solving the planning stages integratedly would yield an optimal solution but is computationally out of scope. An integer linear program integrating transit line planning, timetabling, and vehicle scheduling is presented in <cit.> but it can only be solved for tiny networks. In the following we sketch some work which considers line planning together with other planning stages. Line planning and timetabling. Most work on integrating transit line planning with another planning stage concerns the integration of line planning and timetabling. 
There are a few exact models (given as mixed-integer programs), e.g., in <cit.> who model the line selection, frequency setting and timetabling problem respecting capacities and passenger flows, or in <cit.> who choose lines and schedules simultaneously for a line corridor in the context of intercity bus lines. Since such models are hard to solve, different types of heuristic approaches have been proposed. * Iterative approaches switch between transit line planning and timetabling iteratively trying to improve the line concept and/or the timetable in every step. Depending on the submodels used for transit line planning or timetabling, different improvement strategies exist. In <cit.> lines can be broken at stations and re-arranged within a matching problem in order to reduce the number of transfers (but still respecting the timetable). In <cit.> the lengths of the lines are changed in each iteration allowing a more flexible and hence more robust timetable. Within a railway context <cit.> adapt the maximal frequencies in each iteration to deal with infrastructural limitations while <cit.> (also in a railway context) adds a coupling constraint whose level of satisfaction is used within a feedback loop in the iterative process. * A look-ahead strategy is followed by <cit.> who aims at finding a line plan that admits a feasible timetable. Within an iterative approach a small set of incompatible services is identified and modified until a feasible timetable exists. * An evolutionary approach for integrating transit line planning and timetabling is suggested in <cit.>. Line planning, timetabling, and vehicle scheduling. Exact formulations for an integration of these three planning stages are given in <cit.> as linear integer programming models using trip-based aggregation and routing passengers by the planner. Since it is computationally hard to solve these models, several other models and approaches have been developed. * <cit.> deal mainly with timetabling integrating vehicle scheduling and allowing slight rearrangements of lines within their approach. * <cit.> proposes to re-optimize all three stages step by step within an iterative approach. As framework, the so-called eigenmodel <cit.> is developed. Its nodes represent subproblems and the edges indicate which of these problems can be solved after another. In <cit.> it is shown that the approach always converges to a local optimum in a finite number of improvement steps for public transit planning. <cit.> use this idea by changing the order of the subproblems in a sequential approach: they start with vehicle scheduling followed by timetabling and line planning instead of processing in the classic way indicated in Figure <ref>. * A look-ahead approach is followed by <cit.> who develop a line plan together with a timetable integratedly and in such a way that it is a feasible input for the vehicle scheduling problem. This is modeled as a large nonlinear integer optimization problem which is solved by decomposition approaches. Integrating transit line planning with other stages. The integration of transit line planning and vehicle scheduling (without looking at a timetable, but just forming the vehicles' routes) has been treated in <cit.>. Also the integration of transit line planning with tariff planning has been considered, see <cit.> or <cit.> who combine frequency setting and the determination of the bus sizes with the optimization of fares and congestion tolls. 
§.§ Parametric transit line planning There is a stream of literature that regards line planning and frequency setting from a continuum approximation perspective. Rather than designing a line concept and/or frequencies for a specific city, represented by a PTN and OD-pairs on the PTN, the aim is to make general statements about optimal design of line concepts, frequencies, and capacities. An early contribution in this area is <cit.>, who, coming from a microeconomics perspective, analytically studied the problem of determining optimal frequencies and the number of bus stops for a so-called steady-state route, where passengers' origins and destinations are uniformly distributed along the route and bus stops are to be spaced evenly along the route. An optimal frequency is obtained by balancing operator cost (which increase with frequency) and passenger waiting times (which decrease with frequency). This leads to the famous square root formula which states that the frequency should be proportional to the square root of the number of passengers. This line of research has been developed further in, e.g., <cit.>. <cit.> are able to generalize the square root result to lines on networks. Based on these results there is also a stream of publications that consider transit network design on parametric networks, i.e., networks whose topology and demand structure are fully determined by a few input parameters. The goal of most of these publications is to determine optimal design parameters for stylized line concepts (feeder trunk, hub and spoke etc.) and to specify optimal line concepts based on the input parameters with respect to a weighted sum of costs and travel time. <cit.> represent the area for which the line plan is to be designed as a network consisting of a central business district node (CBD) connected to several subcenter nodes which are in turn connected to neighboring subcenters and to periphery nodes. Distances between CBD and subcenters and between subcenters and periphery nodes, as well as the demand structure between the different types of nodes are controlled parameters. They analyze the performance of four general network structures (direct lines, feeder trunk, hub and spoke, and exclusive) under some common transit network design assumptions. For each of the proposed structures, they determine optimal frequencies and capacities for the lines with respect to an objective function composed of operational costs (depending on the total number of vehicles needed and the vehicle capacity, but notably not on the distance driven) and total passenger travel time (composed of in-vehicle time, adaption time, and transfer penalties). Due to the regular network structure, they are able to express the different cost components as functions of the parameters and to compute optimal frequencies and capacities numerically when parameter values are given. <cit.> enrich the model from <cit.> by line spacing. <cit.> compare the performance of the general line network structures proposed in <cit.> with line plans found by approaches developed for transit line planning problems with specific (non-parametrically given) PTN and demand. <cit.> propose a mixed-integer linear program to compute (symmetric) line concepts on parametric cities and discuss the price of symmetry. Analyses similar to <cit.> have been done also for other underlying network structures like grid networks <cit.> and radial networks <cit.>. 
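The square root formula mentioned above can be recovered from a minimal cost-balancing argument; the following derivation is a generic textbook-style sketch (linear operating cost c per unit of frequency, expected waiting time equal to half the headway, value of waiting time γ), not the exact model of any of the cited papers.

```latex
\[
  \min_{f>0}\; Z(f) \;=\; \underbrace{c\,f}_{\text{operator cost}}
  \;+\; \underbrace{\frac{\gamma P}{2f}}_{\text{waiting cost of $P$ passengers}},
  \qquad
  Z'(f^\ast) = c - \frac{\gamma P}{2 (f^\ast)^2} = 0
  \;\;\Longrightarrow\;\;
  f^\ast = \sqrt{\frac{\gamma P}{2c}} \;\propto\; \sqrt{P}.
\]
```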
§.§ Further related problems During disruptions or construction work at rail infrastructure, the corresponding connections are often bridged by buses, leading to ad-hoc route generation, route selection and frequency assignment problems. The bus bridging problem has been studied in, e.g., <cit.>; for further references see also the corresponding section in the recent survey on urban rail disruption management by <cit.>. On-demand transportation often does not operate on fixed routes, but is planned flexibly based on passengers' origins and destinations. This transport mode has long been used to transport people with reduced mobility, where requests are normally known in advance. Recently, it has sparked new interest, also in its online version, with the advent of autonomous vehicles on the horizon. See, e.g., <cit.>. A recent survey is given in <cit.>. <cit.> study the edge (link) investment problem: which segments of a bus line should be upgraded to be used by bus rapid transit, assuming that there are multiple parties responsible for the investments. To avoid bus bunching, different control strategies are possible, such as waiting, stop-skipping, and short-turning. See, e.g., <cit.> and references therein. <cit.> investigate the problem for rail-based transport. A survey is given in <cit.>. There is also a stream of research dealing with the question of how to visualize line plans according to criteria like preservation of network topology and relative position of stations, uniform link lengths between stations, few bends along individual metro lines, or a limited number of link orientations. A mixed-integer linear programming approach to the problem has been described in <cit.>. This approach has been used to generate the line plan in Figure <ref>. For more on line plan visualization, see <cit.>. § OUTLOOK AND FURTHER RESEARCH We hope that this article made a small contribution to the following two goals, which we consider important for the further development of the field of transit planning. The first is to bring researchers from different communities even closer together. We believe that such cooperation will help to improve research on the optimization of public transport significantly. The second challenge is to use more results of transit line planning for optimizing real-life applications, in particular in view of climate change, and eventually make transit line planning procedures available in practice whenever they are needed. § ACKNOWLEDGMENTS The authors thank Martin Nöllenburg and Alexander Wolff for providing Figure <ref>.
http://arxiv.org/abs/2405.09258v1
20240515111804
Characterizing the correlation properties of the atmospheric emission in the 10-20 GHz range with QUIJOTE MFI data
[ "Apolline Chappard", "José Alberto Rubiño-Martín", "Ricardo Tanausú Génova-Santos" ]
astro-ph.CO
[ "astro-ph.CO" ]
^1 Institut d'Astrophysique Spatiale (IAS), CNRS, Bât 120 – 121 Univ. Paris-Saclay, ORSAY CEDEX, 91405, France ^2 Instituto de Astrofísica de Canarias (IAC), C/ Vía Láctea, La Laguna, E-38205, Tenerife, Spain ^3 Departamento de Astrofísica, Universidad de La Laguna (ULL), La Laguna, E-38206, Tenerife, Spain Characterizing the correlation properties of the atmospheric emission in the 10-20 GHz range with QUIJOTE MFI data Apolline Chappard ^1,2, José Alberto Rubiño-Martín ^2,3, and Ricardo Tanausú Génova-Santos ^2,3 May 20, 2024 ================================================================================================================== The QUIJOTE MFI instrument (2012-2018) observed the sky at four frequency bands, namely 11, 13, 17 and 19 GHz, at 1 degree angular resolution. Using around 10000 h of observations in the so-called nominal mode, QUIJOTE MFI produced sky maps covering approximately 29000 deg^2. Here we use the full database of MFI wide survey observations to characterize the correlation properties of the atmospheric signal in those frequency bands. This information will be useful to improve the current sky models at these frequencies, and could be used in further MFI reanalyses, or for the preparation of future observations at these frequencies (e.g., MFI2 and the Tenerife Microwave Spectrometer). § INTRODUCTION The next goal for Cosmic Microwave Background (CMB) research is the detection of primordial B-modes, which are a prediction of the theory of inflation <cit.>. In this theory, at a time of 10^-36 s after the Big Bang, the size of the universe expanded exponentially by at least 50 to 60 e-folds in a very brief period of time <cit.>. This process would produce a background of gravitational waves that could be indirectly observed today through their signature on the CMB polarization as B-mode polarization, parametrized by the tensor-to-scalar ratio r <cit.>. However, specific models of inflation, characterized by their inflationary potentials, do predict r to lie in certain ranges. For instance, the Starobinsky R^2 model predicts r to be between 0.003 and 0.01 (depending on the exact value of n_S). Today, we only have an upper limit for r, namely r < 0.030 <cit.>. Hence, achieving increased sensitivity is imperative to detect smaller signals obscured by various B-mode sources (galactic and extra-galactic), instrumental noise, and, primarily for ground-based telescopes, by the atmosphere. In this study, we performed a statistical analysis of part of the data collected by the Multi Frequency Instrument (MFI) of the QUIJOTE (Q-U-I JOint TEnerife) experiment, the so-called MFI wide survey <cit.>. Our aim was to glean insights into the atmospheric signal, a crucial endeavor for the advancement of next-generation ground-based telescopes, which need a deeper understanding of the impact of the atmosphere. § THE QUIJOTE CMB EXPERIMENT QUIJOTE <cit.> is a CMB experiment located at the Teide Observatory (Tenerife, Spain), a site at an elevation of 2,400 m with a long tradition of CMB research due to its excellent atmospheric conditions. QUIJOTE is a scientific collaboration between Spain and the UK consisting of two telescopes (cf. Figure <ref> left). Both telescopes are mounted on a platform that can rotate around the vertical axis at a maximum frequency of 6 rpm (i.e., 36 deg/s).
The scientific goals of QUIJOTE are 1) to detect the imprint of gravitational B-modes if they have an amplitude greater or equal to r= 0.05 and 2) to provide essential information of the polarization of the synchrotron and the anomalous microwave emissions from our Galaxy at low frequencies (10–40 GHz). Several instruments are mounted on both telescopes. On QT1, the MFI (cf. Figure <ref> right) operated between November 2012 and October 2018 and consisted of four polarimeters (horns): two measuring at 11-13 GHz and two at 17-19 GHz. The MFI database used to construct the wide survey consists of approximately 10,000 hours of observations, organized into 1,300 data files, each spanning 8 hours. The MFI signal comprises various components, such as the astrophysical signals we aim to extract (CMB and galactic emission), but also the instrumental noise and the atmospheric signal. All these components add up to form the signal we measure <cit.>. § ATMOSPHERIC SIGNAL The biggest challenge for ground-based telescopes is that they observe through the atmosphere. Even though the atmosphere is in principle unpolarized, CMB polarization observations are affected through leakage of intensity to polarization. Indeed, atmospheric emission is a dominant source of noise in intensity measurements, since the atmosphere emits at CMB relevant frequencies. The primary sources of contamination for the CMB signal are the emissions from molecular oxygen at 60 and 120 GHz, and the emissions from water vapour at 22 and 183 GHz <cit.>. Among those, water vapour is the most problematic since it has an inhomogeneous and turbulent distribution, which results in irregular, chaotic and ultimately unpredictable fluctuations of the atmospheric signal <cit.>. Therefore we need to monitor the Precipitable Water Vapour (PWV) on site in order to properly model the level of contamination and atmospheric extinction at the observing frequencies. This monitoring is done by a nearby GPS antenna at Teide Observatory, from which we find a median PWV of 3.4 mm across all QUIJOTE MFI nominal data observations. In comparison, the median PWV on the Atacama Cosmology Telescope (ACT) site (Atacama Desert, north of Chile) is 2.39 mm in summer and 1.04 mm in winter <cit.>. A useful approach for modeling the inhomogeneous (i.e. turbulent) atmospheric emission is the Kolmogorov model, which postulates that an unconstrained, minimally viscous fluid such as the atmosphere is maximally turbulent and has a velocity field with scale-invariant statistics. According to this model, water vapour will be distributed in agreement with the spatial power spectrum P(k) ∝ |k|^-11/3, with k the wavenumber measuring the spatial frequency of turbulent eddies in the fluid flow <cit.>. One key ingredient to model the atmospheric signal is the scanning strategy. For QUIJOTE, this strategy consists of measuring the sky at a constant elevation and in azimuth circles, with a scan speed of 12 deg/s (one azimuth circle every 30 s) <cit.>. Given that two horns (2 and 4) of MFI observe the sky at the same frequency band (17–19 GHz) but in slightly different sky regions (they are separated approximately by 5.6^∘ on the sky), they will observe a common-mode atmospheric signal. However, their noise contribution will be statistically independent since each horn is equipped with a different Low Noise Amplifier (LNA). Hence, comparing the signals from those two horns allows the identification of the common atmospheric signal. 
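The idea of extracting the common-mode atmospheric signal by comparing horns 2 and 4 can be illustrated with a small synthetic experiment. The Python sketch below (illustrative code, not part of the MFI pipeline) draws a common fluctuation with an approximately Kolmogorov-like power-law spectrum, adds statistically independent noise for each horn, and evaluates a normalized cross-correlation; the noise level, number of samples and normalization convention are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, slope = 2**12, -11.0 / 3.0                 # number of samples, assumed power-law exponent

# Common-mode signal with power spectrum P(k) ~ |k|^slope (random phases).
k = np.fft.rfftfreq(n)
amplitude = np.zeros_like(k)
amplitude[1:] = k[1:] ** (slope / 2.0)        # square root of the power spectrum
phases = np.exp(2j * np.pi * rng.random(k.size))
atmosphere = np.fft.irfft(amplitude * phases, n)
atmosphere /= atmosphere.std()

# Two horns: same atmosphere, independent (LNA) noise.
horn2 = atmosphere + 0.5 * rng.standard_normal(n)
horn4 = atmosphere + 0.5 * rng.standard_normal(n)

def cross_correlation(f, g):
    """Normalized cross-correlation C[tau] for all time shifts tau."""
    f, g = f - f.mean(), g - g.mean()
    c = np.correlate(f, g, mode="full") / (len(f) * f.std() * g.std())
    lags = np.arange(-len(f) + 1, len(f))
    return lags, c

lags, c = cross_correlation(horn2, horn4)
print("correlation at zero lag:", float(c[lags == 0][0]))  # ~ var(atm) / (var(atm) + var(noise))
```

A pronounced peak at zero lag, as systematically seen in the MFI data, is exactly what this construction reproduces: the peak height reflects the fraction of the variance that is common to both horns.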
§ ANALYSIS AND RESULTS We performed a statistical analysis of the intensity signals from MFI horns 2 and 4 in order to 1) assess the temporal stability of the atmospheric signal and 2) analyze the frequency distribution of the atmospheric signal through the computation of the power spectral density. First, we computed the cross-correlation function of the signals of horns 2 and 4 at 17 GHz and then at 19 GHz. The discrete cross-correlation function between two signals f and g at time shift τ is given by C[τ] = ∑_n=0^N-1 f(n) g(n + τ) / (N √(σ_f^2 σ_g^2)), where N is the number of elements in both signals f and g, and σ_f^2 and σ_g^2 are their variances (the auto-correlation functions at zero lag). After selecting all data with a given azimuth value, the correlation function is computed for all possible time shifts and plotted against this time shift (cf. Figure <ref> left for an example with one data file of MFI). A maximum in the correlation function implies a strong similarity of the signals, while a correlation curve close to zero indicates no correlation. On those plots, we systematically observe a correlation peak at zero lag. This peak indicates that both horns measure the same atmospheric signal. In order to quantify the time coherence of the correlation between the two signals, we define the Coherence Length (CL) as the characteristic time over which the signals remain correlated, reflecting the duration of stability in the atmospheric signal. The coherence length was determined from the width of the correlation peak by fitting a Gaussian curve to each dataset's correlation peak and determining the width of the Gaussian curve at 1/15 of its maximum amplitude. Mathematically, this is expressed as CL = 2 √(2 ln(15)) σ, with σ the standard deviation of the Gaussian fit. After calculating this coherence length for each dataset of MFI at each azimuth (in order to compare the same area of the sky), we derived the median coherence length at each azimuth (cf. Figure <ref> right). Our findings indicate that the atmosphere stays stable for a duration of approximately 2 to 3 hours. Furthermore, it is worthwhile to verify whether our atmospheric signal follows a Kolmogorov spectrum. In order to do so, we computed the cross power spectral density (CPSD), which represents how the signal is distributed across the frequency spectrum, for the signals of horns 2 and 4 at 17 GHz and then at 19 GHz (cf. Figure <ref>). The CPSD is defined as Γ_fg = f̂·ĝ, with f̂ and ĝ the Fourier transforms of the signals f and g. We found that the CPSD of our data roughly follows a Kolmogorov spectrum. § CONCLUSION AND OUTLOOK For ground-based telescopes, the atmosphere contaminates the astrophysical signal at CMB-relevant frequencies. Being able to model and extract the atmospheric signal correctly is essential in order to reach the sensitivity needed to measure the B-modes. In this analysis, we found that the atmospheric signal remains stable over a period of 2 to 3 hours, and that the water vapour is distributed roughly according to a Kolmogorov spectrum. Next, we will investigate the relation between wind speed and coherence length, and the variation of the atmospheric signal within a year. § REFERENCES
[Ka] M. Kamionkowski et al., Phys. Rev. Lett. 78(11), 2058 (1997).
[Gu] A. H. Guth, Phys. Rev. D 23(2), 347 (1981).
[Za] M. Zaldarriaga and U. Seljak, Phys. Rev. D 55(4), 1830 (1997).
[Ga] G. Galloni et al., arXiv:2405.04455.
[Ru] J. A. Rubiño-Martín et al., Mon. Notices Royal Astron. Soc. 519(3), 3383-3431 (2023).
[Ge] R. Génova-Santos et al., Mon. Notices Royal Astron. Soc. 452(4), 4169-4182 (2015).
[Pa] S. Paine, The am atmospheric model, SMA technical memo, <https://doi.org/10.5281/zenodo.438726> (2017).
[Mo] T. W. Morris et al., Phys. Rev. D 105(4), 042004 (2022).
[Ko] A. N. Kolmogorov, Dokl. Akad. Nauk SSSR 32 (1941).
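As a complement to the analysis described in the paper above, a hedged sketch of the two estimators — the coherence length from a Gaussian fit to the correlation peak, and the cross power spectral density with a log-log slope — is given below. It reuses the synthetic horn signals and the (lags, c) correlation from the previous sketch; the fitting window, the 30 s sampling assumption (one azimuth circle) and the use of the CPSD modulus are choices made here for illustration, not details taken from the MFI analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def coherence_length(lags, corr):
    """Width of the correlation peak at 1/15 of its maximum: CL = 2 sqrt(2 ln 15) sigma."""
    def gaussian(x, a, sigma):
        return a * np.exp(-x**2 / (2.0 * sigma**2))
    window = np.abs(lags) < 0.1 * lags.max()              # fit only around the zero-lag peak
    (a, sigma), _ = curve_fit(gaussian, lags[window], corr[window], p0=(corr.max(), 10.0))
    return 2.0 * np.sqrt(2.0 * np.log(15.0)) * abs(sigma)

def cross_psd(f, g, dt=1.0):
    """Cross power spectral density Gamma_fg (modulus) from the Fourier transforms of f and g."""
    freq = np.fft.rfftfreq(len(f), d=dt)
    gamma = np.fft.rfft(f) * np.conj(np.fft.rfft(g))
    return freq[1:], np.abs(gamma[1:])                    # drop the zero-frequency bin

cl = coherence_length(lags.astype(float), c)              # in units of samples
freq, gamma = cross_psd(horn2, horn4, dt=30.0)            # assume one sample per 30 s azimuth circle
slope = np.polyfit(np.log(freq), np.log(gamma), 1)[0]
print(f"coherence length ~ {cl:.0f} samples, CPSD log-log slope ~ {slope:.2f}")
```

For real MFI data, the coherence length converted to time units and accumulated per azimuth is what yields the 2–3 hour stability quoted above, and the fitted slope can be compared against the Kolmogorov expectation.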
http://arxiv.org/abs/2405.08940v1
20240514201032
Dynamical systems and complex networks: A Koopman operator perspective
[ "Stefan Klus", "Nataša Djurdjevac Conrad" ]
math.DS
[ "math.DS", "stat.ML" ]
1]Stefan Klus 2]Nataša Djurdjevac Conrad [1]School of Mathematical & Computer Sciences, Heriot–Watt University, Edinburgh, UK [2]Zuse Institute Berlin, Berlin, Germany Dynamical systems and complex networks: A Koopman operator perspective [ ======================================================================== The Koopman operator has entered and transformed many research areas over the last years. Although the underlying concept—representing highly nonlinear dynamical systems by infinite-dimensional linear operators—has been known for a long time, the availability of large data sets and efficient machine learning algorithms for estimating the Koopman operator from data make this framework extremely powerful and popular. Koopman operator theory allows us to gain insights into the characteristic global properties of a system without requiring detailed mathematical models. We will show how these methods can also be used to analyze complex networks and highlight relationships between Koopman operators and graph Laplacians. § INTRODUCTION This perspective article is meant to be a self-contained introduction to and review of transfer operators such as the Koopman operator and the Perron–Frobenius operator as well as an overview of different applications. We will first introduce the required foundations and then show how transfer operators can not only be used to analyze highly nonlinear dynamical systems but also complex networks. In particular, we will focus on relationships between transfer operators for continuous-time stochastic processes defined on a continuous state space—whether they be reversible, non-reversible but time-homogeneous, or time-inhomogeneous—and their discrete counterparts associated with random walks on undirected, directed, and time-evolving graphs. Transfer operators play an important role in an increasing number of research fields. A few exemplary applications are illustrated in Figure <ref>. More details regarding the specific application areas can be found in the following publications: (a) molecular dynamics <cit.>, (b) fluid dynamics <cit.>, (c) climate science <cit.>, (d) quantum physics <cit.>, (e) chaotic dynamical systems <cit.>, (f) system identification <cit.>, (g) control theory <cit.>, and (h) graphs and networks <cit.>. Koopman-based methods have also been applied to video data, EEG recordings, traffic flow data, stock prices, and various other data sets. The goal is always to identify characteristic dynamical properties of the underlying system from data. In molecular dynamics, we are often interested in detecting slow processes such as the folding and unfolding of proteins <cit.>. Conformations of molecules can be regarded as metastable states. Metastability implies that on short timescales the system appears to be equilibrated, but on longer timescales undergoes rare transitions between such metastable states. That is, the system will typically spend a long time in one part of the state space before it transitions to another region. Such metastable states are caused by deep wells in the energy landscape. Crossing the energy barrier from one well to another well is a rare event. Metastability is reflected in the spectrum of associated transfer operators, where the number of dominant eigenvalues corresponds to the number of metastable states. The eigenvalues are related to inherent timescales and the eigenfunctions contain information about the locations of the metastable states. 
Methods for detecting metastability also have important applications in climate science or agent-based modeling, see, for instance, <cit.>. In the same way, clusters in a graph can be interpreted as metastable states of a random walk process defined on the graph <cit.>. A cluster is a set of vertices such that on short timescales the random walk process will, with a high probability, explore the cluster before moving to another part of the graph. Transitions between clusters are rare events and can be regarded as crossing an energy barrier caused by edges with low transition probabilities. Analogously, the cluster structure is reflected in the spectrum of associated graph Laplacians or, equivalently, transfer operators defined on graphs <cit.>. The number of dominant eigenvalues corresponds to the number of clusters and the associated eigenvectors contain information about the locations of the clusters. Over the last decades, many different clustering techniques based on properties of graph Laplacians have been developed, see, e.g., <cit.>. The decomposition into metastable states but also spectral clustering methods rely on the existence of a so-called spectral gap. That is, we assume that there exist only a few isolated dominant eigenvalues close to one (or zero if we consider generators of transfer operators or graph Laplacians) and that the subsequent eigenvalues are significantly smaller. In practice, however, this is not necessarily the case. If there is no well-defined spectral gap, this typically indicates that the process is not metastable or that there are no weakly coupled clusters. Furthermore, the spectra of Koopman operators and graph Laplacians are in general only real-valued if the underlying stochastic process is reversible or, analogously, if the graph is undirected. Non-reversible processes and random walks on directed graphs typically result in complex-valued eigenvalues and eigenfunctions or eigenvectors, and conventional methods typically fail to identify meaningful slowly evolving spatio-temporal patterns or clusters. A generalization of metastable sets to non-reversible and time-inhomogeneous processes, called coherent sets <cit.>, is based on the forward–backward dynamics of the system and gives rise to self-adjoint operators—with respect to suitably weighted inner products—so that the spectrum becomes real-valued again. The notion of coherence has been extended to graphs in <cit.>. The resulting spectral clustering methods can be used to identify clusters in directed and also time-evolving graphs. The aim of this work is to illustrate how methods developed for the analysis of complex dynamical systems can be applied to graphs, but we will also turn the question around and ask how graph algorithms can be used to analyze dynamical systems in the hope that this will lead to cross-fertilization of the two scientific disciplines. The remainder of the article is structured as follows: We will first introduce transfer operators associated with stochastic dynamical systems and random walks on graphs in Section <ref>, illustrate how these operators can be approximated and estimated from data in Section <ref>, and then show that different types of dynamics induce different graph structures in Section <ref>. Open problems and future work will be discussed in Section <ref>. § STOCHASTIC PROCESSES, RANDOM WALKS, AND TRANSFER OPERATORS We will briefly review the required dynamical systems and graph theory basics. 
§.§ Stochastic processes Given a stochastic process {X_t}_t ≥ 0 defined on a bounded state space 𝕏⊂^d and any measurable set 𝔸, the transition density function p_t,τ𝕏×𝕏→_+ is defined by ℙ[X_t+τ∈𝔸| X_t = x] = _𝔸 p_t,τ(x, y) dy. That is, the function p_t,τ(x, ·) describes the probability density of X_t+τ given that X_t = x. We will in particular consider processes governed by stochastic differential equations (SDEs) of the form dX_t = b(X_t, t) dt + σ(X_t, t) dW_t, where b ^d ×_+ →^d is the drift term, σ^d ×_+ →^d × d the diffusion term, and W_t a d-dimensional Wiener process. The overdamped Langevin equation dX_t = -∇ V(X_t) dt + √(2 β^-1)dW_t plays an essential role in molecular dynamics. Here, V ^d → is a given energy potential and β the inverse temperature. As a guiding example, we will consider the case d = 2 and define V(x) = x_1^4 - 116 x_1^3 - 2 x_1^2 + 316x_1 + 98 + x_2^4 - 18 x_2^3 - 2 x_2^2 + 38 x_2 + 54 and set β = 3. This energy landscape has four wells and the deepest well (-1,-1) is located in the lower left quadrant and the shallowest well (1,1) in the upper right quadrant. The potential and trajectories generated using the Euler–Maruyama method with step size h = 10^-3 are shown in Figure <ref>. Given a fixed lag time τ, considering the state only at multiples of τ, i.e., X_0, X_τ, X_2 τ, …, induces a discrete-time non-deterministic dynamical system Θ_t,τ, given by Θ_t,τ(x) ∼ p_t,τ(x, ·). The lag time should be chosen in such a way that it is not too small (transitions of interest might not have occurred yet) and not too large (timescales of interest might be damped out and generating data will be computationally expensive). A process is called time-homogeneous if the transition probabilities from time t to time t + τ depend only on the time difference τ but not on t itself. This is, for instance, the case for the overdamped Langevin equation (<ref>). If the SDE is time-inhomogeneous, then also the transition densities and the transfer operators that we will introduce below are time-dependent. To simplify the notation, we will from now on omit this time-dependence and write p_τ(x, y) instead of p_t,τ(x, y), it will be clear from the context whether or not the system is time-homogeneous. §.§ Random walk processes A stochastic process {X_t}_t ≥ 0 on a finite state space X = {[1]x, …, [n]x} is called a Markov process if, given a present state, the future and past jumps are independent of each other, i.e., if it satisfies the Markov property ℙ[X_t+τ = y |{X_s,s ≤ t}] = ℙ[X_t+τ = y | X_t ], for all τ≥ 0, where {X_s,s ≤ t} is the collection of values of the stochastic process up to time t. A discrete-time Markov process defined on a discrete state space is called a Markov chain. For time-homogeneous Markov chains, i.e., ℙ[X_t+1 = y | X_t = x] = ℙ[X_t = y | X_t-1 = x] = … = ℙ[X_2 = y | X_1 = x], we define a transition probability from state x to state y by p(x,y) = ℙ[X_t+1 = y | X_t = x] for all x, y ∈X. It holds that p(x,y)≥ 0 and ∑_y∈Xp(x,y) = 1. The associated row-stochastic transition matrix S ∈^n×n is defined by s_ij = p([i]x, [j]x), i, j = 1, …, n. We will consider Markov chains associated with graphs. Let G = (X, E, w) be a weighted directed graph, where X = {[1]x, …, [n]x} is a set of n vertices (also called nodes), E⊆X×X a set of edges (also called links), and w X×X→_+ a weight function. Here, w(x, y) > 0 is the weight of the edge (x, y) ∈E and w(x, y) = 0 if (x, y) ∉E. 
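For concreteness, here is a minimal sketch of how trajectories of the quadruple-well example above can be generated with the Euler–Maruyama scheme (β = 3 and step size h = 10^-3 as in the example); this is illustrative code, not the authors' implementation, and the initial condition and random seed are arbitrary.

```python
import numpy as np

beta, h = 3.0, 1e-3

def grad_V(x):
    """Gradient of the quadruple-well potential V defined above."""
    return np.array([
        4.0 * x[0]**3 - (3.0 / 16.0) * x[0]**2 - 4.0 * x[0] + 3.0 / 16.0,
        4.0 * x[1]**3 - (3.0 / 8.0) * x[1]**2 - 4.0 * x[1] + 3.0 / 8.0,
    ])

def euler_maruyama(x0, n_steps, seed=0):
    """Simulate dX_t = -grad V(X_t) dt + sqrt(2/beta) dW_t with step size h."""
    rng = np.random.default_rng(seed)
    x = np.empty((n_steps + 1, 2))
    x[0] = x0
    noise = np.sqrt(2.0 / beta * h) * rng.standard_normal((n_steps, 2))
    for k in range(n_steps):
        x[k + 1] = x[k] - grad_V(x[k]) * h + noise[k]
    return x

trajectory = euler_maruyama(np.array([-1.0, -1.0]), n_steps=100_000)
```

A trajectory started in the deepest well stays there for long stretches and only rarely hops to one of the other wells, which is the metastable behavior referred to above.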
On a given graph G, we can then define a Markov chain as a random walk process determined by the transition matrix S, where the transition probability of going from a vertex x to a vertex y is given by p(x,y) = w(x,y)/d(x), with d(x) = ∑_y ∈X w(x,y). That is, d(x) is the weighted out-degree of a vertex x. In order to illustrate the close relationships between a continuous-time process defined on a continuous state space and random walks on a graph, we coarse-grain the quadruple-well problem introduced in Example <ref> by subdividing the domain 𝕏 = [-1.75, 1.75] × [-1.75, 1.75] into 16 × 16 equally sized boxes. Each box is represented by a vertex and the weight of an edge connecting two vertices is defined to be the transition probability between the corresponding boxes. This will be explained in more detail below. The random-walk process on the induced graph again exhibits metastable behavior as shown in Figure <ref>. This illustrates that metastable sets and clusters are closely related. The random walk process can be viewed as a stochastic differential equation discretized in time and space. §.§ Transfer operators In what follows, we will often introduce two different variants of definitions, one for stochastic differential equations (shown on the left) and one for random walks on graphs (shown on the right). A probability density is a nonnegative function μ satisfying [sidebyside align=bottom, after skip=10pt] _𝕏μ(x) dx = 1, ∑_x ∈Xμ(x) = 1. Let L_μ^q denote the μ-weighted L^q-space of functions such that [sidebyside align=bottom] f_L_μ^q := _𝕏f(x)^q μ(x) d x < ∞, f_L_μ^q := ∑_x ∈Xf(x)^q μ(x) < ∞. Unweighted spaces will simply be denoted by L^q. The μ-weighted duality pairing is defined by [sidebyside align=bottom] fg_μ = _𝕏 f(x) g(x) μ(x) d x, fg_μ = ∑_x ∈X f(x) g(x) μ(x). Let ρ be a probability density, f an observable, and u a function w.r.t. a strictly positive initial density μ such that ρ = μ u. We define the following transfer operators: [sidebyside align=center] 0.91 ==1.4pt2[ 𝒫^τρ(x) = _𝕏 p_τ(y, x) ρ(y) d y,; 𝒦^τ f(x) = _𝕏 p_τ(x, y) f(y) d y,; 𝒯^τ u(x) = 1/ν(x)_𝕏 p_τ(y, x) μ(y) u(y) d y,; ℱ^τ u(x) = _𝕏 p_τ(x, y) 1/ν(y)_𝕏 p_τ(z, y) μ(z) u(z) dz dy,; ℬ^τ f(x) = 1/ν(x)_𝕏 p_τ(y, x) μ(y) _𝕏 p_τ(y, z) f(z) d z d y, ] 0.91 ==1.4pt2[ 𝒫ρ(x) = ∑_y ∈X p(y, x) ρ(y),; 𝒦 f(x) = ∑_y ∈X p(x, y) f(y),; 𝒯 u(x) = 1/ν(x)∑_y ∈X p(y, x) μ(y) u(y),; ℱ u(x) = ∑_y ∈X p(x, y) 1/ν(y)∑_z ∈X p(z, y) μ(z) u(z),; ℬ f(x) = 1/ν(x)∑_y ∈X p(y, x) μ(y) ∑_z ∈X p(y, z) f(z), ] where ν = 𝒫μ is the resulting image density, i.e., [sidebyside align=bottom, after skip=5pt] 0.91 ν(x) = _𝕏 p_τ(y, x) μ(y) dy, 0.91 ν(x) = ∑_y ∈X p(y, x) μ(y), also assumed to be strictly positive. Here, 𝒫 L^2 → L^2 is the Perron–Frobenius operator, which describes the evolution of probability densities, and 𝒯 L_μ^2 → L_ν^2 a reweighted Perron–Frobenius operator, which propagates probability densities w.r.t. the reference density μ. Further, 𝒦 L^2 → L^2 or 𝒦 L_ν^2 → L_μ^2, depending on whether we consider it to be the adjoint of 𝒫 or 𝒯, is the so-called stochastic Koopman operator, which describes the evolution of observables.[Typically, 𝒫^τ and 𝒦^τ are first defined on L^1 and L^∞, respectively, but the operators can be extended to other function spaces L^q and L^q', with 1/q + 1/q' = 1, see <cit.> for more details.] The operators ℱ = 𝒦𝒯 L_μ^2 → L_μ^2 and ℬ = 𝒯𝒦 L_ν^2 → L_ν^2 are called forward–backward operator and backward–forward operator, respectively. For a detailed introduction to transfer operators, see <cit.>. 
The corresponding graph transfer operators on the right were introduced in <cit.>. Since the state space is in this case finite-dimensional, all norms are equivalent. Note that the transfer operators on the left explicitly depend on the lag time τ, whereas the operators on the right are associated with the discrete-time random walk process. When statements hold for the discrete case and the continuous case, we will sometimes omit the superscript τ to highlight properties these operators have in common. Let us illustrate the definitions of the transfer operators 𝒫^τ and 𝒦^τ using the quadruple-well problem introduced in Example <ref>. The Perron–Frobenius operator describes the evolution of probability densities and the Koopman operator the evolution of observables as shown in Figure <ref>. Since the function spaces L_μ^2 associated with graphs are n-dimensional, we can represent transfer operators by n×n matrices. Let S ∈^n×n be the row-stochastic matrix (<ref>) and define ρ = [ρ([1]x), …, ρ([n]x)]^⊤∈^n, then 𝒫ρ([i]x) = [S^⊤ρ]_i, i.e., S^⊤ can be viewed as a matrix representation P of the Perron–Frobenius operator 𝒫. We obtain the matrix representations P = S^⊤, K = S, T = D_ν^-1 S^⊤ D_μ, F = S D_ν^-1 S^⊤ D_μ, B = D_ν^-1 S^⊤ D_μ S of the corresponding operators, where D_μ = (μ([1]x), …, μ([n]x)) and D_ν = (ν([1]x), …, ν([n]x)) are invertible since we assumed the densities μ and ν to be strictly positive. Properties of transfer operators will be demonstrated in more detail in Section <ref>. §.§ Infinitesimal generators Instead of analyzing spectral properties of transfer operators, we can also consider the associated infinitesimal generators. Given a time-homogeneous stochastic differential equation of the form (<ref>), the Koopman operators {𝒦^τ}_τ≥ 0 form a so-called one-parameter semigroup of operators, whose infinitesimal generator, defined by ℒ f = lim_τ→ 01/τ(𝒦^τ f - f ), is given by ℒ f = ∑_i=1^d b_i fx_i + 1/2∑_i=1^d ∑_j=1^d a_ij^2 fx_i ∂ x_j, where a = σσ^⊤. Its adjoint, the generator of the Perron–Frobenius operator, can be written as ℒ^* ρ = -∑_i=1^d (b_i ρ)x_i + 1/2∑_i=1^d ∑_j=1^d ^2 (a_ijρ)x_i ∂ x_j. The second-order partial differential equations ft = ℒ f and ρt = ℒ^* ρ are called Kolmogorov backward equation and Fokker–Planck equation, respectively. Due to the spectral mapping theorem, see, e.g., <cit.>, if λ is an eigenvalue of the generator, then e^λτ is an eigenvalue of the corresponding operator with lag time τ and the eigenfunctions are identical. We can also write ℒ^* ρ = -∇ J(ρ), with J(ρ) = b ρ - 1/2∇ (a ρ), where J is called the probability flux <cit.>. The infinitesimal generators associated with the Langevin equation (<ref>) can be written as ℒ f = -∇ V ∇ f + β^-1Δ f and ℒ^* ρ = Δ V ρ + ∇ V ∇ρ + β^-1Δρ. The probability flux is given by J(ρ) = -∇ V ρ - β^-1∇ρ. A discrete analogue of the Kolmogorov backward equation for random-walk processes on graphs is the system of linear ordinary differential equations d/dt f = L f,   where   L(x, y) ≥ 0   ∀ x ≠ y and L(x, x) = -∑_y∈X∖{x} L(x, y), which describes consensus dynamics. Here, L is often called a rate matrix, where the off-diagonal entries L(x, y), with x ≠ y, represent the transition rates, i.e., the average number of transitions from x to y per time unit. Similarly, the values -L(x, x) correspond to the escape rates that determine the waiting times of a Markov jump process; the expected waiting time in a node x is |L(x, x)|^-1. The rate matrix L generates a whole family of transition matrices S^τ = exp(τ L), τ≥ 0. 
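A small numerical illustration of the rate-matrix semigroup just described (the three-state rate matrix below is made up for illustration): the off-diagonal entries are transition rates, each row sums to zero, the expected waiting time in a state is |L(x, x)|^-1, and the matrix exponential yields a row-stochastic transition matrix for every lag time.

```python
import numpy as np
from scipy.linalg import expm

# A made-up rate matrix: nonnegative off-diagonal entries (transition rates),
# diagonal entries chosen so that every row sums to zero.
L = np.array([[-1.0,  0.8,  0.2],
              [ 0.5, -0.7,  0.2],
              [ 0.1,  0.4, -0.5]])

waiting_times = 1.0 / np.abs(np.diag(L))        # expected waiting time in each state
S_tau = expm(0.5 * L)                           # transition matrix for lag time tau = 0.5

print("row sums of S^tau:", S_tau.sum(axis=1))  # row-stochastic: all ones
print("expected waiting times:", waiting_times)
```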
Analogously, the differential equation d/dtρ = L^* ρ can be regarded as a continuous-time version of a random walk on a graph and corresponds to the Fokker–Planck equation, see <cit.>. In the field of spectral graph theory, specific choices of the matrix L have been shown to capture important graph characteristics, e.g., the combinatorial graph Laplacian or the normalized graph Laplacian. We establish a connection with the random-walk graph Laplacian that is defined as L_rw(x, y) = 1, x = y, -w(x, y)/d(x), x ≠ y, (x, y)∈E, 0, otherwise, so that L = -L_rw is a rate matrix. Here, the expected waiting time in a node is proportional to its degree, which implies that the process is slower in nodes with high degrees, such as densely connected clusters. Other variants of time-continuous random walks have been considered in <cit.>. Since it holds that L_rw = I - K, computing the largest eigenvalues of the matrix K is equivalent to computing the smallest eigenvalues of the random-walk graph Laplacian. This shows that conventional spectral clustering relies on the dominant eigenfunctions of the Koopman operator and can be interpreted in terms of metastable sets. A natural generalization of spectral clustering to directed and time-evolving graphs is thus to use the forward–backward operator so that the resulting clusters can be viewed as coherent sets. We call the matrix L_fb = I - F forward–backward Laplacian. Detailed derivations and applications can be found in <cit.>. §.§ Invariant density and reversibility In what follows, we will consider the reweighted Perron–Frobenius operator 𝒯 with respect to different probability densities. A particularly important role is played by the invariant density. A density π is called invariant if 𝒫π = π. That is, π is an eigenfunction of the Perron–Frobenius operator corresponding to the eigenvalue λ = 1. Let us consider two simple examples: * The invariant density of the overdamped Langevin equation (<ref>) is given by π(x) = 1/Z e^-β V(x), with Z = _𝕏 e^-β V(x)dx, since it can be easily shown that J(π) = 0 and thus ℒ^* π = -∇ J(π) = 0. That is, π is an eigenfunction of ℒ^* associated with the eigenvalue λ = 0 and hence an eigenfunction of 𝒫^τ associated with the eigenvalue λ = 1. * Given a connected undirected graph G, the invariant density is given by π(x) = 1/Z d(x), with Z = ∑_x ∈X d(x), since 𝒫π(x) = ∑_y ∈X p(y, x) π(y) = ∑_y ∈Xw(y, x)/d(y)1/Z d(y) = 1/Z∑_y ∈X w(x, y) = 1/Z d(x) = π(x), where we used the fact that w(x, y) = w(y, x) since the graph is undirected. If we choose μ to be the invariant density (assuming it exists), i.e., μ = π, this immediately implies that ν = π and the operator 𝒯 can be written as [sidebyside align=bottom] 𝒯^τ u(x) = 1/π(x)_𝕏 p_τ(y, x) π(y) u(y) d y, 𝒯 u(x) = 1/π(x)∑_y ∈X p(y, x) π(y) u(y). The Perron–Frobenius operator reweighted with respect to the invariant density π will be considered in more detail below. The process is called reversible if there exists a probability density π that satisfies the detailed balance condition [sidebyside align=bottom, after skip=10pt] π(x) p_τ(x, y) = π(y) p_τ(y,x)   ∀ x, y ∈𝕏, π(x) p(x, y) = π(y) p(y, x)   ∀ x, y ∈X. A stochastic process with generator ℒ is reversible if and only if the stationary probability flux J(π) vanishes <cit.>. Let us again consider the guiding examples: * We have seen above that the probability flux J(π) for the overdamped Langevin equation is identically zero so that the process is reversible. 
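The matrix representations and Laplacians discussed above are straightforward to assemble from a weighted adjacency matrix. The following sketch (a toy undirected graph consisting of two weakly coupled triangles; illustrative code) builds the row-stochastic matrix S, the Koopman and Perron–Frobenius matrices K and P, checks that π ∝ d is invariant, forms T and F for μ = π, and sets up the random-walk and forward–backward Laplacians.

```python
import numpy as np

# Weighted adjacency matrix of two triangles coupled by one weak edge.
W = np.array([[0.00, 1.00, 1.00, 0.05, 0.00, 0.00],
              [1.00, 0.00, 1.00, 0.00, 0.00, 0.00],
              [1.00, 1.00, 0.00, 0.00, 0.00, 0.00],
              [0.05, 0.00, 0.00, 0.00, 1.00, 1.00],
              [0.00, 0.00, 0.00, 1.00, 0.00, 1.00],
              [0.00, 0.00, 0.00, 1.00, 1.00, 0.00]])

d = W.sum(axis=1)                      # weighted degrees
S = W / d[:, None]                     # row-stochastic matrix, p(x, y) = w(x, y)/d(x)
K, P = S, S.T                          # Koopman and Perron-Frobenius matrices

pi = d / d.sum()                       # candidate invariant density
print(np.allclose(P @ pi, pi))         # True: P pi = pi for the undirected graph

mu = pi                                # reference density; then nu = P mu = pi as well
nu = P @ mu
T = np.diag(1.0 / nu) @ S.T @ np.diag(mu)
F = S @ np.diag(1.0 / nu) @ S.T @ np.diag(mu)   # forward-backward operator

L_rw = np.eye(len(d)) - K              # random-walk graph Laplacian
L_fb = np.eye(len(d)) - F              # forward-backward Laplacian

print(np.sort(np.linalg.eigvals(F).real)[::-1][:3])
```

The two eigenvalues of F close to one (equivalently, the two eigenvalues of L_fb close to zero) reflect the two clusters of the toy graph.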
Note that this is a stronger property than ℒ^* π = -∇ J(π) = 0, which only implies that the probability flux is divergence-free. * A random walk process on an undirected graph is reversible since π(x) p(x, y) = 1/Z d(x) w(x, y)/d(x) = 1/Z d(y) w(y, x)/d(y) = π(y) p(y, x). * Using the detailed balance condition, we have 𝒯_τ f(x) = 1/π(x)_𝕏 p_τ(y, x) π(y) f(y) dy = _𝕏 p_τ(x, y) f(y) d y = 𝒦_τ f(x), 𝒯 f(x) = 1/π(x)∑_y ∈X p(y, x) π(y) f(y) = ∑_y ∈X p(x, y) f(y) = 𝒦 f(x). This shows that for reversible dynamical systems the transfer operator 𝒯 reweighted with respect to the invariant density π is identical to the Koopman operator 𝒦. Furthermore, we obtain 𝒫fg_π^-1 = f𝒫 g_π^-1 and 𝒦fg_π = f𝒦g_π. That is, if the stochastic process is reversible, the operators 𝒫 and 𝒦 are self-adjoint, which implies that the eigenvalues are real-valued and the eigenfunctions form an orthogonal basis with respect to the respective reweighted inner products. The slow dynamics of the process are encoded in the eigenfunctions φ_ℓ corresponding to the eigenvalues λ_ℓ≈ 1 since then 𝒦φ_ℓ = λ_ℓφ_ℓ≈φ_ℓ. Dynamics associated with small eigenvalues, on the other hand, are damped out quickly. §.§ Non-reversible processes One approach to accelerate the convergence of a process to the invariant density is to depart from reversible dynamics <cit.>. The degree or irreversibility of the system can be quantified through the entropy production rate, which has been studied both for non-equilibrium Markov processes <cit.> as well as for directed graphs <cit.> and their cycle decompositions <cit.>. Let us modify the examples considered above: * A simple way to construct a non-reversible system is to add a divergence-free term (with respect to the invariant distribution), e.g., dX_t = (-∇ V(X_t) + M ∇ V(X_t)) dt + √(2 β^-1)dW_t, where M is an antisymmetric matrix. The invariant density of this SDE is again π(x) = 1/Z e^-β V(x) since the probability flux J(π) = 1/Z M ∇ V e^-β V is divergence-free, but it does not vanish. That is, the process admits an invariant distribution, but is not reversible. * Given an undirected graph G = (X, E, w) with invariant density π, it is possible to construct a directed graph G = (X, E, w) with the same invariant density by defining w(x, y) = w(x, y) + m(x, y), with ∑_x ∈X m(x, y) = ∑_y ∈X m(x, y) = 0, since then d(x) = d(x) and p(x, y) = p(x, y) + m(x, y)/d(x) so that 𝒫π(x) = ∑_y ∈Xp(y, x) π(y) = ∑_y ∈X p(y, x) π(y) + ∑_y ∈Xm(y, x)/d(y)π(y) = π(x), where 𝒫 is the Perron–Frobenius operator associated with the modified graph G. The second sum is zero as π(y)/d(y) = 1/Z and the m(y, x) terms sum up to zero. If the stochastic differential equation is non-reversible or, analogously, the graph is not undirected, then the eigenvalues of the Perron–Frobenius and Koopman operators are in general complex-valued and conventional methods to detect metastable sets or clusters typically fail, see <cit.> for more details. To circumvent this problem, a common approach is to consider eigenvalues and eigenfunctions of the forward–backward and backward–forward operators instead. The eigenvalues of ℱ and ℬ are contained in [0, 1]. The proof requires auxiliary results that are contained in Appendix <ref>. Corollary <ref> implies that the eigenvalues of ℱ and ℬ are real-valued and non-negative. Combining this with Lemma <ref> concludes the proof. In order to detect coherent sets, we compute eigenfunctions φ_ℓ corresponding to large eigenvalues λ_ℓ≈ 1 of the forward–backward operator ℱ so that ℱφ_ℓ = λ_ℓφ_ℓ≈φ_ℓ. 
These functions are now almost invariant under the forward–backward dynamics. Let ψ_ℓ := 1/√(λ_ℓ)𝒯φ_ℓ, then 𝒦ψ_ℓ = √(λ_ℓ)φ_ℓ, 𝒯φ_ℓ = √(λ_ℓ)ψ_ℓ, ℱφ_ℓ = λ_ℓφ_ℓ, ℬψ_ℓ = λ_ℓψ_ℓ. This shows that the eigenfunctions of the forward–backward operator can also be interpreted as the singular functions of the reweighted Perron–Frobenius operator and the eigenfunctions of the backward–forward operator as the singular functions of the Koopman operator. § APPROXIMATION OF TRANSFER OPERATORS Transfer operators associated with stochastic processes defined on continuous state spaces are infinite-dimensional. In practice, however, we have to restrict ourselves to finite-dimensional subspaces. Furthermore, we also typically have to estimate approximations of these operators from data. Graph transfer operators, on the other hand, are already finite-dimensional and can be easily computed from the adjacency matrices. §.§ Galerkin approximation of transfer operators Let {ϕ_i }_i=1^n be a set of n linearly independent basis functions spanning an n-dimensional subspace of the domain of an arbitrary operator 𝒜 that we want to approximate and ϕ(x) = [ϕ_1(x), …, ϕ_n(x)]^⊤∈^n. That is, any function f in this subspace can be written as f(x) = ∑_i=1^n c_i ϕ_i(x) = c^⊤ϕ(x), where c = [c_1, …, c_n]^⊤∈^n. The Galerkin projection 𝒜_ϕ of the operator 𝒜 onto the n-dimensional subspace is defined by the matrix A_ϕ = [G_0(μ)]^-1 G_1(𝒜, μ) ∈^n × n, with entries [G_0(μ)]_ij = ϕ_iϕ_j_μ and [G_1(𝒜, μ)]_ij = ϕ_i𝒜ϕ_j_μ. We then obtain 𝒜_ϕ f(x) = (A_ϕ c)^⊤ϕ(x). Eigenfunctions of 𝒜 can hence be approximated by eigenfunctions of 𝒜_ϕ. Let ξ_ℓ be an eigenvector of A_ϕ with associated eigenvalue λ_ℓ, then defining φ_ℓ(x) = ξ_ℓ^⊤ϕ(x) yields 𝒜_ϕφ_ℓ(x) = (A_ϕξ_ℓ)^⊤ϕ(x) = λ_ℓξ_ℓ^⊤ϕ(x) = λ_ℓφ_ℓ(x). Lemma <ref> immediately implies that G_1(𝒯, ν) = G_1(𝒦, μ)^⊤ since [G_1(𝒯, ν)]_ij = ϕ_i𝒯ϕ_j_ν = 𝒦ϕ_iϕ_j_μ = [G_1(𝒦, μ)]_ji. Although the state space of graph transfer operators is already finite-dimensional and we can easily construct the matrix representations of the operators, a Galerkin approximation can still be used to project the dynamics given by the random walk process onto a lower-dimensional space. This reduces the computational complexity of the resulting eigenvalue problems. A open problem, however, is how to choose the basis functions in such a way that the dominant spectrum and thus the cluster structure is retained, see <cit.> for further details and examples. §.§ Data-driven approximation of transfer operators Typically, the integrals required for the Galerkin approximation are estimated from trajectory data using Monte Carlo integration. Given training data { (x^(k), y^(k)) }_k=1^m, where x^(k) is sampled from the distribution μ and y^(k) = Θ_τ(x^(k)) is hence sampled from the distribution ν, we define Φ_x = [ ϕ(x^(1)), ϕ(x^(2)), …, ϕ(x^(m)) ], Φ_y = [ ϕ(y^(1)), ϕ(y^(2)), …, ϕ(y^(m)) ], and the matrices C_xx = 1/m Φ_x Φ_x^⊤, C_yy = 1/m Φ_y Φ_y^⊤, C_xy = 1/m Φ_x Φ_y^⊤, C_yx = 1/m Φ_y Φ_x^⊤ so that [C_xx]_ij = 1/m ∑_k=1^m ϕ_i(x_k) ϕ_j(x_k) m →∞⟶ ϕ_iϕ_j_μ= [G_0(μ)]_ij, [C_yy]_ij = 1/m ∑_k=1^m ϕ_i(y_k) ϕ_j(y_k) m →∞⟶ ϕ_iϕ_j_ν= [G_0(ν)]_ij [C_xy]_ij = 1/m ∑_k=1^m ϕ_i(x_k) ϕ_j(y_k) m →∞⟶ ϕ_i𝒦^τ ϕ_j_μ= [G_1(𝒦^τ, μ)]_ij, [C_yx]_ij = 1/m ∑_k=1^m ϕ_i(y_k) ϕ_j(x_k) m →∞⟶ 𝒦^τ ϕ_iϕ_j_μ= [G_1(𝒯^τ, ν)]_ij. That is, we can approximate the Galerkin projections of 𝒦 and 𝒯 and hence also of ℱ and ℬ from data. 
The matrix representations of the estimated Galerkin projections are K_ϕ≈ C_xx^-1 C_xy, T_ϕ≈ C_yy^-1 C_yx, F_ϕ≈ C_xx^-1 C_xy C_yy^-1 C_yx, B_ϕ≈ C_yy^-1 C_yx C_xx^-1 C_xy. This data-driven estimation of the Koopman operator, later generalized to other transfer operators, is called extended dynamic mode decomposition (EDMD) <cit.>. §.§ Ulam's method One of the simplest and also most popular approaches for approximating transfer operators is Ulam's method <cit.>, which can be regarded as a Galerkin projection of the operator onto a finite-dimensional subspace spanned by indicator functions <cit.>. Let {𝔹_1, …, 𝔹_n } be a decomposition of 𝕏⊂^d into n disjoint sets (e.g., a box discretization of the domain), i.e., ⋃_i=1^n 𝔹_i = 𝕏 and 𝔹_i ∩𝔹_j = ∅ ∀ i j, and ϕ_i := 1_𝔹_i the corresponding indicator functions defined by 1_𝔹_i(x) = 1, x ∈𝔹_i, 0, otherwise. The resulting subspace spanned by these basis functions consists of piecewise constant functions. As the sets 𝔹_i are disjoint by definition, G_0(μ) is a diagonal matrix, whose entries are identical to the row sums of the matrix G_1(𝒦^τ, μ). That is, K_n^τ = [G_0(μ)]^-1 G_1(𝒦^τ, μ) is a row-stochastic matrix containing the transition probabilities between the disjoint sets. If we view the matrix G_1(𝒦^τ, μ) as a weighted adjacency matrix of an induced graph, then the matrix K_n^τ is the transition probability matrix S. Instead of analyzing trajectories of the time-continuous stochastic process, we then consider the corresponding coarse-grained map defined by the discrete random walk process on the induced graph as illustrated in Example <ref>. Note that the graph structure depends on the lag time τ. Computing metastable sets using Ulam's method is thus equivalent to spectral clustering of the induced graph. § GRAPH REPRESENTATIONS OF STOCHASTIC PROCESSES In this section, we will analyze properties of transfer operators associated with different types of dynamics and their induced graph representations. §.§ Time-homogeneous reversible processes and undirected graphs We first restrict ourselves to time-homogeneous reversible stochastic processes such as the overdamped Langevin dynamics and random walks on undirected graphs (see Example <ref>). Approximating the Koopman operator 𝒦^τ with respect to the π-weighted inner product using Ulam's method, the matrix G_1(𝒦^τ, π) is symmetric since [G_1(𝒦^τ, π)]_ij = ϕ_i𝒦^τϕ_j_π = 𝒦^τϕ_iϕ_j_π = [G_1(𝒦^τ, π)]_ji and the associated induced graph is automatically undirected. Let us again analyze the quadruple-well problem introduced in Example <ref>. We set τ = 1/10, subdivide the domain 𝕏 = [-1.75, 1.75] × [-1.75, 1.75] into 16 × 16 boxes of the same size, and estimate the Koopman operator from one long trajectory containing 100000 training data points using Ulam's method. A visualization of the drift term, the resulting undirected graph, and the clustering of the domain into metastable sets as well as the clustering of the graph are shown in Figure <ref>. Each well of the potential results in a cluster of highly connected vertices that is only weakly coupled to the other clusters. This is due to the metastability of the system as transitions between wells are rare events. The deepest well in the lower-left quadrant is, as expected, the most interconnected. The Koopman operator associated with the stochastic differential equation has four dominant eigenvalues, followed by a spectral gap. Applying k-means to the dominant eigenfunctions results in four metastable sets. 
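The estimation just described can be condensed into a short numerical sketch. The script below is only illustrative: the quadruple-well potential, the inverse temperature, and the integrator step are generic stand-ins rather than the exact values used for the figure; only the box discretization (16 × 16 boxes of the domain [-1.75, 1.75] × [-1.75, 1.75]), the lag time τ = 0.1, and the order of magnitude of the training data follow the example above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in quadruple-well potential with wells at (+-1, +-1); not the exact
# potential of the example, only a qualitative substitute.
def grad_V(p):
    x, y = p[..., 0], p[..., 1]
    return np.stack([4 * x * (x**2 - 1), 4 * y * (y**2 - 1)], axis=-1)

# Euler-Maruyama integration of the overdamped Langevin equation (assumed parameters).
beta, h, m = 4.0, 1e-3, 100_000
traj = np.empty((m, 2))
p = np.array([1.0, 1.0])
for k in range(m):
    p = p - grad_V(p) * h + np.sqrt(2 * h / beta) * rng.standard_normal(2)
    traj[k] = p

lag = 100                                   # lag time tau = lag * h = 0.1
x, y = traj[:-lag], traj[lag:]

# Ulam basis: 16 x 16 boxes covering [-1.75, 1.75]^2.
nb, L = 16, 1.75
def box(pts):
    idx = np.clip(((pts + L) / (2 * L) * nb).astype(int), 0, nb - 1)
    return idx[:, 0] * nb + idx[:, 1]

# Transition counts between boxes; row-normalizing yields the row-stochastic
# Ulam matrix K_n, i.e. the transition probability matrix of the induced graph.
C = np.zeros((nb * nb, nb * nb))
np.add.at(C, (box(x), box(y)), 1.0)
rows = C.sum(axis=1, keepdims=True)
K = np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)

ev = np.sort(np.abs(np.linalg.eigvals(K)))[::-1]
print(ev[:6])                               # a gap after four eigenvalues close to 1
```

A spectral gap after four eigenvalues close to 1 is the signature of the four metastable sets; applying k-means to the corresponding eigenvectors then assigns each box to one of them.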
Equivalently, applying spectral clustering to the graph yields four clusters. Once we have estimated the transition probabilities between the boxes using Ulam's method, we can view the computation of metastable sets as a graph clustering problem. This shows that detecting metastable sets associated with reversible processes is equivalent to finding clusters in undirected graphs. §.§ Time-homogeneous non-reversible processes and directed graphs The graphs induced by non-reversible processes are in general not undirected since the Koopman operator is not self-adjoint anymore. Let us illustrate the differences between the reversible case and the irreversible case with the aid of a simple example. Consider the quadruple-well potential introduced in Example <ref>, but now using the non-reversible stochastic differential equation (<ref>) with the antisymmetric matrix M = [ 0 c; -c 0 ], where c ∈ is a parameter. If c is large, then the additional non-reversible term, which causes curls in the vector field, dominates the dynamics. We use the same box discretization, lag time, and number of test points for estimating the transition probabilities as in the previous example. Since the process is now non-reversible, the eigenvalues of the Koopman operator 𝒦^τ are complex-valued. Applying k-means to the associated eigenfunctions (using, e.g., only the real parts), does in this case not result in the expected clustering into four metastable sets. Using the eigenvalues and eigenfunctions of the forward–backward operator ℱ instead allows us to detect coherent sets. The numerical results are shown in Figure <ref>. The transition probabilities between clusters are now higher and there are more long-range connections due to the faster dynamics, but the graph still comprises four clusters corresponding to the four wells of the potential. The example illustrates that coherent sets can be regarded as a natural generalization of clusters to directed graphs. The only difference here is that we use a different transfer operator for clustering. §.§ Time-inhomogeneous processes and time-evolving graphs For the time-homogeneous systems considered above, we simply estimated the Koopman operator from one long trajectory so that the training data points are sampled from the invariant density. This is possible since the dynamics do not explicitly depend on the time t. Given a time-inhomogeneous system, on the other hand, the Koopman operator is time-dependent as well. We modify the potential introduced in Example <ref> in such a way that one of the wells vanishes as shown in Figure <ref>. That is, the resulting stochastic differential equation and the associated transfer operators are now time-dependent. Depending on the lag time, there are either four (if τ is small) or only three (if τ is large) coherent sets. We choose τ = 3, compute the forward–backward operator and its dominant eigenvalues and eigenfunctions, and then use SEBA <cit.> to extract coherent sets. Unlike k-means, SEBA computes membership functions and does not assign all boxes to one of the clusters. The eigenfunctions of the forward–backward operator associated with a time-dependent stochastic differential equation contain important information about slowly mixing sets. The corresponding graph clustering approach is now based on random walks on a time-evolving graph. In practice, however, we would not generate random walk data to detect clusters, but directly compute the associated transfer operators using the adjacency matrices. 
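As a concrete illustration, the following sketch assembles the Koopman, reweighted Perron–Frobenius, and forward–backward operators directly from a weighted adjacency matrix and clusters a small directed toy graph using the sign structure of the second dominant eigenfunction of ℱ. The graph, its weights, and the uniform initial density μ are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Directed toy graph with two planted blocks: dense within each block, weak and
# asymmetric coupling between them (sizes and weights are arbitrary choices).
n1 = n2 = 20
n = n1 + n2
W = rng.uniform(0.0, 0.1, size=(n, n))
W[:n1, :n1] += rng.uniform(0.5, 1.0, size=(n1, n1))
W[n1:, n1:] += rng.uniform(0.5, 1.0, size=(n2, n2))
np.fill_diagonal(W, 0.0)

d = W.sum(axis=1)                       # (out-)degrees
S = W / d[:, None]                      # transition probabilities p(x, y)

mu = np.full(n, 1.0 / n)                # initial density (uniform, an assumption)
nu = S.T @ mu                           # nu = P mu
K = S                                   # Koopman operator
T = np.diag(1.0 / nu) @ S.T @ np.diag(mu)   # reweighted Perron-Frobenius operator
F = K @ T                               # forward-backward operator

vals, vecs = np.linalg.eig(F)
order = np.argsort(-vals.real)
vals, vecs = vals[order].real, vecs[:, order].real

# lambda_1 = 1 belongs to the constant eigenfunction; the sign structure of the
# second dominant eigenfunction separates the two coherent sets.
print(vals[:4])
print((vecs[:, 1] > 0).astype(int))
```

For more than two clusters, k-means applied to the dominant eigenfunctions replaces the simple sign split.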
There are different ways to take the time-dependency into account: In <cit.>, we proposed computing products of transition probability matrices. This approach detects clusters that are coherent from the initial time to the final time as shown in Example <ref>, but fails to identify clusters that are only coherent for a short time and then vanish. Similarly, new clusters might form within the considered time interval or existing clusters might merge or split. Often several of these events will occur simultaneously so that it is difficult to detect and track such changes. In order to be able to handle these more complicated scenarios, we have to extend existing spectral clustering methods. A popular approach for clustering time-evolving networks is to “flatten” the graph, i.e., to create a larger static graph by connecting the different time layers in a certain way. This is illustrated in Figure <ref>(l), where each diagonal block represents connections within a time layer and the off-diagonal blocks connections between different time layers. The eigenvalues and eigenvectors of these so-called supra-Laplacians <cit.> depend strongly on the coupling between time layers. Detecting community structures in time-evolving graphs is thus a challenging problem and dynamical systems theory may prove beneficial in developing spectral clustering methods and also suitable benchmark problems and metrics to measure the efficacy of different methods. § CONCLUSION The main goal of this perspective article was to provide an overview of transfer operators as well as their properties and manifold applications. We have shown that it is possible to apply dynamical systems theory to graphs, but also that graph theory can be used to gain insights into characteristic properties of dynamical systems, in the hope that this will inspire further research in this interdisciplinary area. There are many interesting and challenging open problems: Since the behavior of time-evolving graphs is much more complicated—clusters can, for instance, merge and split or disappear and reappear—, a rigorous mathematical definition of clusters or community structures along with efficient and robust clustering algorithms and meaningful metrics for comparing the results are essential. The supra-Laplacian seems to be a promising approach. However, depending on the number of time layers, this can result in extremely large graphs, which may be prohibitively expensive to analyze. Furthermore, we then need to be able to distinguish between different types of clusters since some eigenvectors simply cluster the time layers of the graph into different groups but do not contain any information about individual vertices. Extending the approaches presented above to adaptive (co-evolving) systems and networks introduces a new set of challenges. These systems are characterized by feedback loops where changes in one component affect others and the overall behavior of the system, leading to nonlinear dynamics. These effects are often seen in real-world applications such as biological and socio-economic systems. Some examples include temporal changes of the energy landscape that are driven by environmental influences or network structure that is evolving according to the status of nodes (sick/healthy). Understanding interactions within feedback loops is essential for modeling emergent phenomena, such as cluster appearance. 
Questions on how clusters evolve in time and if the notion of a cluster needs to be extended (e.g., to structural network clusters and status network clusters), will be a topic of future research. However, this is not easy due to the complexity and dynamic nature of these systems that are often characterized by multiple timescales, leading to complex dynamics. Another open question is whether it is possible to extend transfer operators and the resulting clustering methods to hypergraphs, simplicial complexes, or graphons. Furthermore, given a networked dynamical system where each vertex represents a deterministic or stochastic process, the underlying graph structure will have an impact on spectral properties of associated transfer operators. Local or global symmetries, for instance, must be reflected in the eigenfunctions. Understanding these relationships might help us develop more efficient and accurate numerical methods for analyzing such interconnected systems. We also typically assume that we can observe the full state of coupled dynamical system, but this might be infeasible in practice. The question then is if we can infer global information about the system or its connectivity structure using only local or partial observations. § ACKNOWLEDGMENTS We would like to thank Ginestra Bianconi for the invitation to share our perspective on the interplay between dynamical systems and complex networks. unsrturl § PROPERTIES OF TRANSFER OPERATORS The following properties can be used to show that the spectra of the forward–backward and backward–forward operators are real-valued. It holds that 𝒯 uf_ν = u𝒦 f_μ. We have 𝒯^τ uf_ν = _𝕏1/ν(x)_𝕏 p_τ(y, x) μ(y) u(y) d y f(x) ν(x) dx = _𝕏 u(y) _𝕏 p_τ(y, x) f(x) dx μ(y) d y = u𝒦^τ f_μ. For the graph case, this can be shown in an analogous fashion, see <cit.>. It follows that ℱ is self-adjoint w.r.t. the μ-weighted inner product and ℬ w.r.t. the ν-weighted inner product. Furthermore, ℱ and ℬ are positive semi-definite. Using the definition of ℱ, we obtain ℱ u_1u_2_μ = 𝒦𝒯 u_1u_2_μ = 𝒯 u_1𝒯 u_2_ν = u_1𝒦𝒯 u_2_μ = u_1ℱ u_2_μ. In particular, ℱ uu_μ = 𝒯 u_ν^2 ≥ 0. The results for the backward–forward operator follow in the same way. The spectral radii of ℱ and ℬ are 1. We show only the proof for ℱ^τ. Let 1 denote the function that is one everywhere, then 𝒦^τ1 = 1 since p_τ(x, ·) is a probability density. Similarly, 𝒯^τ1 = 1/ν𝒫^τμ = 1 by construction. It was shown in <cit.> that then 𝒯^τ = 𝒦^τ≤ 1 and ℱ^τ = 𝒦^τ𝒯^τ≤𝒦^τ𝒯^τ≤ 1, but ℱ^τ1 = 𝒦^τ𝒯^τ1 = 1 so that ℱ^τ = 1. Thus, also the spectral radius is 1. The proof for the graph case can be found in <cit.>.
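For the graph case, these statements are easy to verify numerically. The following sketch builds the operators for a random directed graph with a non-uniform initial density μ (all quantities are synthetic) and checks the duality relation, the self-adjointness of ℱ with respect to ⟨·, ·⟩_μ, and the location of its spectrum.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random weighted directed graph and a generic (non-uniform) initial density mu.
n = 30
W = rng.uniform(0.0, 1.0, size=(n, n))
np.fill_diagonal(W, 0.0)
S = W / W.sum(axis=1, keepdims=True)        # p(x, y)

mu = rng.uniform(0.5, 1.5, size=n)
mu /= mu.sum()
nu = S.T @ mu                               # nu = P mu

K = S
T = np.diag(1.0 / nu) @ S.T @ np.diag(mu)
F = K @ T                                   # forward-backward operator

# Lemma: <T u, f>_nu = <u, K f>_mu for arbitrary functions u, f.
u, f = rng.standard_normal(n), rng.standard_normal(n)
assert np.isclose(np.sum((T @ u) * f * nu), np.sum(u * (K @ f) * mu))

# Corollary: F is self-adjoint with respect to <., .>_mu, i.e. D_mu F is symmetric.
D_mu = np.diag(mu)
assert np.allclose(D_mu @ F, (D_mu @ F).T)

# Lemma: the spectrum of F is real, contained in [0, 1], and F 1 = 1.
lam = np.linalg.eigvals(F)
assert np.allclose(lam.imag, 0.0, atol=1e-8)
assert lam.real.min() > -1e-8 and lam.real.max() < 1.0 + 1e-8
assert np.allclose(F @ np.ones(n), np.ones(n))
print("spectral properties confirmed numerically")
```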
http://arxiv.org/abs/2405.09035v1
20240515020434
Demonstrating a universal logical gate set on a superconducting quantum processor
[ "Jiaxuan Zhang", "Zhao-Yun Chen", "Yun-Jie Wang", "Bin-Han Lu", "Hai-Feng Zhang", "Jia-Ning Li", "Peng Duan", "Yu-Chun Wu", "Guo-Ping Guo" ]
quant-ph
[ "quant-ph" ]
Key Laboratory of Quantum Information, Chinese Academy of Sciences, School of Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China CAS Center For Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui, 230088, P. R. China Institute of the Advanced Technology, University of Science and Technology of China, Hefei, Anhui, 230088, P. R. China Key Laboratory of Quantum Information, Chinese Academy of Sciences, School of Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China CAS Center For Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China Key Laboratory of Quantum Information, Chinese Academy of Sciences, School of Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China CAS Center For Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China Key Laboratory of Quantum Information, Chinese Academy of Sciences, School of Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China CAS Center For Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China pengduan@ustc.edu.cn Key Laboratory of Quantum Information, Chinese Academy of Sciences, School of Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China CAS Center For Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China wuyuchun@ustc.edu.cn Key Laboratory of Quantum Information, Chinese Academy of Sciences, School of Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China CAS Center For Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui, 230088, P. R. China gpguo@ustc.edu.cn Key Laboratory of Quantum Information, Chinese Academy of Sciences, School of Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China CAS Center For Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui, 230088, P. R. China Origin Quantum Computing Hefei, Anhui 230026, P. R. China 03.65.Ud, 03.67.Mn, 42.50.Dv, 42.50.Xa Fault-tolerant quantum computing (FTQC) is essential for achieving large-scale practical quantum computation. Implementing arbitrary FTQC requires the execution of a universal gate set on logical qubits, which is highly challenging. Particularly, in the superconducting system, two-qubit gates on surface code logical qubits have not been realized. Here, we experimentally implement logical CNOT gate as well as arbitrary single-qubit rotation gates on distance-2 surface codes using the superconducting quantum processor Wukong, thereby demonstrating a universal logical gate set. 
In the experiment, we design encoding circuits to prepare the required logical states, where the fidelities of the fault-tolerantly prepared logical states surpass those of the physical states. Furthermore, we demonstrate the transversal CNOT gate between two logical qubits and fault-tolerantly prepare four logical Bell states, all with fidelities exceeding those of the Bell states on the physical qubits. Using the logical CNOT gate and an ancilla logical state, arbitrary single-qubit rotation gate is implemented through gate teleportation. All logical gates are characterized on a complete state set and their fidelities are evaluated by logical Pauli transfer matrices. Implementation of the universal logical gate set and entangled logical states beyond physical fidelity marks a significant step towards FTQC on superconducting quantum processors. Demonstrating a universal logical gate set on a superconducting quantum processor Guo-Ping Guo May 20, 2024 ================================================================================= § INTRODUCTION Quantum computing holds the promise to accelerate classical computing in various applications such as large number factorization <cit.>, quantum simulation <cit.>, and machine learning <cit.>. However, physical qubits are typically very fragile and are easily disturbed by environmental noise. To address the noise issues in large-scale quantum computing, quantum error correction techniques have been proposed, which introduce redundant information and encode quantum states onto logical qubits to ensure fault tolerance <cit.>. In recent years, multiple experiments across various quantum computing platforms have demonstrated the memory of quantum information on logical qubits. These experiments are based on hardware systems encompassing superconducting <cit.>, ion trap <cit.>, neutral atom <cit.>, and other systems <cit.>. Particularly in experiments using bosonic codes, it has been demonstrated that the quality of logical qubits can exceed the so-called break-even point <cit.>, validating the effectiveness of quantum error correction techniques in suppressing quantum noise. Furthermore, to achieve fault-tolerant quantum computing (FTQC), a set of logical gates needs to be implemented. The simplest approach to implement logical gates is transversally, where all physical qubits have interacted with at most one physical qubit from each logical block, therefore naturally ensuring fault-tolerance. However, a well-known theorem states that no quantum code can simultaneously promise a transversal and universal logical gate set <cit.>. For instance, in the surface code, the CNOT gate is transversal. While some single-qubit rotation gates, such as the S gate and T gate, typically need to be implemented indirectly using gate teleportation circuits with ancilla logical states <cit.>. Currently, more and more experimental works are focusing on demonstrations of logical gates of various quantum error correction codes <cit.>. For instance, in neutral atom systems, demonstrations of the CNOT, CZ, and CCZ gates have been achieved on the [[8,3,2]] color code <cit.>. In ion trap systems, the H, S, T, and CNOT gates have been demonstrated on the Steane code <cit.>, forming a universal gate set. While in solid-state systems represented by superconducting system, experimental demonstrations of logical gates are still quite limited. 
In these systems, the surface code is the most attractive encoding scheme due to its theoretically high threshold and practically nearest-neighbor connectivity requirements <cit.>. Ref. <cit.> demonstrated a universal set of single-qubit gates on the distance-2 surface code in superconducting systems, showing the potential of using surface code logical qubits for FTQC in the superconducting quantum processor. The main limitation of their work is the lack of two-qubit logical operations, thus not constituting a complete universal gate set. Additionally, the ancilla quantum states used in gate teleportation are physical states rather than logical states, which is inconsistent with the requirements in FTQC. To the best of our knowledge, no work has yet implemented a complete universal set of logical gates in either the superconducting system or the surface code encoding. In our work, we use the distance-2 surface code (Fig. <ref>a) to implement a complete set of universal logical gates, including arbitrary single-qubit rotations around the Z or X axis and the CNOT gate, filling the gap in current literature. In the experiment, we encode two logical qubits in a 2×4 qubit region of a superconducting quantum processor (see Fig. <ref>b and Supplemental Material). The logical CNOT gate is implemented transversally, i.e., by performing four CNOT gates between the corresponding physical qubits. Additionally, single-qubit rotation gates are implemented by preparing the ancilla logical states and applying gate teleportation circuit, which consists of a logical CNOT gate and logical X or Z measurement on the ancilla qubit. The logical Pauli transfer matrices (LPTMs) of these logical gates are characterized on a complete set of states, according to which the gate fidelities are evaluated and listed in Tab. <ref>. Using fault-tolerant logical state encoding circuits and transversal CNOT gates, we also prepared four logical Bell states. It is worth noting that in the experiment, all fault-tolerantly prepared logical states, including single-qubit states and Bell states, exhibit higher fidelity than the results on corresponding physical qubits (see Tab. <ref>), marking a successful integration of logical gates based on a current quantum device. Furthermore, our work also has a pioneering significance for surface-code based FTQC on future superconducting quantum platforms. In particular, we demonstrated the implementation of a transversal CNOT gate in surface codes, which could be particularly useful. Theoretical work suggests that combining transversal CNOT gates with two-dimensional (2-D) operations has the potential to significantly reduce the space-time overhead of FTQC on surface codes  <cit.>. Typically, the transversal CNOT gate for surface codes requires a three-dimensional (3-D) stacked architecture or 2-D architecture with long-distance couplings. Although the superconducting quantum processor we used is still a 2-D architecture with neighbor connectivity, some forward-looking research focusing future of quantum computing has explored superconducting quantum chip with multi-layer architecture and long-distance couplings <cit.>. Experimental demonstrations of these prototypes suggest a promising trajectory toward the full realization in near future <cit.>, anticipating the application for FTQC based on surface codes in the future. 
§ RESULTS §.§ Logical state preparation and measurement The logical qubit of distance-2 surface code is encoded on four data qubits and is capable of detecting any single-qubit errors. Its code space is the +1 eigenspace of the following stabilizer group: 𝒮 = ⟨ X_1X_2X_3X_4, Z_1Z_2, Z_3Z_4⟩ . Then the logical Pauli operators are defined as: Z_L = Z_1Z_3, X_L = X_3X_4. Accordingly, the explicit form of the logical state can be written as: |0_L⟩ = 1/√(2)(|0000⟩+|1111⟩), |1_L⟩ = 1/√(2)(|0011⟩+|1100⟩), and |±_L⟩ = 1/√(2)(|0_L⟩±|1_L⟩). Here, we designed circuits for preparing the logical states |0_L⟩, |1_L⟩, |+_L⟩ and |-_L⟩ fault-tolerantly (see Fig. <ref>), whose fault tolerance is proven in Methods. In order to simultaneously ensure fault-tolerant preparing and implementation of transversal CNOT gates between |±_L⟩ and |0/1_L⟩ states, we adopt the qubit allocation scheme depicted in Fig. <ref>a and <ref>b. The key is that, we exploit the property that |±_L⟩ can be decomposed into product states (|±_L⟩=1/2(|00⟩±|11⟩)^⊗2), and encode |±_L⟩ on the leftmost two (q1 and q5) and the rightmost two physical qubits (q4 and q8) in the hardware. Moreover, we also provide a circuit for preparing arbitrary logical state |ψ_L⟩ in Fig. <ref>c. Generally, such a circuit for encoding arbitrary logical state is not fault-tolerant, neither does this circuit. In this way, a logical state can be encoded on a chain of four physical qubits (q1-q4) with only nearest-neighbor coupling. After preparing the logical states, logical X, Y, or Z measurements are performed to characterize these states. Their measurement results are determined by the product of the corresponding Pauli operator measurement result on each data qubits. The logical X and Z measurements are fault-tolerant and correspond to measurements in the X and Z bases on all data qubits, respectively. Post-selection is carried out based on the conditions provided by the three generators of the stabilizer group, discarding results that violate these conditions. Specifically, assuming the X or Z measurement result on the ith data qubit is m_i^x or m_i^z∈{+1,-1} the post-selection conditions are m_1^xm_2^xm_3^xm_4^x=+1, and m_1^zm_2^z=+1, m_3^zm_4^z=+1 for logical X and Z measurements, respectively. On the other hand, measurement of the logical Y operator Y_L=Z_1Y_3X_4 is not fault-tolerant. It requires Z measurements on data qubits D1 and D2, a Y measurement on D3, and an X measurement on D4. The corresponding post-selection condition is m_1^zm_2^z=+1. In this case, post-selection cannot eliminate all single-qubit error cases but can suppress some of them. Here, we conduct experimental demonstrations and characterizations on the fault-tolerantly prepared |0/1_L⟩, |±_L⟩ states, and non-fault-tolerantly prepared |0/1_L⟩ states. Through logical quantum state tomography, we constructed the density matrix ρ_L in the code space, as shown in Fig. <ref> (d-f). Furthermore, we computed the fidelity of the logical state: F_L = ⟨ψ_L | ρ_L | ψ_L ⟩, where |ψ_L⟩ is the ideal logical quantum state. The fidelities of the fault-tolerantly prepared states |0_L⟩, |1_L⟩ and |+_L⟩, |-_L⟩, as well as the non-fault-tolerantly prepared states |0_L⟩ and |1_L⟩, are 97.89%, 97.97%, 97.70%, 97.75%, 89.15%, and 88.83%, respectively. We also computed the fidelities of the |0⟩,|1⟩ and |+⟩,|-⟩ states prepared on the eight physical qubits in the experiment using physical state tomography. 
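The code-space algebra used above is small enough to be checked directly. The following sketch is a verification of the stabilizer and logical-state definitions given earlier, not part of the experimental pipeline; it builds the stabilizers, the logical operators, and the logical states as explicit 16-dimensional objects in the qubit ordering D1–D4 and confirms their stated relations, including the product-state decomposition of |±_L⟩ exploited in the qubit allocation.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron(ops):
    return reduce(np.kron, ops)

# Stabilizer generators and logical operators on data qubits D1, D2, D3, D4.
S1, S2, S3 = kron([X, X, X, X]), kron([Z, Z, I2, I2]), kron([I2, I2, Z, Z])
ZL, XL = kron([Z, I2, Z, I2]), kron([I2, I2, X, X])      # Z_L = Z1 Z3, X_L = X3 X4

def ket(bits):                       # computational basis state |D1 D2 D3 D4>
    v = np.zeros(16)
    v[int(bits, 2)] = 1.0
    return v

zero_L = (ket("0000") + ket("1111")) / np.sqrt(2)
one_L = (ket("0011") + ket("1100")) / np.sqrt(2)
plus_L = (zero_L + one_L) / np.sqrt(2)
minus_L = (zero_L - one_L) / np.sqrt(2)

# All four states lie in the +1 eigenspace of the stabilizer group.
for psi in (zero_L, one_L, plus_L, minus_L):
    for S in (S1, S2, S3):
        assert np.allclose(S @ psi, psi)

# Logical operators act as expected on the logical basis.
assert np.allclose(ZL @ zero_L, zero_L) and np.allclose(ZL @ one_L, -one_L)
assert np.allclose(XL @ plus_L, plus_L) and np.allclose(XL @ minus_L, -minus_L)

# |+-_L> factorizes into two two-qubit pairs, as used for the qubit allocation.
pair_p = np.array([1, 0, 0, 1]) / np.sqrt(2)
pair_m = np.array([1, 0, 0, -1]) / np.sqrt(2)
assert np.allclose(plus_L, np.kron(pair_p, pair_p))
assert np.allclose(minus_L, np.kron(pair_m, pair_m))
print("code space and logical operators verified")
```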
For a fair comparison, we did not use readout error mitigation techniques <cit.> during the physical state tomography. The highest values among eight physical qubits are 96.86%, 90.78% and 94.79%, 93.57% for |0⟩,|1⟩ and |+⟩,|-⟩, respectively. All these values are lower than the fidelities of the fault-tolerantly prepared logical states, demonstrating the noise-suppressing effect of the fault-tolerant preparation circuits. §.§ Logical CNOT gate and Bell states Next, our experiment demonstrates a transversal CNOT gate between two surface code logical qubits (see Fig. <ref>a and <ref>b). Initially, two logical states |ψ_L⟩ and |φ_L⟩, are prepared on two chains of the quantum processor (q1-q4 and q5-q8), where |ψ_L⟩ and |φ_L⟩ are from a complete state set {|+_L⟩,|-_L⟩,|0_L⟩,|i_L⟩}. Here |i_L⟩=(|0_L⟩+i|1_L⟩)/√(2) is the +1 eigenstate of the logical operator Y_L. This step is realized by the preparation circuit for arbitrary logical states described in the previous section. The density matrices of the initial logical states are characterized by logical state tomography. Subsequently, a transversal CNOT gate is applied to the initial logical states, and the output states are characterized using logical state tomography. Based on the expectation values of two-qubit Pauli operators of the initial and output states, we extract the LPTMs using the method presented in Ref. <cit.>. The fidelity of the logical CNOT gate, as computed from the LPTM, is found to be F_L^G=88.90%. Details concerning the LPTM and fidelity calculation are presented in Methods and Supplemental Material. Then we use the logical CNOT gate to prepare four Bell states on logical qubits, which are important entangled resources in quantum information. Following the above initialization method, the control and target logical qubits can be initialized to |±_L⟩ and |0/1_L⟩ states, respectively. Then they can be acted by a logical CNOT gate to generate a Bell state. However, under such qubit allocation, the prepared |0/1_L⟩ state is not fault-tolerant. Therefore, we adopt the qubit allocation scheme from the previous section to simultaneously fault-tolerantly prepare the |0/1_L⟩ and |±_L⟩ states (see Fig. <ref>c). This circuit can be viewed as a special planarization of a two-layer stacked architecture. In this layout, all physical CZ gates required in both the logical state preparation and the transversal CNOT gate implementation are 2-D hardware-neighbor. We reconstruct the density matrix of the logical Bell states in Fig. <ref> (d-g). The fidelities of the four logical Bell states are 79.46%, 79.50%, 79.35%, and 79.39%, respectively. Correspondingly, we prepare four physical Bell states by physical CNOT gate on qubits q6 and q7. The fidelity of the CNOT gate between q6 and q7 is the highest among all physical CNOT gates in the experiment. The fidelities of the four physical Bell states are 74.38%, 74.19%, 74.45%, and 74.17% respectively, all of which are lower than the fidelity of the fault-tolerantly prepared logical Bell states. §.§ Logical single-qubit rotation Finally, we demonstrated logical single-qubit rotations around the Z or X axis based on gate teleportation circuit (Fig. <ref>a). More specifically, these rotation operations are R_Z (θ)=e^-iθ Z_L/2, R_X (θ)=e^-iθ X_L/2, where θ is the rotation angle. The gate teleportation circuit consists of three parts. First, preparing the ancilla states |θ^z_L ⟩ = 1/√(2)(|0_L⟩ + e^iθ |1_L⟩), |θ^x_L ⟩ = cosθ/2 |0_L⟩ - isinθ/2 |1_L⟩. 
Then the logical CNOT gate is applied, and finally, ancilla state is measured in logical Z or X basis. The R_Z (θ) or R_X (θ) gate is successfully executed only when the logical Z or X measurement results in +1; otherwise, operation R_Z (2θ) or R_X (2θ) needs to be applied as a compensation. Here, we simply use the post-selection strategy, that is, only retaining the cases where the measurement result is +1. Note that the ancilla states can be viewed as the result of applying R_Z (θ) or R_X (θ) gates to |+_L⟩ or |0_L⟩, respectively, that is why we refer to this circuit as gate teleportation circuit. In the experiment, we first prepare the required ancilla logical states |θ^z_L ⟩ and |θ^x_L ⟩ with θ∈ (-π, π] on a chain of the quantum processor (q1-q4). Then these input states are measured in X_L, Y_L or Z_L basis to obtain the expectation values of the logical Pauli operators. Subsequently, we execute the circuits in Fig. <ref>b and <ref>c, demonstrating the single-qubit rotation gates around the Z or X axis on the state |ψ_L⟩=|+_L⟩ or |0_L ⟩, respectively. The expectation values of the logical Pauli operators for the input and output states are shown in Fig. <ref> (d-g). Using the expectation values ⟨ X⟩, ⟨ Y⟩, ⟨ Z⟩, we reconstructed the density matrices, thereby calculating the fidelity of each state. The average fidelities of input states |θ^z_L ⟩ and |θ^x_L ⟩ are evaluated to be 89.00% and 88.95%, respectively. Correspondingly, the average fidelities of the output states are 77.94% and 75.01%, respectively. To characterize the fidelity of the single-qubit logical gates, it is required to construct the LPTMs of these gates. Here, we test the LPTMs of R_Z(θ) and R_X(θ) with θ∈{0,π/4,π/2,π} as examples. The input states are encoded as the logical states from the set {|+_L⟩, |-_L⟩, |0_L⟩, |i_L⟩}, and the above logical gates are applied separately. We measure the expectation values of the Pauli operators for the input and output states and construct the LPTMs for these eight logical gates accordingly (see Methods and Supplemental Material). The fidelities F_L^G of these eight logical gates are estimated to be 94.39%, 90.00%, 87.40%, 93.87%, and 92.13%, 90.67%, 89.55%, 92.36%, respectively. § DISCUSSION This work experimentally demonstrates a complete universal set of logical gates on distance-2 surface code in a superconducting system. Particularly, logical Bell states with fidelity superior to physical Bell states have been fault-tolerantly prepared using the transversal CNOT gate. Based on the logical CNOT gate, the gate teleportation process is experimentally demonstrated to implement single-qubit rotation operations. These results represent significant milestones for fault-tolerant quantum computation based on the surface code in superconducting hardware. The dominant noise in the experiment is the readout noise and two-qubit gate noise, which significantly affect the implementation and characterization processes of logical gates. In addition, in the implementation of single-qubit rotation gates, the fidelity of the logical gates largely depends on the quality of the ancilla logical states in the gate teleportation circuit. In our experiment, the ancilla logical states are generated by non-fault-tolerant preparation circuits, resulting in a relatively high error rate. In a complete FTQC framework, high-fidelity ancilla logical states are typically obtained through state distillation <cit.>. A future challenge is to experimentally demonstrate these distillation protocols. 
Improvements in these aspects are expected to improve the fidelity of logical gates. On the road towards surface-code-based FTQC, another future milestone is to incorporate the repeated stabilizer measurement process into our work. If compatibility with the transversal CNOT gate is required, this process typically demands a 3-D stack structure or long-range entangling gates. This is particularly challenging in superconducting systems but not impossible. Due to the requirements of these techniques in FTQC, they are increasingly gaining attention. Some experiments have already demonstrated prototypes of these techniques <cit.>, proving that they are not beyond reach. We also notice that if confined to a 2-D structure, logical operations on surface codes also can be realized through lattice surgery <cit.>. However, as the number and scale of logical qubits expand, the overhead increases when implementing long-range logical operations using lattice surgery <cit.>. A promising approach is a hybrid method combining lattice surgery with transversal CNOT gates, as proposed in Ref. <cit.>. According to their analysis, this scheme can reduce the overhead of certain key steps in FTQC (e.g., magic state distillation) by 2 orders of magnitude. Although they originally focus on neutral atom, silicon spin or trapped ion systems, this scheme can also be realized through 3-D architectures or long-range connectivity in superconducting systems to effectively reduce the overhead of FTQC. In summary, our work is a pioneer for these promising possibilities in future works on surface-code-based FTQC. § METHODS §.§ Fault-tolerant logical state preparation Here we prove that the circuits in first two circuits of Fig. <ref>a and <ref>b are fault-tolerant, meaning that no single-qubit error at any position can lead to a logical error. For ease of discussion, we combine the H gates and CZ gates in the circuit into CNOT gates and focus on |0_L⟩ and |+_L⟩ state preparations, resulting in the circuit shown in Fig. <ref>. Note that this step does not change these fault-tolerance of the original circuits. For the |0_L⟩ state preparation circuit, we only need to consider the Pauli X errors in the circuit, as any possible logical error Z_L produced is trivial for the |0/1_L⟩ state up to a global phase. We mark the positions of all possible spreading Pauli X errors in the circuit (blue X in Fig. <ref>a). The leftmost X error spreads as X_1X_2X_3X_4, which is a stabilizer. The second and third X errors spread as X_2X_3 and X_1X_4, respectively, which anti-commute with the stabilizers Z_1Z_2 and Z_3Z_4, and will therefore be detected. This proves that no Pauli X error at any position in the circuit can spread to be X_L. Likewise, let us consider the Pauli Z errors in the |+_L⟩ state preparation circuit. The two possible spreading Pauli Z errors (yellow Z in Fig. <ref>b) spread as Z_1Z_2 and Z_3Z_4, which are the two stabilizers of this code. Thus, the fault-tolerance of these two encoding circuits have been proven. §.§ Logical Pauli transfer matrix (LPTM) The Pauli transfer matrix (PTM) describes a quantum process on the components of the density matrix represented in the basis of Pauli operators <cit.>. For a d-dimensional Hilbert space, a PTM ℛ is a linear transformation matrix from the expectation values p_i = ⟨ P_i ⟩ of the Pauli operators P_i in the input state to the expectation values p'_j in the output state: p'_j = ∑_i ℛ_ij p_i. 
In our experiment, P_i belongs to {I_L,X_L,Y_L,Z_L}^⊗ 2 and {I_L,X_L,Y_L,Z_L} for the cases d = 4 and d = 2, respectively. To construct the LPTMs of the logical quantum gates in the main text, we use input states from the complete set {|+_L⟩, |-_L⟩, |0_L⟩, |i_L⟩}^⊗ 2 (for the logical CNOT gate) or {|+_L⟩, |-_L⟩, |0_L⟩, |i_L⟩} (for the logical single-qubit gates). The density matrices of the input and output states are obtained through logical state tomography, and the expectation values p_i and p'_j are then calculated. The inverse of the expectation value matrix yields the raw result ℛ^raw. However, ℛ^raw may not satisfy the conditions of a physical channel, i.e., being completely positive and trace-preserving <cit.>. Therefore, using the techniques in Ref. <cit.>, ℛ^raw is transformed into the Choi state representation: ρ_choi = 1/d^2∑_ijℛ^raw_ij P^T_j ⊗ P_i. We then optimize ρ under the following objective function and constraints: minimize ∑_i,j| Tr(ρ P^T_j ⊗ P_i) - ℛ^raw_ij|^2, subject to ρ≥ 0, Tr(ρ) = 1, Tr_1(ρ) = 1/21, where Tr_1 is the partial trace over the left half subsystem. Using the convex optimization package cvxpy, we obtain the optimal result ρ_opt. The corresponding LPTM ℛ is ℛ_ij = Tr(ρ_opt P^T_j ⊗ P_i) and the fidelity of the logical gate is F_L^G = Tr(ℛ^†ℛ_ideal) +d/d^2 + d, where ℛ_ideal is the ideal LPTM of the logical gate. In our experiment, we constructed the LPTMs for the logical CNOT gate and eight logical single-qubit gates. The specific details of these LPTMs can be found in Supplementary Material Fig. <ref> and Fig. <ref>. §.§ Quantum state tomography Quantum state tomography <cit.> reconstructs the density matrix of an unknown quantum state by measuring some observables. In our experiment, we measure 4^n-1 Pauli operators of the logical qubits, where n is the number of logical qubits. Assuming the expectation values of these Pauli operators are p_i = ⟨ P_i ⟩, where P_i ∈{I_L,X_L,Y_L,Z_L}^⊗ n / {I_L^⊗ n}, the density matrix is reconstructed as: ρ_L,0 = ∑_i=0^4^n-1p_i P_i/2^n, with p_0 = 1 and P_0 = I_L^⊗ n. Such a density matrix ρ_L,0 may not satisfy the physicality characteristics of a quantum state. Therefore, we use maximum likelihood estimation <cit.> to construct the logical density matrix ρ_L. Specifically, the objective function to minimize is ∑_i | Tr(ρ_L P_i) - p_i |^2, subject to Tr(ρ_L) = 1, and ρ_L ≥ 0. This process is implemented also using the convex optimization package cvxpy. Likewise, we also apply state tomography to physical states for constructing the density operators of states |0⟩, |1⟩, |+⟩, |-⟩ and four Bell states, which is done for comparison with the logical state density matrices. These results are shown in Supplementary Material Fig. <ref>. § ACKNOWLEDGMENTS We thank Prof. Chang-Ling Zou and Prof. Ying Li for reviewing the manuscript and providing valuable suggestions, and thank Cheng Xue and Xi-Ning Zhuang for their assistance reviewing this manuscript. This work is supported by National Key Research and Development Program of China (Grant No. 2023YFB4502500) apsrev4-2 Supplemental Material for “Demonstrating a universal logical gate set on a superconductor quantum processor" § DEVICE The superconducting quantum processor Wukong consists of 72 transmon superconducting qubits (62 available) arranged with a square lattice topology. The basic physical gate set on this processor includes I, X, √(X), R_z(θ), and CZ, where R_z(θ) is implemented via virtual Z technique <cit.>. 
The execution time for I (or R_z(θ)), X (or √(X)) and CZ gate are 0 ns, 30 ns and 40 ns, respectively. Note that by decomposing H=R_z(π/2)√(X)R_z(π/2), the Hadamard gate can be executed within a single gate time (30 ns). In the experiment, we use a 2×4 qubit rectangular region whose positions are shown in Fig. <ref>. The single-qubit parameters and fidelities of CZ gates in region region are presented in Tab. <ref> and Tab. <ref>, respectively. All circuits in the experiment are submitted by a cloud platform and sampled over 5×10^4 times. We disabled circuit optimization and use barriers to fix the time order of the gates. In all circuits, the H-layer and CZ-layer are separated to avoid XY crosstalk <cit.>. When preparing two logical states simultaneously, we stagger the timing of one H-layer and one CZ-layer to avoid potential CZ crosstalk <cit.>. Additionally, the transversal CNOT gate is executed in two CZ layers, ensuring that CZ gates with high crosstalk do not execute in parallel. § LPTMS OF LOGICAL GATES AND PHYSICAL STATE CHARACTERIZATION § NUMERICAL SIMULATION Here, we numerically simulate the circuits in the main text by the Monte Carlo method. The depolarizing Pauli noise channels are used to simulate noise in the experiment circuits. Specifically, the depolarizing Pauli noise channels are defined as follows: ℰ_1(ρ_1) =(1-p_1)ρ_1+(p_1/3)∑_P∈{X,Y,Z}Pρ_1P, ℰ_2(ρ_2) =(1-p_2)ρ_2+(p_2/15) ×∑_P_1,P_2∈{I,X,Y,Z}, P_1⊗ P_2≠ I ⊗ IP_1⊗ P_2ρ_2P_1⊗ P_2, where ρ_1 and ρ_2 are single-qubit and two-qubit density matrices respectively. Here we set p_1 for each qubit as the single-qubit gate error from Tab. <ref>, and p_2 as the two-qubit gate infidelity from Tab. <ref>. In the simulated circuits, we apply ℰ_1 after state initialization, single-qubit gates, and idle operations, and ℰ_2 after two-qubit gates. Additionally, The measurement result flips with a probability of p_m=1- (F_00 + F_11) / 2, where F_00 and F_11 are from Tab. <ref>. We simulate the circuits in the Heisenberg representation <cit.>, randomly inserting Pauli noise with corresponding probability. Since the circuits are Clifford circuits, according to the Gottesman-Knill theorem, they can be efficiently simulated in polynomial time <cit.>. Corresponding to the results in the main text, we simulated circuits of logical state preparation, logical Bell state preparation, and logical single-qubit rotations. Each circuit was sampled at least 10^6 times. We performed the same post-selection process on the simulated and experimental data and calculate the fidelity. Fig. <ref> present the comparison of simulated results and the experimental results in the main text. Experimental and simulated data for the post-selection rates when measuring the eigenoperators of the respective logical state are also provided. Similarly, we simulated the preparation and measurement circuits for single-qubit states and Bell states of the physical qubits, as shown in Fig. <ref>. A slight difference is that the flip probability of measurement is p_m=1 - F_00 if the measured state is the |0⟩ state, otherwise it is 1 - F_11. It should be noted that in the simulation results, the fidelity of logical states still surpasses that of corresponding physical states, providing further evidence for the conclusions drawn in the main text. Next, we simulated the characterization circuits for the logical CNOT gate and logical single-qubit gates under three cases with different parameters. 
We processed the simulated data in the same way to construct LPTMs and thus obtained the fidelities of these logical gates. First, we simulated the circuit under the real experimental parameters (case 1). The dominant noise in our experiment comes from readout errors. As we used process tomography to characterize logical gates, the readout errors will significantly influence the characterization. To understand whether the current gate parameters have reached the pseudo-threshold point for logical gate implementation, we kept the gate parameters unchanged and set the measurements during the characterization part to be noiseless (case 2). The results showed that under these parameters, the fidelity of the logical CNOT gate approached that of the physical CNOT gate (97.02%). However, the fidelity of the logical single-qubit gate still fell far short of that of the physical single-qubit gate. Furthermore, as the fidelity of ancilla states can be improved through distillation, we simulated the gate teleportation circuits again with noiseless ancilla state preparation (case 3), and found that average fidelity of all logical single-qubit gates still remained lower than that of the logical CNOT gate. This implies that at the logical qubit level, the fidelity of single-qubit gates is possibly lower than that of two-qubit gates implemented transversely, showing results contrary to those at the physical qubit level. Therefore, the logical error rate under the FTQC framework is likely dominated by both single-qubit and two-qubit gates. All numerical simulation results are summarized in Fig. <ref>. These results provide a reference for future experimental work on implementing logical gates of surface codes, and for quantum algorithm design within the FTQC framework.
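The noise model described above can be made concrete with a few lines of code. The sketch below applies the depolarizing channels ℰ_1 and ℰ_2 and a readout flip to a minimal physical Bell-state circuit. It uses a density-matrix representation instead of the Monte Carlo stabilizer sampling employed for the actual simulations, placeholder error rates rather than the calibrated device values, and a symmetric readout flip in place of the F_00/F_11 asymmetry, so it is meant only to illustrate the structure of the channels.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def depolarize_1q(rho, p1, qubit):
    """Single-qubit depolarizing channel E_1 acting on a two-qubit density matrix."""
    out = (1 - p1) * rho
    for P in (X, Y, Z):
        U = np.kron(P, I2) if qubit == 0 else np.kron(I2, P)
        out += (p1 / 3) * U @ rho @ U.conj().T
    return out

def depolarize_2q(rho, p2):
    """Two-qubit depolarizing channel E_2 (15 non-identity Pauli pairs, p2/15 each)."""
    paulis = [np.eye(2, dtype=complex), X, Y, Z]
    out = (1 - p2) * rho
    for i, P1 in enumerate(paulis):
        for j, P2 in enumerate(paulis):
            if i == j == 0:
                continue
            U = np.kron(P1, P2)
            out += (p2 / 15) * U @ rho @ U.conj().T
    return out

# Illustrative error rates (placeholders, not the device values).
p1, p2, pm = 1e-3, 5e-3, 2e-2

# Noisy preparation of a physical Bell state: H on qubit 0, then CNOT(0 -> 1).
rho = np.zeros((4, 4), dtype=complex)
rho[0, 0] = 1.0
rho = np.kron(H, I2) @ rho @ np.kron(H, I2).conj().T
rho = depolarize_1q(rho, p1, qubit=0)
rho = CNOT @ rho @ CNOT.conj().T
rho = depolarize_2q(rho, p2)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print("Bell fidelity before readout:", np.real(bell.conj() @ rho @ bell))

# Readout noise: independent classical bit flips with probability pm per qubit.
probs = np.real(np.diag(rho))
flip = np.array([[1 - pm, pm], [pm, 1 - pm]])
print("Z-basis distribution after readout noise:", np.kron(flip, flip) @ probs)
```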
http://arxiv.org/abs/2405.09840v1
20240516063638
Impurity bands, line-nodes, and anomalous thermal Hall effect in Weyl superconductors
[ "Taiki Matsushita", "Naoyuki Kimura", "Takeshi Mizushima", "Ilya Vekhter", "Satoshi Fujimoto" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mtrl-sci", "cond-mat.str-el" ]
Department of Physics, Graduate School of Science, Kyoto University, Kyoto 606-8502, Japan Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan Department of Materials Engineering Science, Osaka University, Toyonaka, Osaka 560-8531, Japan Department of Materials Engineering Science, Osaka University, Toyonaka, Osaka 560-8531, Japan Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 Department of Materials Engineering Science, Osaka University, Toyonaka, Osaka 560-8531, Japan Center for Quantum Information and Quantum Biology, Osaka University, Toyonaka, Osaka 560-8531, Japan We investigate the anomalous thermal Hall effect (ATHE) in Weyl superconductors realized by the E_1u (p-wave and f-wave) chiral superconducting order for the point group D_6h. Using the quasiclassical Eilenberger theory, we analyze the influence of the impurity scattering and the line nodal excitations on the ATHE, and compare it with the intrinsic (topological) contribution. Because the transverse response is sensitive to the slope of the density of states at the Fermi surface, the extrinsic ATHE vanishes in both the weak (Born) and strong (unitarity) scattering limits. The thermal Hall conductivity (THC) is maximal at intermediate impurity strengths when there is a large slope of the density states in the impurity bands close to the Fermi energy. Under these conditions, the extrinsic ATHE dominates the intrinsic ATHE even at low temperatures. The extrinsic ATHE is sensitive to line nodal excitations, whereas the intrinsic ATHE is not. When the line nodes in the gap involve the sign change of the order parameter, the extrinsic contribution to the THC is suppressed even though the phase space for low energy excitation is large. In contrast, if the nodes are not accompanied by such a sign change, the extrinsic ATHE is significantly enhanced. Our results form a basis for the comprehensive analysis of anomalous thermal transport in Weyl superconductors. Impurity bands, line-nodes, and anomalous thermal Hall effect in Weyl superconductors Satoshi Fujimoto May 20, 2024 ====================================================================================== § INTRODUCTION Weyl superconductors (WSCs) are time-reversal symmetry broken (TRSB) superconductors whose low-energy excitations behave as Weyl quasiparticles <cit.>. They are realized when the superconducting condensate consists of Cooper pairs with a fixed orbital angular momentum and thus are described by a complex order-parameter, Δ( k) ∝ (k_x± ik_y)^ν (ν∈ℤ). On a three-dimensional Fermi surface centered on the Γ point, the gap closing points at k_x=k_y=0 become Weyl nodes, which are sources and drains of Berry flux <cit.>. As a result, the near-nodal Bogoliubov quasiparticles behave as Weyl particles. It is common to both refer to such order parameters and label the corresponding ground states as chiral <cit.>. Generically, chiral superconductors with a three-dimensional Fermi surface are good candidates for the realization of WSCs. Among the phenomena that experimentally identify WSCs are the anomalous thermal Hall effect (ATHE), a transverse thermal current driven by a temperature gradient without an applied magnetic field, and the chiral anomaly-induced phenomena, such as the torsional chiral magnetic effect and the negative thermal magnetoresistivity by textures of the order parameters <cit.>. In particular, the observation of the ATHE unambiguously identifies the chiral ground states. 
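As a minimal numerical illustration of this nodal structure (not tied to any specific material), the sketch below evaluates a chiral gap Δ(k̂) ∝ (k̂_x + i k̂_y)^ν on the Fermi sphere with unit amplitude, confirming that it closes only at the two poles and that its phase winds ν times around each node; the amplitude and the discretization are arbitrary.

```python
import numpy as np

# Chiral gap function on the Fermi sphere, Delta(k) ~ (kx + i ky)^nu (unit amplitude).
def gap(theta, phi, nu):
    kx, ky = np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi)
    return (kx + 1j * ky) ** nu

nu = 1
theta = np.linspace(0.0, np.pi, 181)
nodes = theta[np.abs(gap(theta, 0.0, nu)) < 1e-12]
print("gap nodes at polar angles:", nodes)       # -> [0, pi], the two Weyl points

# Phase winding of Delta along a small circle around the north pole equals nu.
phi = np.linspace(0.0, 2 * np.pi, 400)
phase = np.unwrap(np.angle(gap(0.05, phi, nu)))
print("phase winding around the node:", round((phase[-1] - phase[0]) / (2 * np.pi)))
```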
At the microscopic level, there are two sources for a finite thermal Hall signal in chiral superconductors: intrinsic and extrinsic. The intrinsic mechanism is due to the geometric phase from the Berry curvature and characterized by the positions of Weyl nodes in the momentum space <cit.>. The extrinsic mechanism is due to impurities and to the transfer of the angular momentum between the condensate and the quasiparticles during scattering events <cit.>, which contributes both to energy-dependent skew scattering of quasiparticles on impurities and the Andreev (inter-branch, electrons to holes and vice versa) scattering. The skew scattering directly couples to the temperature gradient for |ν|=1 and results in the ATHE signal <cit.>. The Andreev mechanism only couples to impurities if the impurity potential is non-s-wave, with scattering matrix elements coupling different angular-momentum channels, and therefore appears for finite size impurities, dominating the response for |ν|≥ 2 <cit.>. In this paper, we address two aspects of the ATHE in chiral superconductors with point-like impurities. First, we elucidate its origin and demonstrate the relation between the extrinsic ATHE and the evolution of the density of states (DOS) in the sub-gap impurity band, which arises from the broadening of the impurity resonant states <cit.>. In particular, the particle-hole anisotropy, which is necessary for transverse transport, sensitively depends on the scattering phase-shift at individual impurities <cit.>. Second, we focus on the magnitude of the thermal Hall conductivity (THC) in the situation where the winding number, ν, is not the same as the total angular momentum of the Cooper pairs, l. Such a situation often occurs when chirality coexists with additional nodes in the gap function. Our results are relevant to several candidates of WSCs. Historically, the Anderson-Brinkmann-Morel (ABM) state in ^3He was the first well-established Weyl superfluid with the chiral p-wave pairing <cit.>. In superconducting materials, the chiral ground states over three-dimensional Fermi surfaces were proposed in a number of materials including URu_2Si_2, UPt_3, U_1-xTh_xBe_13 and SrPtAs. <cit.> The chiral d-wave pairing, Δ( k)∝ k_z(k_x± ik_y), was proposed in URu_2Si_2 and explains the giant Nernst effect above the critical temperature due to preformed chiral Cooper pairs. <cit.> Uranium compounds UPt_3 and U_1-xTh_xBe_13 show spin-triplet superconductivity and multiple superconducting phases as a function of tuning parameters. <cit.> While the exact order parameter symmetry in these materials is still a subject of debate, TRSB was observed in the so-called B-phase, suggesting chiral superconducting nature <cit.>. In general, ferromagnetic superconductors show a complex “nonunitary” order parameter and support Weyl nodes on the three-dimensional Fermi surface <cit.>. Consequently, ferromagnetic UCoGe, URhGe and UGe_2 are also good candidates for the realization of WSCs <cit.>. We employ the quasiclassical method which is a hierarchical expansion in 1/(k_ Fξ_0)∼ T_ c/T_ F≪ 1, where k_ F is the Fermi momentum, ξ_0=v_ F/(2π T_ c) is the superconducting coherence length, and T_ c and T_ F are the superconducting transition and the Fermi temperature respectively. Throughout the paper, we set k_ B=ħ=1 for simplicity. In this language, the extrinsic contribution is of leading order, while the intrinsic mechanism is smaller by 1/(k_ Fξ_0). 
Hence, understanding the extrinsic contribution is essential for correctly interpreting future experiments measuring the ATHE. However, while the intrinsic ATHE has been widely investigated in the context of topological material science <cit.>, the studies of the extrinsic ATHE have been very limited. Our work thus fills this gap for Weyl superconductors. The rest of this paper is organized as follows. We introduce the model of WSCs in Sec. <ref> and present the expression of the THC in Sec. <ref>. In Sec. <ref>, we review the quasiclassical transport (Eilenberger) theory. In Sec. <ref>, the nonequilibrium Green function is calculated. The result of the low temperature analysis is given in Sec. <ref>. In Sec. <ref>, we use the result of the low temperature analysis to connect the extrinsic ATHE and the formation of the impurity band. In Sec. <ref>, the influence of line nodal excitations on the ATHE is discussed. Section <ref> is devoted to a summary and conclusion. Detailed derivations of several of the more complex equations in the main text are given in the Appendices. § MODEL In this paper, we consider the chiral superconducting order for the point group D_6h and short-range impurities approximated by a delta-function potential. Table <ref> summarizes the irreducible representations and their basis functions exhibiting the chiral superconducting order. <cit.> Among them, the skew scattering does not couple to the E_1g, E_2g and E_2u chiral superconducting orders with the δ-function type potential. <cit.> When impurities have finite size, the Andreev scattering couples to these chiral superconducting orders and contributes to the ATHE. Thus, the extrinsic mechanism is negligible in the E_1g, E_2g and E_2u chiral states when the impurity potential is short-range. The E_1u chiral superconducting order has the p-wave and f-wave pairings. As shown below, these pairing states couple to the skew scattering and cause the extrinsic ATHE. Hence, we consider the p-wave and f-wave pairings in the E_1u state on a spherical Fermi surface. The p-wave and f-wave chiral pairings in the E_1u state are expressed by, d(k̂)=Δ (k̂_x+ik̂_y) η(k̂_z)ẑ, with the normalized momentum k̂= k/k_ F. Here, ẑ represents the direction of the d-vector and k_ F is the radius of the Fermi sphere. The function η(k̂_z) has to be even, η(k̂_z)=η(-k̂_z), to satisfy the requirement d(k̂)=- d(-k̂). For the chiral p-wave and f-wave pairing states, η(k̂_z) is given by η(k̂_z)=1 for the chiral p-wave pairing and η(k̂_z)=5k̂_z^2-1 for the chiral f-wave pairing. These E_1u chiral order parameters generate two Weyl nodes at the north and south poles on the Fermi sphere and realize WSCs with ν=1. As Fig. <ref> shows, the E_1u chiral f-wave pairing involves two horizontal line-nodes at k̂_z=± 1/√(5). These chiral order parameters are relevant to candidate materials of WSCs. The chiral p-wave pairing, d(k̂)=Δ (k̂_x± ik̂_y) ẑ, is established in the ABM state of superfluid ^3He under ambient pressure <cit.>. Although we consider the D_6h symmetry, the calculated result with the chiral p-wave pairing in Eq. (<ref>) is applicable to ^3He because we do not consider the crystalline structure except for the form of the order parameter. The E_1u chiral f-wave pairing is a candidate for the order parameter of UPt_3 <cit.>. § INTRINSIC ANOMALOUS THERMAL HALL EFFECT In chiral superconductors, there are intrinsic and extrinsic mechanisms for the ATHE.
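Before turning to the intrinsic contribution, the nodal structure of the two E_1u order parameters just introduced can be checked with a short numerical sketch. The following Python snippet is illustrative only (it is not part of the original paper); it evaluates |d(k̂)| on the unit Fermi sphere and locates the Weyl points and the f-wave line nodes.

```python
import numpy as np

def eta(kz, pairing):
    """E_1u basis function: eta = 1 (chiral p-wave) or 5*kz**2 - 1 (chiral f-wave)."""
    return np.ones_like(kz) if pairing == "p" else 5.0 * kz**2 - 1.0

def gap_magnitude(kz, pairing, delta=1.0):
    """|d(k)| = Delta * |k_x + i k_y| * |eta(kz)| on the unit Fermi sphere (kz = cos(theta))."""
    return delta * np.sqrt(1.0 - kz**2) * np.abs(eta(kz, pairing))

kz = np.linspace(-1.0, 1.0, 20001)
for pairing in ("p", "f"):
    gap = gap_magnitude(kz, pairing)
    print(f"{pairing}-wave: gap at the poles =", gap[0], gap[-1])   # Weyl points at kz = +/-1
    s = np.sign(eta(kz, pairing))
    line_nodes = kz[:-1][s[:-1] * s[1:] < 0]                        # sign changes of eta
    print(f"{pairing}-wave: horizontal line nodes near kz =", np.round(line_nodes, 3))
# Expected output: no line nodes for the p-wave state; line nodes near +/-1/sqrt(5) ~ +/-0.447
# for the f-wave state, in agreement with the nodal structure described above.
```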
In this section, we present the expression of the intrinsic contribution to the THC and show that its low temperature behavior is completely determined by the distribution of Weyl nodes in the momentum space. For the model presented in Sec. <ref>, two Weyl nodes are on the k_z axis and these are separated by δ k_ W=2k_ F in the momentum space. When the WSCs are regarded as a family of two-dimensional superconductors labeled by k_z, their topological nature is revealed. Each subsystem labeled by k_z is equivalent to a two-dimensional chiral superconductor and characterized by the Chern number defined as, Ch(k_z)=∫dk_xdk_y/2π∑_E_n( k)<0ℬ^n_xy( k), where E_n( k) is the quasiparticle energy in band n, and ℬ^n_xy( k)=-2 Im⟨∂ u_n( k)/∂ k_x| ∂ u_n( k)/∂ k_y⟩, is the Berry curvature. In the case of WSCs described by Eq. (<ref>) with a positive effective mass of the normal state, the momentum-dependent Chern number becomes, <cit.> Ch(k_z)=2Θ(k_ F^2-k_z^2), where Θ(x) is the Heaviside step function. The factor 2 in Eq. (<ref>) arises from the spin degrees of freedom. The intrinsic contribution to the thermal Hall conductivity is given by the Berry curvature formula <cit.>, κ_xy^ int=-1/2T∑_n ∫d k/(2π)^3∫_E_n( k)^∞ dϵϵ^2 ℬ^n_xy( k) (-∂ f/∂ϵ). At low temperatures, T≪ |Δ|, Eq. (<ref>) reduces to, κ_xy^ int=π T/6( δ k_ W/2π). Equation (<ref>) clarifies that, at low temperatures, the intrinsic ATHE is completely determined by the distribution of Weyl nodes in the momentum space <cit.>. Note that this low temperature formula is independent of η(k̂_z) and thus the intrinsic ATHE is insensitive to additional line nodal excitations in WSCs, at least to leading order in T/T_c. To see that the intrinsic ATHE vanishes in the standard quasiclassical limit and does not appear in the Eilenberger framework <cit.>, let us scale the thermal conductivity by N(ϵ_ F)v_ F^2. We find κ_xy^ int/N(ϵ_ F)v_ F^2=π/(12 k_ Fξ_0)(T/T_c). Equation (<ref>) shows that the intrinsic contribution appears only at first order in 1/(k_ Fξ_0), whereas the standard quasiclassical theory keeps only the terms of leading (zeroth) order in 1/(k_ Fξ_0) [see Eq. (<ref>)]. Hence, the intrinsic contribution to the ATHE drops out in the quasiclassical Eilenberger formalism <cit.>, and we use the results above to compare with the extrinsic contribution to the THC. However, this hierarchy does not mean that the extrinsic contribution is always dominant. Recall that physically, the intrinsic ATHE is due to the gapless surface Majorana modes, which always give a T-linear contribution at low temperature, regardless of the existence of nodal excitations in the bulk <cit.>. When the intrinsic contribution is derived from the bulk Hamiltonian using transport theory, the effect of the surface Majorana mode is incorporated into the THC via the correction to the Kubo formula due to the thermal magnetization current <cit.>. In contrast, whether the extrinsic ATHE exhibits T-linear behavior in the low temperature range, and, if it does, how large this contribution is, depends on the existence of nodal excitations and the location of impurity bands near the Fermi energy; see Ref. <cit.> and our analysis below. Therefore, whether the intrinsic or extrinsic contribution dominates at low T depends on the specifics of the material. § QUASICLASSICAL TRANSPORT THEORY §.§ Eilenberger equation The quasiclassical transport (Eilenberger) theory describes superconductors in the limit 1/(k_ Fξ_0)∼ T_ c/T_ F≪ 1.
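As a quick numerical check of the two low-temperature expressions quoted above (in units ħ = k_B = 1), the following sketch evaluates them for illustrative parameter values. It assumes a parabolic band with m = k_F/v_F and a spin-summed normal-state DOS, neither of which is spelled out in the text, so treat it as an assumption-laden illustration rather than a statement about the paper's conventions.

```python
import numpy as np

# Illustrative parameters in units hbar = k_B = 1; the values are not taken from the paper.
v_F = 1.0e4          # Fermi velocity
k_F = 1.0            # Fermi momentum
T_c = 1.0e-3         # superconducting transition temperature
T   = 0.1 * T_c      # temperature in the low-T regime
xi0 = v_F / (2.0 * np.pi * T_c)              # coherence length as defined in the text
N_F = k_F**2 / (np.pi**2 * v_F)              # 3D DOS (both spins), assuming a parabolic band m = k_F/v_F

dk_W = 2.0 * k_F                             # Weyl-node separation along k_z
kappa_xy_int = (np.pi * T / 6.0) * (dk_W / (2.0 * np.pi))   # quoted low-T formula for kappa_xy^int

ratio_direct = kappa_xy_int / (N_F * v_F**2)
ratio_quoted = np.pi / (12.0 * k_F * xi0) * (T / T_c)       # quoted dimensionless form
print(ratio_direct, ratio_quoted)            # the two expressions agree under the stated assumptions
print("1/(k_F xi_0) =", 1.0 / (k_F * xi0))   # the small parameter of the quasiclassical limit discussed next
```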
In this regime, the normal state DOS can be taken to be energy-independent over the range where the Gor'kov Green's function is peaked <cit.>. Integrating this Green function over the band kinetic energy, ξ_ k= k^2/2m-ϵ_ F, we define the quasiclassical Green function as, ǧ(ϵ, k_ F) = ∫ dξ_ kτ̌_z Ǧ(ϵ, k) = [ g^ R(ϵ, k_ F) g^ K(ϵ, k_ F); 0 g^ A(ϵ, k_ F) ], The quasiclassical Green function is defined at the Fermi surface and is the central object of the quasiclassical transport theory <cit.>. The superscript X=R, A, K in Eq. (<ref>) represents the retarded, advanced, and Keldysh matrix elements, respectively. τ̌_i (i=x,y,z) are the Pauli matrices in the Nambu (particle-hole) space. Throughout this paper, we denote Ǎ as a 8× 8 matrix in the Keldysh space and A as a 4× 4 Nambu matrix. If a matrix A (Ǎ) is defined in the Nambu (Keldysh) space, the corresponding matrix Ǎ (A) in the Keldysh (Nambu) space is defined as Ǎ = A⊗. The quasiclassical Green function obeys the Eilenberger equation <cit.>, [ϵτ̌_z-Δ̌-σ̌_ imp,ǧ]+i v_ F·∇ǧ=0, and is supplemented by the normalization condition, ǧ^2=-π^2. Δ̌ is the superconducting order parameter matrix. For spin-triplet superconductors with the d-vector, d( k), this order parameter matrix is given by <cit.>, Δ̌ = [ Δ 0; 0 Δ ], Δ = [ 0 i(σ· d( k_ F))σ_y; iσ_y(σ· d^*( k_ F)) 0 ]. where σ=(σ_x,σ_y,σ_z) is the vector of the Pauli matrices in the spin space. σ̌_ imp in Eq. (<ref>) represents the impurity self-energy, σ̌_ imp = [ σ_ imp^ R σ_ imp^ K; 0 σ_ imp^ A ]. As mentioned before, we consider short-range charge (scalar) impurities with the δ-function type potential, V_ imp( x)=∑_ R_ impV_ impδ( x- R_ imp), where R_ imp is an impurity site and V_ imp is the potential strength. The multiple scattering events are essential for the transverse transport and thus we compute the impurity self-energy with the self-consistent T-matrix approximation. Assuming the random distribution of impurities and taking the impurity average, we obtain the self-consistent T-matrix equation <cit.>, σ̌_ imp = n_ impť_ imp, ť_ imp = V_ imp +N(ϵ_ F)V_ imp⟨ǧ|_⟩ FSť_ imp. where n_ imp is the impurity density and N(ϵ_ F) is the DOS at the Fermi level, ϵ_ F, in the normal state. The bracket ⟨⋯|_⟩ FS represents the normalized Fermi surface average, ⟨1|_⟩ FS=1. The impurity self-energy is independent of the Fermi momentum due to the short-range nature of the impurity potential. Using the scattering rate Γ_ imp=n_ imp/π N(ϵ_ F) and the scattering phase-shift δ=-1/π N(ϵ_ F) V_ imp in the normal state to parameterize the impurity scattering, we recast the self-consistent T-matrix equation as σ̌_ imp=- [δ +⟨ǧ/π⟩_ FS]^-1Γ_ imp The limit δ→ 0 (δ→π/2) corresponds to the Born (unitarity) limit with the weak (strong) impurity potential. §.§ Response to Temperature Gradient The quasiclassical formalism can treat the response to the temperature gradient <cit.>. To involve the temperature gradient in the quasiclassical theory, we consider a local equilibrium, T=T( x), and expand the spatial gradient in Eq. (<ref>) as ∇=∇ T ∂/∂ T, [ ϵτ̌_z-Δ̌-σ̌_ imp,ǧ]+(i v_ F·∇ T) ∂/∂ Tǧ=0. The thermal current is given by the Keldysh component of the Green function, J_Q = N(ϵ_ F)∫dϵ/4π i⟨1/4 Tr[ϵ v_ Fg^ K]⟩_ FS , which we compute in the linear response theory, and then obtain the thermal conductivity tensor, J_Qi=κ^ ext_ij(-∂_j T). As shown in Sec. <ref>, we only obtain the extrinsic contribution to the THC in this framework. <cit.> Hence, we add the superscript “ext” in Eq. 
(<ref>) to represent the extrinsic (impurity-induced) contribution. Once the quasiclassical limit is taken, the anomalous velocity, the effective Lorentz force in the momentum space due to Berry curvature, and side-jump effects drop out from the equations for the integrated Green function <cit.>. As shown in Sec. <ref>, these geometric phase effects appear at first order in (k_ Fξ_0)^-1, and accounting for them requires gradient expansion beyond the standard quasiclassical theory. <cit.> Consequently, when we discuss the extrinsic ATHE with the Eilenberger equation, it originates from the skew-like scattering due to chiral Cooper pairs. When we compare the extrinsic ATHE with the intrinsic one, we refer to the low temperature formula [Eq. (<ref>)]. §.§ Quasiclassical Green function With the quasiclassical transport theory, we derive the nonequilibrium Green function, which includes a linear response to the temperature gradient. Everywhere below the equilibrium functions are denoted as x̌_ eq (x=g, Δ, σ_ imp) and their linear deviation from the equilibrium are labeled as δx̌ (x=g, Δ, σ_ imp). §.§.§ Equilibrium Green function In the absence of perturbations, the equilibrium Green function, ǧ_ eq, is obtained from Eq. (<ref>) with the impurity self-energy given in Eq. (<ref>). Because the Fermi surface average for the order parameter matrix vanishes, ⟨Δ( k_ F)|_⟩ FS=0 due to the odd in momentum gap function, the impurity self-energy is diagonal in the Nambu space. The equation is then easily solved to obtain g_ eq^ X = -πM^ X/D^ X for X=R, A, g_ eq^ K = ( g_ eq^ R-g_ eq^ A)tanh(ϵ/2T), where M^ X=ϵ̃^ Xτ_z-Δ_ eq, D^ X=√(| d( k_ F)|^2-ϵ̃^ X 2) and ϵ̃^X=ϵ-1/4 Tr(τ_zσ_ imp,eq^ X). §.§.§ Nonequilibrium Green function The nonequilibrium Green function, δǧ, describing the linear response to the temperature gradient obeys the Eilenberger equation, [ϵτ̌_z-Δ̌_ eq-σ̌_ imp, eq,δǧ] -[δΔ̌+δσ̌_ imp,ǧ_ eq] +i v_ F·∇ T ∂/∂ Tǧ_ eq=0. It is straightforward to solve this equation for the retarded and the advanced Green components. Using the normalization, {g^ X_ eq, δg^ X}=0 ( X=R, A), we obtain, δg^ X=g^ X_ eq/2π D^ X( [δσ^ X_ imp,g^ X_ eq]-(i v_ F·∇ T) ∂/∂ Tg^ X_ eq). The second term in Eq. (<ref>) only depends on the temperature variations of the gap function, which is negligible at low temperatures. It can be neglected on even more general grounds because it is traceless, and hence does not contribute to the thermal transport. Indeed, it is easy to see that Tr(g^ X_ eq∂/∂ Tg^ X_ eq)= 2 Tr[∂/∂ T (g^ X_ eq)^2]=0. For the Keldysh component, it is convenient to define the anomalous Keldysh Green function, δg^a, and the anomalous Keldysh impurity self-energy, δσ_ imp^a, δg^a = δg^ K-(δg^ R-δg^ A) tanh(ϵ/2T), δσ_ imp^a = δσ_ imp^ K-(δσ_ imp^ R-δσ_ imp^ A) tanh(ϵ/2T). The second term in Eq. (<ref>) describes the change in the spectral function, g^ R- g^ A while maintaining the distribution in equilibrium. The first term in this equation accounts for the nonequilibrium distribution function and is essential for evaluating the thermal transport. This separation of the nonequilibrium Keldysh functions allows us to solve the transport equation for the Keldysh component. 
We obtain, δg^a = δg^a_ ns+δg^a_ vc, δg^a_ ns = N^ R_ eq( g_ eq^ R-g_ eq^ A)(-i(ϵ v_ F·∇ T)/2T^2cosh^2(ϵ/2T)), δg^a_ vc = N^ R_ eq(g_ eq^ Rδσ_ imp^a- δσ_ imp^ag_ eq^ A), where we defined the retarded function N^ R_ eq=(D^ R+D^ A)(-g^ R_ eq/π)+σ_ imp,eq0^ R-σ_ imp,eq0^ A/(D^ R+D^ A)^2+( σ_ imp,eq0^ R-σ_ imp,eq0^ A)^2, with the trace of the equilibrium self-energy σ_ imp,eq0^ X= Tr(σ_ imp,eq^ X) (X=R,A). Note that δg^a_ vc involves the anomalous Keldysh impurity self-energy, which corresponds to the vertex correction in the diagrammatic calculations. We thus refer to δg^a_ ns as a non-selfconsistent contribution and δg^a_ vc as a vertex correction contribution, respectively. § THERMAL CONDUCTIVITY AT LOW TEMPERATURE As is seen above, the anomalous Keldysh Green function is proportional to the derivative of the Fermi distribution function, 1/cosh^2(ϵ/2T). This factor introduces the frequency cut-off ϵ∼ T, which becomes small at low temperatures, justifying the expansion of the Green function in ϵ <cit.>. We utilize this low temperature expansion to determine the low temperature behavior of the extrinsic ATHE in WSCs. Following the procedure outlined in Ref. <cit.>, we obtain the low temperature expansion for the thermal conductivity in WSCs as <cit.>, κ^ ext_yy/N(ϵ_ F)v_ F^2 ≃ π^2 T/6γ^2 ⟨α_0(k̂_z)|_⟩ FS +π^2 Γ_ impγ^2|Δ_ eq|^2T/3 Y⟨α_1(k̂_z)|_⟩ FS^2 + 𝒪(T^2,Γ_ imp^4), κ^ ext_xy/N(ϵ_ F)v_ F^2≃ -π^2 Γ_ impγ^2|Δ_ eq|^2T/3 X ⟨α_1(k̂_z)|_⟩ FS^2 +𝒪(T^2,Γ_ imp^4), with iγ≡ -1/4 Tr[τ_zσ_ imp,eq^ R(ϵ=0)]. In Appendix <ref>, we derive the complete expression for the thermal conductivity at the low temperature, including higher-order terms in the scattering rate, Γ_ imp. Note that the extrinsic contribution has the opposite sign of the intrinsic contribution when the impurity potential is attractive for electrons (0<δ<π/2) <cit.>. In Eqs. (<ref>)-(<ref>), the dimensionless factors X and Y are defined as, X = n_s(0) δ/2(^2 δ +n_s^2(0))^2 , Y = ^2 δ -n_s^2(0) /4(^2 δ +n_s^2(0))^2 , with n_s(ϵ)=N_s(ϵ)/N(ϵ_ F). N_s(ϵ) is the quasiparticle DOS, N_s(ϵ)=N(ϵ_ F)⟨ -1/4 Tr Im(g^ R_ eq(ϵ)/π)⟩_ FS . In Fig. <ref>, we plot the quasiparticle DOS in the E_1u chiral p-wave and f-wave states. The Fermi surface function α_n(k̂_z) (n∈ℤ), is defined as α_n(k̂_z) = k̂_⊥^2 η^n(k̂_z)/(γ^2 + |Δ_ eqk̂_⊥η(k̂_z)|^2)^3/2, with k̂_⊥^2=k̂_x^2+k̂_y^2=1-k̂_z^2. η(k̂_z) is defined in Sec. <ref> and describes the additional nodal and near-nodal structures. As shown in Fig. <ref> (a), α_0(k̂_z) is sharply peaked at the positions of the line-nodes on the Fermi surface, because the denominator in Eq. (<ref>) is small in these regions. Hence, as may be expected, low-energy quasiparticles near nodal lines enhance the longitudinal thermal transport. On the other hand, α_1(k̂_z) in Fig. <ref>(b) changes its sign across the line node due to η(k̂_z) in the numerator of Eq. (<ref>) (see also Fig. <ref>). As we show below, this sign change at line-nodes significantly affects the extrinsic ATHE. The factors X and Y stem from the vertex correction of the nonequilibrium Keldysh Green function and describe the nature of impurity bound states in superconductors <cit.>. Because only unpaired quasiparticles carry entropy, a finite residual (zero-energy) DOS is a necessary, but not sufficient, condition to obtain a finite THC [see Eq. (<ref>)]. As shown in Fig. <ref>(a), the THC vanishes both in the Born (δ→ 0) and unitarity (δ→π/2) limit, and exhibits a peak at intermediate the scattering phase-shift. 
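The cancellation produced by the sign change of α_1 can be made concrete with a short numerical sketch. The scattering rate γ used below is a fixed illustrative number rather than the self-consistent value entering the figures, so only the qualitative comparison is meaningful.

```python
import numpy as np

def alpha_n(n, kz, gamma, pairing, delta=1.0):
    """alpha_n(kz) = kperp^2 * eta^n / (gamma^2 + |Delta*kperp*eta|^2)^(3/2), as defined above."""
    kperp2 = 1.0 - kz**2
    eta = np.ones_like(kz) if pairing == "p" else 5.0 * kz**2 - 1.0
    denom = (gamma**2 + (delta * np.sqrt(kperp2) * eta) ** 2) ** 1.5
    return kperp2 * eta**n / denom

kz = np.linspace(-1.0, 1.0, 200001)          # fine grid to resolve the near-nodal peaks
gamma = 0.1                                  # illustrative zero-energy scattering rate (not self-consistent)
for pairing in ("p", "f"):
    a0 = alpha_n(0, kz, gamma, pairing).mean()            # uniform kz average = Fermi-surface average
    a1 = alpha_n(1, kz, gamma, pairing).mean()
    a1_abs = np.abs(alpha_n(1, kz, gamma, pairing)).mean()
    print(f"{pairing}-wave: <alpha_0> = {a0:7.2f}  <alpha_1> = {a1:7.2f}  <|alpha_1|> = {a1_abs:7.2f}")
# For the p-wave state alpha_1 = alpha_0 >= 0, so nothing cancels.  For the f-wave state the
# near-nodal contributions of opposite sign largely cancel in <alpha_1> (|<alpha_1>| << <|alpha_1|>),
# while they add up in <alpha_0>; this is the suppression mechanism discussed in the text.
```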
The key observation is that the thermal Hall signal is enhanced by the effective DOS anisotropy at the Fermi surface, i.e., the finite slope of the DOS as a function of energy near ϵ=0. This slope vanishes in both unitarity and Born limit as shown in Fig. <ref>, but (∂ N_s/∂ϵ)_ϵ=0≠ 0 is realized in the intermediate range of the scattering phase-shift. As discussed in Sec. <ref>, the quasiclassical method does not capture the intrinsic contribution to the THC, and hence, the THC from Eq. (<ref>) vanishes in both limits. In Sec. <ref>, we clarify the relation between the extrinsic ATHE and the emergence of the impurity band. Importantly, even at low temperatures, the extrinsic contribution is comparable to the intrinsic contribution when δ≳ 0.75π (δ≳ 0.55π) in the chiral p(f)-wave state [see Fig. <ref>(a)]. § IMPURITY BOUND STATES AND ANOMALOUS THERMAL HALL EFFECT We now demonstrate explicitly how the extrinsic ATHE is closely related to the emergence of the impurity bound states in superconductors. Recall that in unconventional superconductors with a momentum-dependent gap function, multiple Andreev scattering at impurities creates the quasiparticle bound states. This impurity band affects the quasiparticle DOS, especially near the unitarity limit, as seen Fig. <ref> <cit.>. As the phase-shift of impurity scattering deviates from the unitarity, the energy of the individual bound states moves to a finite energy. For weak scatterers, the spectral weight for the bound states is eventually absorbed in the coherence peaks around ϵ≃± |Δ_ eq|. From the T-matrix equation in equilibrium and the zero-energy DOS, we find the relationship between the extrinsic ATHE and the DOS, κ^ ext_xy/N(ϵ_ F)v_ F^2 = -π^2Γ_ impγ^2T/12⟨α_1(k̂_z)|_⟩ FS^2/⟨α_2(k̂_z)|_⟩ FS∂ n_s(0)/∂δsin^2 δ + 𝒪(T^2,Γ_ imp^4). Equation (<ref>) is derived in Appendix <ref>. Equation (<ref>) shows that the ATHE is closely related to the scattering phase-shift dependence of the residual DOS, ∂ n_s(0)/∂δ, which is characterized by the emergence of the impurity bound states. As seen in Fig. <ref>(b), the zero-energy DOS rapidly grows in the intermediate scattering phase-shift. The variation of n_s(0) with δ slows down and eventually saturates in the unitarity limit. This evolution of the zero-energy DOS qualitatively agrees with the phase-shift dependence of the ATHE in WSCs. § LINE-NODES AND ANOMALOUS THERMAL HALL EFFECT §.§ Chiral p-wave state v.s. chiral f-wave state We are now in the position to compare the full numerical evaluation results for the thermal transport for the E_1u chiral p-wave and f-wave states, showing the influence of additional line-nodes on the extrinsic ATHE. As predicted by the low temperature expansion analysis, the extrinsic ATHE is small both for weak (Born) and strong (unitarity) scatterers and reaches the largest value for the intermediate phase-shift (see Fig. <ref>). The relative magnitude between the intrinsic and extrinsic ATHE is consistent with the result of the low temperature expansion analysis [see Figs. <ref>(a) and <ref>]. Assuming that the intrinsic contribution to the THC exhibits T-linear and unaltered at higher temperatures, the intrinsic mechanism should dominate over the extrinsic contribution at higher temperatures. However, in that range, we generally expect corrections to the result in Eq. (<ref>). Notably, the maximum amplitude of the THC in the chiral f-wave state with two horizontal line-nodes is smaller than that of the chiral p-wave state. 
This is in sharp contrast with the longitudinal thermal transport, which is enhanced by the enlarged phase space available to quasiparticles near nodal lines. This result implies that the extrinsic ATHE is weakened by additional line nodes. This observation is consistent with the low temperature analysis discussed in Sec. <ref>. The function α_1(k̂_z) is proportional to η(k̂_z) [see Eqs. (<ref>) and (<ref>)] and changes sign across the additional line-nodes, as illustrated in Fig. <ref>. It is precisely this sign change that reduces the Fermi surface average of this function and suppresses the extrinsic anomalous thermal Hall response. §.§ Coexistence of the chiral p-wave and f-wave pairings Both the chiral p-wave and f-wave pairings we considered belong to the E_1u irreducible representation and hence they are naturally mixed. <cit.> We denote the chiral p (f)-wave order parameter as Δ_ p(f) and write the order parameter in the mixed pairing state as, d( k_ F)=Δ_ f(k̂_x+ik̂_y) (5k̂_z^2-1+Δ_ p/Δ_ f)ẑ. This order parameter in the mixed pairing state is written as Eq. (<ref>) with η(k̂_z)=5k̂_z^2-1+Δ_ p/Δ_ f. The nodal structure of the resulting gap depends on the ratio Δ_ p/Δ_ f. There are always Weyl nodes at the north and south poles of a spherical Fermi surface. In addition, the superconducting gap has two line-nodes when -4<Δ_ p/Δ_ f<1. These nodes merge at the equator for Δ_ p/Δ_ f=1, and, for Δ_ p/Δ_ f>1, the equatorial line-node is lifted and replaced by a gap minimum (see Fig. <ref> (b)). As seen in Fig. <ref> (b), the extrinsic ATHE is enhanced when the two horizontal line-nodes merge at the equator. When the mixed chiral state possesses a single line-node or a small gap minimum at the equator, the extrinsic ATHE becomes dominant at low temperatures. When the line-node at the equator splits into two, the extrinsic ATHE is sharply suppressed. As shown in Fig. <ref> (c), the small gap minimum also suppresses the extrinsic ATHE, but this suppression is more gradual. This behavior is also understood from the low temperature formula for the extrinsic ATHE [Eq. (<ref>)]. As shown in Fig. <ref>, when we introduce the chiral p-wave gap to the chiral f-wave state with 0<Δ_ p/Δ_ f<1, the two line-nodes get closer to each other. When the two line-nodes approach each other, the positive region of α_1(k̂_z) is enlarged, and its momentum average is enhanced. As shown in Fig. <ref>, when the two line-nodes merge at the equator, there is no sign change of the gap function and α_1(k̂_z) becomes a positive function. The absence of the sign change at line-nodes enhances the extrinsic ATHE, since now the contribution of the additional quasiparticles in the near-nodal regions (appearing in the denominator of α_1 in Eq. (<ref>)) is not reduced by the sign change in the numerator on performing the Fermi surface average. § CONCLUSION In this paper, we addressed the relative magnitude of the intrinsic and extrinsic ATHE in WSCs. The intrinsic ATHE arises from the existence of Weyl nodes and boundary modes and is determined by the structure of the order parameter and the distribution of Weyl nodes in the momentum space. The extrinsic contribution is due to the skew scattering of quasiparticles. We computed the extrinsic contribution to the THC using the quasiclassical Eilenberger formalism. The intrinsic ATHE is suppressed by the small factor 1/(k_ Fξ_0) in superconductors, and thus the extrinsic contribution often dominates the thermal Hall response.
The intrinsic contribution relies on the gapless boundary modes and exhibits T-linear behavior regardless of the existence of nodal excitation in bulk <cit.>. Thus, the intrinsic contribution may dominate the thermal Hall response at low temperatures when the impurity potential is weak. Within this formalism, we combined the low temperature analysis and the numerical calculations to specifically focus on the effects of the impurity bands and the additional nodal structure on the extrinsic ATHE. One of our findings is that the variation of the density of states near the Fermi energy is crucial in determining the transverse transport coefficients. We find that impurities with the intermediate potential strength are more effective in generating the thermal Hall signal than those in the weak (Born) or strong (unitarity) scattering limit. We confirmed this by investigating the phase-shift dependence of the DOS and the ATHE, and associated the result with the growth of the impurity band from the individual impurity resonances. We showed that the additional line nodal structure significantly reduces the extrinsic contribution to the THC when line-nodes involve the sign change of the order parameter. For such order parameters, the angular momentum of the Cooper pairs is not the same as the winding number around the nodes. Consequently, the skew scattering from different parts of the Fermi surface partially compensates. In contrast, when the line nodes are not accompanied by such a sign change, quasiparticle excitations around the gap minima significantly enhance the thermal Hall response. Our results can be applied to any chiral superconductor even without Weyl nodes. For an example, we note the chiral d-wave pairing, Δ( k)∝k̂_z (k̂_x± i k̂_y) among candidate order parameters for Sr_2RuO_4 <cit.>. The chiral d-wave order parameter does not realize Weyl nodes because this compound has quasi-two-dimensional Fermi surfaces and the Fermi surface is absent at k_x=k_y=0. However, the chiral d-wave superconducting order involves the broken time-reversal and mirror reflection symmetries, allowing the ATHE <cit.>. Because the chiral d-wave order parameter realizes a line node at k_z=0 with the sign change, the line nodal excitations suppress the extrinsic ATHE. Hence, the intrinsic contribution will be a substantial part of ATHE in Sr_2RuO_4, but the detailed balance between it and the extrinsic contribution depends on the details of the Fermi surface and other material-specific parameters. Our findings provide clear evidence that in large classes of candidate materials, there is likely to be a competition between the intrinsic and extrinsic contributions to the ATHE even at the low temperature, and careful (and potentially material-specific) analysis is needed to understand its behavior in the superconducting state. Our work lays the framework and provides the blueprint for such analysis. § ACKNOWLEDGMENTS T. Matsuhita thank Y. Masaki, M. Sato, Y. Yanase, Y. Tanaka, and J. Ieda for fruitful discussions. T. Matsushita was supported by a Japan Society for the Promotion of Science (JSPS) Fellowship for Young Scientists and by JSPS KAKENHI Grant No. JP19J20144 and JST CREST Grant No. JPMJCR19T2. This work was also supported by JST CREST Grant No. JPMJCR19T5, Japan, and JSPS KAKENHI (Grants No. JP21H01039 and No. JP22H01221). I. V. was supported in part by grant NSF PHY-1748958 to the Kavli Institute for Theoretical Physics (KITP). § DERIVATION OF EQS. 
(<REF>),  (<REF>), AND (<REF>) In this appendix, we derive the linear response of anomalous Keldysh Green function, δǧ^a, to a temperature gradient. We start from the Eilenberger equation for the non-equilibrium Keldysh Green function, δǧ^ K, (M^ Rδg^ K-δg^ KM^ A)-(σ_ imp,eq0^ R-σ_ imp,eq0^ A)δg^ K +( σ_ imp,eq^ Kδg^ A- δg^ Rσ_ imp,eq^ K)-(δσ_ imp^ Rg_ eq^ K-g_ eq^ Kδσ_ imp^ A) -( δσ_ imp^ Kg_ eq^ A- g_ eq^ Rδσ_ imp^ K)+(i v_ F·∇ T)∂/∂ Tg_ eq^ K=0. With the anomalous Keldysh Green function, δg^a, and the anomalous Keldysh impurity self-energy, δσ_ imp^a, we recast Eq. (<ref>) into, (M^ Rδg^a-δg^aM^ A)-( σ_ imp,eq0^ R-σ_ imp,eq0^ A)δg^a +(g_ eq^ Rδσ^a- δσ^ag_ eq^ A) -i(ϵ v_ F·∇ T)/2T^2cosh^2 (ϵ/2T)( g_ eq^ R-g_ eq^ A)=0. The anomalous Keldysh impurity self-energy is calculated from the T-matrix equation, δσ^a_ imp= Γ_ imp(δ+⟨g_ eq^ R/π⟩_ FS)^-1 ×⟨δg^a/π⟩_ FS(δ+⟨g_ eq^ A/π⟩_ FS)^-1. The anomalous Keldysh Green function satisfies g_ eq^ Rδg^a+δg^ag_ eq^ A=0. Using this normalization, we can solve the transport equation, (<ref>), and obtain the anomalous Keldysh Green function, δg^a=δg^a_ ns+δg^a_ vc, δg^a_ ns=N^ R_ eq( g_ eq^ R-g_ eq^ A)(-i(ϵ v_ F·∇ T)/2T^2cosh^2(ϵ/2T)), δg^a_ vc= N^ R_ eq(g_ eq^ Rδσ_ imp^a- δσ_ imp^ag_ eq^ A), where N^ R_ eq=(D^ R+D^ A)(-g^ R_ eq/π)+σ_ imp,eq0^ R-σ_ imp,eq0^ A/(D^ R+D^ A)^2+( σ_ imp,eq0^ R-σ_ imp,eq0^ A)^2. § DERIVATION OF EQS. (<REF>) AND (<REF>) At low temperature, we expand the Green function in the energy ϵ, because we expect the energy range to be cut off by the temperature, while the Green function varies on the scale of the superconducting gap. As seen in the main text, the non-equilibrium retarded and advanced functions do not contribute to thermal transport because they arise from the temperature dependence of the equilibrium gap function. We thus focus on the anomalous Keldysh Green function. To the leading order in ϵ, the anomalous Keldysh Green function, Eqs. (<ref>)-(<ref>) reduce to, δg^a_ LT= δg^a_ ns,LT+δg^a_ vc,LT, δg^a_ ns,LT= -g^ R_ eq,LT/2π D_ LT( g_ eq,LT^ R-g_ eq,LT^ A) ×(-i(ϵ v_ F·∇ T)/2T^2cosh^2(ϵ/2T)), δg^a_ vc,LT= -g^ R_ eq,LT/2π D_ LT ×(g_ eq,LT^ Rδσ_ imp,LT^a- δσ_ imp,LT^ag_ eq,LT^ A). Here the subscript ‘‘LT" denotes the zero-frequency limit, ϵ=0. In that limit, the impurity self-energy σ_ imp,eq0 in equilibrium is purely real and the retarded and advanced values are identical. g^ R(A)_ eq,LT is given by, g^ R_ eq,LT=-πiγτ_z-Δ_ eq/D_ LT, g^ A_ eq,LT=-π-iγτ_z-Δ_ eq/D_ LT, with D_ LT≡ D^ R_ LT=D^ A_ LT=√(γ^2+|Δ_ eqk̂_ F⊥η(k̂_z)|^2). The self-energy σ_ imp,LT^ R=-iγτ_z in equilibrium is calculated from the T-matrix equation, γ=Γ_ impn_s(0)/^2δ +n_s^2(0), with the zero-energy DOS, n_s(0)=⟨γ/√(|Δ_ eqk̂_ F⊥η(k̂_z)|^2+γ^2)⟩_ FS. Here, we assume a spatially uniform temperature gradient along the y-direction and Δ_ eq∈ℝ. As in the manuscript, we consider the d-vector fixed along the z axis. In this case, the spin along the z axis is conserved, which allows us to focus on each spin subspace. Hence, we perform the low temperature expansion in each spin subspace and drop the spin index. In the zero-frequency limit, the anomalous Keldysh impurity self-energy is calculated from the self-consistent T-matrix equation, δσ_ imp,LT^a=Γ_ imp/(^2 δ+n_s^2(0))^2(δ +in_s(0)τ_z) ×⟨δg^a_ ns,LT+δg^a_ vc,LT/π⟩_ FS(δ -in_s(0)τ_z). The Fermi surface average of the non-selfconsistent contribution, δg^a_ ns, LT can be straightforwardly performed. From Eq. 
(<ref>), we obtain, Γ_ imp/(^2 δ+n_s^2(0))^2(δ +in_s(0)τ_z) ×⟨δg^a_ ns,LT/π⟩_ FS(δ -in_s(0)τ_z) =(Xτ_x+Yτ_y )⟨α_1(k̂_z)|_⟩ FS( -Γ_ impγϵ v_ FΔ_ eq(-∂_y T)/T^2cosh(ϵ/2T)). From the Eq. (<ref>), we make an ansatz for the anomalous Keldysh impurity self-energy, δσ_ imp,LT^a = (X̃τ_x+Ỹτ_y )⟨α_1(k̂_z)|_⟩ FS ×( -Γ_ impγϵ v_ FΔ_ eq(-∂_y T)/T^2cosh(ϵ/2T)). With this anomalous Keldysh impurity self-energy, we transform the T-matrix equation (<ref>) into, [ 1-2Γ_ imp|Δ_ eq|^2Y⟨α_2(k̂_z)|_⟩ FS -2Γ_ imp|Δ_ eq|^2X⟨α_2(k̂_z)|_⟩ FS; 2Γ_ imp|Δ_ eq|^2X⟨α_2(k̂_z)|_⟩ FS 1-2Γ_ imp|Δ_ eq|^2Y⟨α_2(k̂_z)|_⟩ FS ][ X̃; Ỹ ] = [ X; Y ]. From the matrix equation (<ref>), we obtain the coefficient for the anomalous Keldysh impurity self-energy, X̃, Ỹ, X̃ =X/ Det, Ỹ =Y/ Det-Γ_ imp|Δ_ eq|^2⟨α_2(k̂_z)|_⟩ FS/8 Det(^2 δ+n_s^2(0))^2, where Det represents the determinant of the matrix in Eq. (<ref>), Det= 1-Γ_ imp|Δ_ eq|^2(^2 δ-n_s^2(0))/(^2 δ+n_s^2(0))^2⟨α_2(k̂_z)|_⟩ FS +Γ_ imp^2|Δ_ eq|^4/4(^2 δ+n_s^2(0))^2⟨α_2(k̂_z)|_⟩ FS^2. We now obtain the low temperature formula for thermal conductivities, κ^ ext_yy/N(ϵ_ F)v_ F^2 ≃ π^2 T/6γ^2 ⟨α_0(k̂_z)|_⟩ FS +π^2 Γ_ impγ^2|Δ_ eq|^2T/3Ỹ⟨α_1(k̂_z)|_⟩ FS^2+𝒪(T^2), κ^ ext_xy/N(ϵ_ F)v_ F^2 ≃ -π^2 Γ_ impγ^2|Δ_ eq|^2T/3X̃⟨α_1(k̂_z)|_⟩ FS^2+𝒪(T^2). In the clean system Γ_ imp≪π T_c, the low temperature formula for the thermal conductivity reduces to Eqs. (<ref>) and (<ref>), κ^ ext_yy/N(ϵ_ F)v_ F^2 ≃ π^2 T/6γ^2 ⟨α_0(k̂_z)|_⟩ FS +π^2 Γ_ impγ^2|Δ_ eq|^2T/3 Y⟨α_1(k̂_z)|_⟩ FS^2 +𝒪(T^2,Γ_ imp^4), κ^ ext_xy/N(ϵ_ F)v_ F^2 ≃ -π^2 Γ_ impγ^2|Δ_ eq|^2T/3 X ⟨α_1(k̂_z)|_⟩ FS^2 +𝒪(T^2,Γ_ imp^4). § DERIVATION OF EQ. (<REF>) We give the derivation of Eq. (<ref>) to clarify the relation between the extrinsic ATHE and the formation of the impurity band. To associate these, we consider the equilibrium T-matrix equation and the zero-energy DOS. Differentiating Eqs. (<ref>) and (<ref>) with the scattering phase-shift, we obtain ∂γ/∂ (δ)= 4(Y∂ n_s(0)/∂ (δ)-X), ∂ n_s(0)/∂ (δ)=|Δ_ eq|^2⟨α_2(k̂_z)|_⟩ FS∂γ/∂ (δ), From Eqs. (<ref>) and (<ref>), we obtain ∂ n_s(0)/∂δsin^2 δ= 4Γ_ imp|Δ_ eq|^2⟨α_1(k̂_z)|_⟩ FSX +𝒪(Γ_ imp^2). Comparing Eq. (<ref>) to Eq. (<ref>), we find κ^ ext_xy/N(ϵ_ F)v_ F^2 = -π^2Γ_ impγ^2T/12⟨α_1(k̂_z)|_⟩ FS^2/⟨α_2(k̂_z)|_⟩ FS∂ n_s(0)/∂δsin^2 δ +𝒪(T^2,Γ_ imp^4). apsrev
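As a numerical companion to the zero-energy T-matrix equations of Appendices B and C, the following sketch solves them by fixed-point iteration and traces the growth of the residual DOS with the scattering phase shift. One caveat: the trigonometric factor that was lost in typesetting is read here as cot δ, which is the standard normal-state t-matrix parametrization of a δ-function impurity; treat that reading, and all parameter values, as assumptions of this illustration.

```python
import numpy as np

def zero_energy_selfconsistency(delta_shift, gamma_imp, pairing="f", delta_gap=1.0,
                                n_kz=20001, n_iter=500, mixing=0.5):
    """Fixed-point solution of the zero-energy equations quoted above:
        n_s(0) = < gamma / sqrt(|Delta*kperp*eta(kz)|^2 + gamma^2) >_FS,
        gamma  = Gamma_imp * n_s(0) / (cot(delta)^2 + n_s(0)^2),
    where the cot(delta) factor is our (assumed) reading of the stripped symbol."""
    kz = np.linspace(-1.0, 1.0, n_kz)
    eta = np.ones_like(kz) if pairing == "p" else 5.0 * kz**2 - 1.0
    gap = np.abs(delta_gap * np.sqrt(1.0 - kz**2) * eta)
    cot2 = 1.0 / np.tan(delta_shift) ** 2
    gamma = gamma_imp
    for _ in range(n_iter):
        ns0 = np.mean(gamma / np.sqrt(gap**2 + gamma**2))
        gamma = (1.0 - mixing) * gamma + mixing * gamma_imp * ns0 / (cot2 + ns0**2)
    return gamma, ns0

Gamma_imp = 0.05                               # illustrative scattering rate in units of the gap
for d in (0.05, 0.15, 0.25, 0.35, 0.45, 0.50): # phase shift delta in units of pi
    g, n = zero_energy_selfconsistency(d * np.pi, Gamma_imp)
    print(f"delta = {d:.2f} pi:  gamma = {g:.4f},  n_s(0) = {n:.4f}")
# n_s(0) stays negligible towards the Born limit, grows rapidly at intermediate phase shifts, and
# saturates towards unitarity (delta -> pi/2); this evolution of the residual DOS with delta is
# the quantity that enters the relation between kappa_xy^ext and the impurity band derived above.
```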
http://arxiv.org/abs/2405.09115v1
20240515061939
Hybrid Meta-Solving for Practical Quantum Computing
[ "Domenik Eichhorn", "Maximilian Schweikart", "Nick Poser", "Frederik Fiand", "Benedikt Poggel", "Jeanette Miriam Lorenz" ]
quant-ph
[ "quant-ph", "cs.SE" ]
HAAP: Vision-context Hierarchical Attention Autoregressive with Adaptive Permutation for Scene Text Recognition Honghui Chen, Yuhang Qiu, Jiabao Wang, Pingping Chen, Senior Member, IEEE, and Nam Ling, Life Fellow, IEEE Honghui Chen, Jiabao Wang, Pingping Chen are with the College of Physics and Information Engineering, Fuzhou University, Fuzhou, 350108, China (e-mail: chh5840996@gmail.com; wabbb0811@163.com; ppchen.xm@gmail.com). Yuhang Qiu is with the Faculty of Engineering, Monash University, Clayton, VIC, 3800, Australia (e-mail: yuhang.qiu@monash.edu). Nam Ling is with the Department of Computer Science and Engineering, Santa Clara University, Santa Clara, California, 95053, USA (e-mail: nling@scu.edu). Manuscript received January 31, 2024. ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== specialfooter The advent of quantum algorithms has initiated a discourse on the potential for quantum speedups for optimization problems. However, several factors still hinder a practical realization of the potential benefits. These include the lack of advanced, error-free quantum hardware, the absence of accessible software stacks for seamless integration and interaction, and the lack of methods that allow us to leverage the theoretical advantages to real-world use cases. This paper works towards the creation of an accessible hybrid software stack for solving optimization problems, aiming to create a fundamental platform that can utilize quantum technologies to enhance the solving process. We introduce a novel approach that we call Hybrid Meta-Solving, which combines classical and quantum optimization techniques to create customizable and extensible hybrid solvers. We decompose mathematical problems into multiple sub-problems that can be solved by classical or quantum solvers, and propose techniques to semi-automatically build the best solver for a given problem. Implemented in our ProvideQ toolbox prototype, Meta-Solving provides interactive workflows for accessing quantum computing capabilities. Our evaluation demonstrates the applicability of Meta-Solving in industrial use cases. It shows that we can reuse state-of-the-art classical algorithms and extend them with quantum computing techniques. Our approach is designed to be at least as efficient as state-of-the-art classical techniques, while having the potential to outperform them if future advances in the quantum domain are made. § INTRODUCTION Quantum algorithms have demonstrated theoretical advantages over their classical counterparts in addressing problems such as unstructured database search <cit.>, factorization <cit.>, and testing whether a function is constant <cit.>. These theoretical advantages have prompted a discussion of potential applications of quantum technologies to the optimization domain, with the objective of retrieving practical quantum speedups and creating more efficient solvers. 
Nevertheless, we are currently in the early stages of quantum computing, where the practical quantum advantages for optimization problems have yet to be realized <cit.>. There are several factors currently preventing us from achieving quantum supremacy, a major one being the availability of scalable quantum hardware with large numbers of qubits with high connectivity and efficient error correction. However, the availability of advanced quantum hardware does not guarantee the development of superior solvers. We have yet to identify methods for translating the theoretical speedups into practical applications. To this end, we must create software stacks that facilitate the integration of quantum solutions into broader computational pipelines, where they can operate in conjunction with classical computers in an efficient and effective manner. Moreover, it is necessary to investigate how the theoretical advantages of quantum computing can be applied in actual computational pipelines, where information between classical and quantum computers must be transferred continuously. Currently, encoding information on quantum computers requires extensive transformation techniques. For instance, when creating oracles to apply Grover's algorithm <cit.>, or when transforming constrained algorithms into Quadratic Unconstrained Binary Optimization (QUBO) Problems to apply quantum approximation algorithms <cit.>. Next, quantum computing must be made more accessible to the general public. Vendors of quantum solutions require their users to utilize their frameworks in a manner that is opaque to the user, limiting their ability to adapt the framework to diverse real-world problems <cit.>. In other instances, users are provided with only basic programming kits and frameworks, such as Qiskit <cit.>, Pennylane <cit.>, or Qrisp <cit.>. These frameworks require users to possess advanced expertise and to implement the core functionality themselves. Both of these options are suboptimal, as users should not be forced to identify opportunities and implement quantum applications themselves. Rather, they should have the ability to customize quantum application pipelines to optimize their performance and meet their custom needs. Ultimately, an abstraction layer that covers both the classical and quantum parts of the computation is needed. This paper works towards the creation of an accessible hybrid software stack for solving optimization problems, aiming to create a fundamental platform that can utilize quantum technologies to enhance the solving process. We introduce a concept called Hybrid Meta-Solving, which combines the advantages of classical and quantum optimization in hybrid solution strategies to create new, powerful ways to solve well-known mathematical problems. Meta-Solving describes the decomposition of a mathematical problem into multiple sub-problems, each of which can be solved by a selection of solvers. Using expert knowledge, empirical data, and established heuristics, we can compare potential classical and quantum solvers for a subroutine and find the best solver for the given problem. This paper outlines the fundamental concepts of Meta-Solving and illustrates how these concepts can be utilized to create interactive, semi-automated workflows. We explain how users can utilize those workflows to exploit the potential of quantum computing and find efficient solutions for given algorithmic problems. A first prototype implementing the fundamentals of Meta-Solving is available in our ProvideQ toolbox <cit.>. 
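To make the encoding overhead mentioned above concrete, the following toy sketch folds a single equality constraint into a QUBO objective via a quadratic penalty. The cost values and penalty weight are made up for illustration and are not taken from the paper or from any specific framework.

```python
import itertools

# Toy example: minimize sum_i c_i * x_i subject to x1 + x2 + x3 = 1, with binary x_i.
# The constraint is folded into the objective as a penalty P * (x1 + x2 + x3 - 1)^2,
# turning the constrained problem into a QUBO that an annealer or QAOA could consume.
cost = {1: 3.0, 2: 1.0, 3: 2.0}          # linear costs c_i (illustrative)
P = 10.0                                  # penalty weight; must dominate the cost scale

def qubo_energy(x):
    objective = sum(cost[i] * x[i - 1] for i in cost)
    penalty = P * (sum(x) - 1) ** 2
    return objective + penalty

best = min(itertools.product((0, 1), repeat=3), key=qubo_energy)
print(best, qubo_energy(best))            # -> (0, 1, 0): the cheapest feasible assignment
```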
Our evaluation demonstrates that our Meta-Solving concept is applicable to realistic problems and reaches at least the same performance as classical state-of-the-art approaches. While we are not yet able to reach actual quantum speedups, we show how a fundamental platform that integrates classical and quantum techniques can be created. § BACKGROUND AND RELATED WORK This section briefly introduces the background to quantum computing and presents state-of-the-art approaches and existing work related to our Meta-Solving concept. §.§ Current-era Quantum Computing Today we are in what is known as the Noisy Intermediate-Scale Quantum (NISQ) <cit.> era. Medium-scale quantum computers with a few hundred qubits are available and can be programmed using a gate-based programming model. However, the hardware is still noisy, and it requires expensive error-mitigation measures to produce reasonable results even for very small problems <cit.>. A plethora of quantum algorithms are currently being studied on small, error-prone quantum computers or simulators, as well as through theoretical means. Algorithms such as Grover <cit.>, Shor <cit.>, and Deutsch-Jozsa <cit.> were designed even before the first quantum computers became available. These algorithms provide theoretically proven advantages, but we are currently unable to leverage them in practice due to a number of factors, including the fact that they were designed for fault-tolerant quantum computers, which are not yet available. To make quantum computing viable in the near future, NISQ-tailored quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) <cit.> and the Variational Quantum Eigensolver (VQE) <cit.> have been developed. However, it has not yet been demonstrated that the NISQ-tailored algorithms can provide actual speedups. §.§ Quantum Computing Platforms for Optimization With the constant improvement of the capacities of actual quantum hardware and the new possibilities for algorithms executed on it, various endeavours have started working on an abstraction layer that relieves the end user from deciding between the numerous options. Formally, one can integrate these options in a modular decision tree with a set of options then forming a so-called Solution Path, and recommend Solution Paths based on various metrics and characteristics of the application <cit.>. Finding and evaluating good Solution Paths for application problems like vehicle routing is hard, however, and requires extensive domain knowledge along with hardware improvements and computational tests <cit.>. Hybrid solvers are in development, e.g., by Quantagonia <cit.> or D-Wave <cit.>, though focusing mostly on annealing methods for now due to their farther maturity. A thorough description and benchmark is found in <cit.>. With the PlanQK platform <cit.>, a first hardware-agnostic platform and vision on how end users can approach solving various application cases with quantum-enhanced algorithms exists. The efforts for abstraction go beyond optimization and similarly extend to Quantum Machine Learning <cit.>. Our work focuses on the integration of quantum computing and its existing platforms into classical optimization techniques. We combine the best of both worlds with the goal of building a new platform that gives users the tools they need to take advantage of quantum computing and build efficient hybrid solvers tailored to their needs. 
§.§ Classical Optimization and Polylithic Modeling In the domain of classical mathematical optimization, specialized solvers have been developed to address particular problems with high efficiency, exemplified by the TSP solver Concorde <cit.>. Additionally, optimization solvers that cater to broader problem classes, such as Linear Programming (LP), Mixed-Integer Linear Programming (MILP), Nonlinear Programming (NLP), and Mixed-Integer Nonlinear Programming (MINLP), exhibit considerable computational power. These solvers have undergone continuous refinement resulting in remarkable speedup over several decades <cit.> and are presently employed to tackle complex, real-world challenges. However, as solvers become more efficient, users strive to build more accurate models, thereby increasing their complexity. Despite the advancements in optimization solver technology, a number of practical optimization challenges remain that are not adequately addressed by current state-of-the-art solutions. In response to this gap, methodologies that decompose a complex, "monolithic" problem into a series of simpler, more manageable subproblems have gained prominence. Termed "polylithic" modeling and solution approaches, this strategy entails the development of customized methods that incorporate multiple models and/or algorithmic components <cit.>. Here, the solution derived from one model serves as the input for another. Notable examples of such polylithic approaches include decomposition techniques (e.g. Benders <cit.> and Dantzig-Wolfe <cit.> decomposition), advanced MILP and MINLP solvers that integrate presolve strategies with the sequential resolution of subproblems (frequently employing various external sub-solvers) within a Branch and Cut framework, and hybrid methods that integrate constructive heuristics and local search improvement strategies with exact Mathematical Programming algorithms. While polylithic modeling is not inherently dependent on any specific software, algebraic modeling languages such as the General Algebraic Modeling System (GAMS), have demonstrated significant utility in facilitating the implementation of these sophisticated approaches <cit.>. In our work, we draw inspiration from well-established polylithic approaches in classical optimization and extend them with quantum computing techniques. We actively reuse openly available state-of-the-art solvers and decompositions to maximize the efficiency of our approach. Furthermore, we bundle the existing state-of-the-art into a user-friendly toolbox to enable easy reusability, and extensibility. § THE META-SOLVING CONCEPT Following, we explain the general idea and core concepts of Meta-Solving. Meta-Solving combines classical optimization principles with quantum computing algorithms to create new, powerful solvers. We facilitate the access to quantum computing solutions by introducing a concept that reuses the advantages of classical polylithic approaches and extends them with quantum algorithms and a user-friendly framework. Meta-Solving combines three core concepts: the design of Meta-Solver Strategies, the reuse and implementation of highly efficient solvers for Meta-Solver Steps, and the semi-automatic selection of customizable Solution Paths. definition definitionDefinition[section] A Meta-Solver Strategy is a decomposition of a mathematical problem into multiple sub-problems, each addressed with tailored algorithms, culminating in a comprehensive solution to the original problem. 
It can describe multiple interchangeable combinations of sub-problems and algorithms, meaning that it can define multiple different methods to solve the problem. A Meta-Solver Step is a component of a Meta-Solver Strategy, typically associated with a single sub-problem. It isolates a specific aspect of a complex mathematical problem for which a dedicated algorithm can be designed and applied to address this segment. Each step functions as a building block, contributing a partial solution that, when integrated with the others, yields a complete solution to the overarching problem. A Solution Path represents one specific method to solve a mathematical problem derived from a Meta-Solver Strategy. Each Solution Path is a combination of Meta-Solver Steps that are consecutively executed. A Meta-Solver Strategy can be visualized as a tree, where the root node describes the associated mathematical problem, and all other nodes represent the Meta-Solver Steps forming the solving process. Nodes that have no children represent the final step of a decomposition, meaning that after executing them, a result for the algorithmic problem can be constructed by recombining the results of the executed steps. A Solution Path can be visualized as a path of the tree, starting at the root node and ending in a leaf. The structure of the tree, given by the edges between the nodes, describes which Meta-Solver Steps can and must be combined to find a solution. An example of a Meta-Solver Strategy for Vehicle Routing Problems (VRP) is shown in Figure <ref>. The Meta-Solver Strategy decomposes the problem into two main steps: (1) applying a clustering technique, and (2) solving the clusters. Depending on the clustering technique used, the clusters are either a set of Traveling Salesperson Problem (TSP) or VRP instances. The decomposition shown is inspired by classical state-of-the-art techniques: VRPs are usually solved by specialized solvers, such as the Lin-Kernighan-Helsgaun (LKH-3) solver <cit.>. If a problem instance is too large to be solved in feasible time, clustering techniques are used to split the problem into several smaller sub-problems, which is faster but may reduce the quality of the solution <cit.>. We combine classical and quantum optimization by adding the ability to solve TSP instances by reformulating them as a Quadratic Unconstrained Binary Optimization (QUBO) problem <cit.>, which can then be solved with a Quantum Annealer <cit.> or QAOA <cit.>. Another option is solving the TSP instances directly with a phase estimation technique proposed by Srinivasan et al. <cit.>. Implementations of the QAOA and phase estimation techniques are available in Qrisp <cit.>; for the annealing we utilize the D-Wave platform <cit.>. All other steps of the Meta-Solver Strategy are classical. LKH-3 <cit.> is a solver applicable to TSP and VRP problems. For the VRP clustering we use the k-Means method <cit.>. LKH-3 and k-Means are well established in classical optimization. For the TSP clustering, we choose a specific 2-phase clustering approach by Laporte and Semet <cit.>, which has already been studied for hybrid TSP solving <cit.>. It consists of a creation phase and an improvement phase, and must ensure that each TSP instance can be covered by one truck. The concept of Meta-Solving is introduced to the user through the provision of Meta-Solver Strategies, exemplified by the strategy depicted in Figure <ref>.
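A minimal Python sketch of how such a strategy tree and its Solution Paths could be represented is given below. All class and step names are hypothetical (this is not the ProvideQ toolbox API), the solver callables are placeholders, and the tree only approximates the VRP strategy described above.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class MetaSolverStep:
    """One node of a Meta-Solver Strategy tree: a sub-problem plus a solver addressing it."""
    name: str
    solve: Callable                 # consumes a sub-problem, returns a partial result (placeholder)
    children: List["MetaSolverStep"] = field(default_factory=list)

@dataclass
class MetaSolverStrategy:
    problem: str                    # the root, e.g. "VRP"
    roots: List[MetaSolverStep]

    def solution_paths(self):
        """Enumerate all Solution Paths: root-to-leaf sequences of Meta-Solver Steps."""
        def walk(step, prefix):
            path = prefix + [step.name]
            if not step.children:
                yield path
            for child in step.children:
                yield from walk(child, path)
        for root in self.roots:
            yield from walk(root, [])

# Rough rendering of the VRP strategy of Figure 1 (solver callables are dummies).
tsp_cluster = MetaSolverStep("two-phase TSP clustering", solve=lambda p: p, children=[
    MetaSolverStep("LKH-3 (TSP)", solve=lambda p: p),
    MetaSolverStep("TSP as QUBO (annealing / QAOA)", solve=lambda p: p),
    MetaSolverStep("TSP phase estimation", solve=lambda p: p),
])
kmeans_cluster = MetaSolverStep("k-means VRP clustering", solve=lambda p: p, children=[
    MetaSolverStep("LKH-3 (VRP)", solve=lambda p: p),
])
direct = MetaSolverStep("LKH-3 on the full VRP", solve=lambda p: p)

strategy = MetaSolverStrategy("VRP", [tsp_cluster, kmeans_cluster, direct])
for path in strategy.solution_paths():
    print(" -> ".join(path))
```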
In addition, the user is provided with a software tool that implements the strategies and enables the user to engage with them. The user should be able to input an algorithmic problem using a standardized format and then semi-automatically select Solution Paths to solve it. The software tool can assist the user in selecting a Solution Path, for example by providing the user with expert knowledge about solvers, or by analysing the user's input and suggesting a suitable solver based on established heuristics. We show an example interaction of a user that applies Meta-Solving in Figure <ref>. Experienced users and researchers may wish to customize a Meta-Solver Strategy or heuristic, or extend a strategy by adding new steps and solvers. The following sections explain the core concepts and challenges of a Meta-Solving software tool, and how we envision its application. §.§ Designing and Implementing Meta-Solver Strategies The design and implementation of efficient Meta-Solver Strategies is a key necessity to facilitate access to hybrid solution strategies for end users. However, finding decompositions that can utilize quantum steps, and implementing the respective solvers such that they are able to compete with classical state-of-the-art techniques, is a challenging task that requires expert knowledge from various fields. To create a new Meta-Solver Strategy, an expert must first find a mathematically reasonable decomposition. A good practice here is to take inspiration from well-established classical solvers. These are usually well researched and optimized over years, and many of them are open source or free to use for academic purposes. They are often benchmarked against each other, and there is knowledge of cases where they perform exceptionally well or badly. Our goal is to extend classical optimization techniques with quantum algorithms, so we want to find steps in the classical algorithms that we can exchange with quantum algorithms. In classical optimization, there are usually many approaches to solving a problem, each with different advantages and disadvantages, and many classical approaches involve several steps where quantum computing could potentially be applied. Moreover, the possible applications of quantum computing are likely to increase with ongoing research, especially when more modular and open-source optimization frameworks become available in the future. With our Meta-Solving approach we want to actively encourage the implementation and comparison of multiple solvers and Meta-Solver Steps. We want to promote their unique advantages and enable users to benefit from them. Thus, basing Meta-Solver Strategies on the combination of several classical state-of-the-art approaches is a good starting point. The implementation of a Meta-Solver Strategy is built from two parts: 1) solvers for the Meta-Solver Steps, and 2) an orchestration unit that handles the decomposition of the problem. To implement the solvers, existing highly efficient classical solvers should be used wherever possible. In our vehicle routing example we reuse the LKH-3 <cit.> solver. Wrappers will then provide compliance with the necessary input and output formats. Although the quantum steps are not as advanced, the same concept applies to them. Currently, there are frameworks like Qiskit-Optimization <cit.> or Qrisp <cit.> that provide implementations of well-known quantum algorithms, which should be taken advantage of.
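A possible shape for such solver wrappers is sketched below. The interface, the binary path, and the command-line invocation are placeholders chosen for illustration; the real LKH-3 invocation and the quantum SDK calls would differ and are deliberately not spelled out here.

```python
import abc
import subprocess
import tempfile
from pathlib import Path

class SolverWrapper(abc.ABC):
    """Uniform interface that every Meta-Solver Step implementation could provide."""

    @abc.abstractmethod
    def solve(self, problem: str) -> str:
        """Consume a sub-problem in a standardized text format and return a solution string."""

class ExternalTSPSolver(SolverWrapper):
    """Wraps a classical command-line TSP solver; binary path and flags are placeholders."""

    def __init__(self, binary: str = "./lkh"):
        self.binary = binary

    def solve(self, problem: str) -> str:
        with tempfile.TemporaryDirectory() as tmp:
            infile = Path(tmp) / "problem.tsp"
            infile.write_text(problem)                        # e.g. a TSPLIB-formatted instance
            out = subprocess.run([self.binary, str(infile)],  # invocation details are assumed
                                 capture_output=True, text=True, check=True)
            return out.stdout                                 # parse into the common result format here

class QuantumTSPSolver(SolverWrapper):
    """Wraps a quantum routine, e.g. QAOA on a QUBO reformulation of the instance (sketch only)."""

    def solve(self, problem: str) -> str:
        qubo = self.to_qubo(problem)          # reformulation step, not shown
        raise NotImplementedError("delegate to a quantum SDK such as Qiskit or Qrisp here")

    def to_qubo(self, problem: str):
        ...                                   # placeholder for the TSP -> QUBO transformation
```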
For the orchestration unit, we have to create an interface where an algorithmic problem and additional parameters, such as the Solution Path, can be inserted. We use standardized formats to define the algorithmic problem, which increases compatibility with existing tools and solvers, allowing for a more modular setup of the pipeline. An example format for Vehicle Routing Problems is the TSPLIB format <cit.>. The orchestration unit decomposes the original problem into multiple sub-problems, each of which is solved by a solver selected in the Solution Path. To support solver-specific input formats, additional parsing steps can be inserted. The orchestration unit must interpret the solution of each solver, which provides a set of intermediate results that are then used to construct a result for the original algorithmic problem. Figure <ref> provides an overview of the orchestration unit. §.§ Generalized and Specialized Meta-Solving Strategies Algorithmic problems exhibit varying degrees of generality, ranging from general mathematical formulations such as Quadratic Unconstrained Binary Optimization (QUBOs) or Integer Linear Programming (ILP) to more specific problems like vehicle routing or the knapsack problem. Similarly, Meta-Solver Strategies can be categorized into general strategies that cover a broad spectrum of problems or specialized strategies tailored for specific problem domains. Creating a Meta-Solver Strategy necessitates considerable effort, and there might not be a readily available strategy for every problem type. Consequently, having general strategies, such as one designed for Integer Linear Programming, can effectively address a wide array of problems. However, specialized solvers can be customized and optimized for a particular problem, enabling performance enhancements tailored to the problem's unique characteristics. In such cases, highly specific heuristics may result in superior performance. For instance, consider the Vehicle Routing Problem again. It can be tackled using a dedicated VRP solver or by reformulating it into a more general problem representation like an ILP, subsequently employing an ILP solver. While the specialized VRP solver is likely to deliver better performance, the ILP solver facilitates easier extension of the problem with additional constraints beyond the scope of VRP definitions. In summary, the choice between generalized and specialized Meta-Solver Strategies involves a trade-off between versatility and performance optimization. General strategies are more broadly applicable, but may perform worse than specialized ones. Consequently, the selection of an appropriate strategy depends on the specific problem requirements and the balance between generality and performance. Generalized Meta-Solver Strategies are built similarly to specialized ones. We have shown an example of a specialized strategy in Figure <ref>, which covers VRP solving. An example of a general strategy covering ILP solving is shown in Figure <ref>. The ILP solution strategy is kept simple, consisting of a classical solution path that does not decompose the problem further, and a hybrid solution path that combines the Branch and Bound <cit.> and Simplex <cit.> techniques. For both, the application of quantum algorithms has been studied, so an implementation of a quantum branch and bound method <cit.>, combined with a hybrid Simplex <cit.> is possible and could provide speedups on very advanced quantum hardware. 
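Both the specialized VRP strategy and the generalized ILP strategy rely on the same orchestration loop described at the beginning of this subsection: decompose, dispatch to the solvers chosen in the Solution Path, and recombine. The sketch below is a hypothetical, heavily simplified version of that loop (in particular, it lets the leaf step of the Solution Path solve every sub-problem); none of the names are part of the ProvideQ toolbox.

```python
from typing import Callable, Dict, List, Sequence

class OrchestrationUnit:
    """Minimal orchestration loop: decompose the problem, dispatch the sub-problems to the
    solvers selected in the Solution Path, then recombine the intermediate results."""

    def __init__(self,
                 decompose: Callable[[str], List[str]],
                 recombine: Callable[[str, List[str]], str],
                 solvers: Dict[str, Callable[[str], str]]):
        self.decompose = decompose
        self.recombine = recombine
        self.solvers = solvers              # registry: step name -> solver callable

    def run(self, problem: str, solution_path: Sequence[str]) -> str:
        sub_problems = self.decompose(problem)
        leaf_solver = self.solvers[solution_path[-1]]   # simplification: leaf step solves the clusters
        partial_results = [leaf_solver(sub) for sub in sub_problems]
        return self.recombine(problem, partial_results)

# Hypothetical usage for the VRP strategy: cluster, solve each cluster, merge the routes.
unit = OrchestrationUnit(
    decompose=lambda vrp: ["tsp-cluster-1", "tsp-cluster-2"],    # placeholder clustering
    recombine=lambda vrp, routes: " | ".join(routes),
    solvers={"LKH-3 (TSP)": lambda tsp: f"route({tsp})"},
)
print(unit.run("vrp-instance", ["two-phase TSP clustering", "LKH-3 (TSP)"]))
```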
§.§ Input Selection and Interpretation Meta-Solver Strategies offer users the possibility to create a variety of solvers. However, selecting the most suitable Solution Path entails careful consideration of various factors. Users must weigh their priorities, such as solution quality and computational efficiency, when deciding which path to pursue. For example, obtaining optimal or near-optimal solutions often requires substantial computational resources, resulting in high costs. However, faster solutions may compromise optimality. To assist users in navigating this decision-making process, it is essential that they provide input reflecting their preferences and problem characteristics. A fundamental input parameter is the algorithmic problem itself. By analyzing the problem's syntax, semantics, and characteristics such as size and complexity, we can tailor our approach accordingly. Some characteristics, like problem size, readily inform decisions about computational resources, such as whether a problem can be encoded on a quantum computer or whether clustering techniques may be advantageous. However, other problem-specific criteria may require a more complex analysis. As an example, consider Linear Programs (LPs), which can be solved in many different ways, for instance, by using a Simplex <cit.> or Interior-Point <cit.> algorithm. The choice between these algorithms depends on factors such as problem size and density. For instance, the mentioned Interior-Point algorithm is typically better suited for large, sparsely populated problem instances, whereas the Simplex algorithm works better on small, densely populated instances. In this case, the superior algorithm can be inferred from the population characteristics of the LP. Beyond problem-specific traits, user preferences play an important role. Providing users with customizable parameters, such as sliders to indicate preferences for speed versus optimality, or information about available hardware resources, enables even more tailored solution recommendations. However, it is crucial to balance the detail of our analysis with computational efficiency. The analysis should be conducted efficiently, ensuring that the time spent evaluating Solution Paths does not overshadow the time required to solve the problem itself. In summary, facilitating user input and preferences and analyzing problem characteristics are key components of guiding users towards finding effective Solution Paths. By providing users with intuitive interfaces to submit problems and express solution expectations, coupled with efficient analysis techniques, we build a basis that allows us to offer tailored recommendations that optimize the Meta-Solving experience. §.§ Suggesting Solution Paths Our aim is to empower users to semi-automatically navigate Solution Paths within Meta-Solver Strategies by offering tailored suggestions. However, the task of selecting the most suitable solver for a given problem is inherently challenging, even within classical optimization domains. The addition of quantum algorithms further complicates this decision-making process. Consequently, we opt to suggest Solution Paths instead of making fully automated selections. Given the difficulty of a semi-automated solver selection, we propose to employ diverse heuristics and strategies to facilitate user-guided selection.
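As a simple illustration of such a heuristic, the LP example above can be encoded as a rule that inspects the size and density of the constraint matrix and recommends a solver accordingly; the thresholds are placeholders that would have to be calibrated on benchmark instances.

```python
import numpy as np


def recommend_lp_solver(constraint_matrix: np.ndarray,
                        size_threshold: int = 10_000,
                        density_threshold: float = 0.05) -> str:
    """Toy rule: large and sparse LPs -> Interior-Point, small and dense LPs -> Simplex."""
    n_rows, n_cols = constraint_matrix.shape
    density = np.count_nonzero(constraint_matrix) / constraint_matrix.size
    if n_rows * n_cols > size_threshold and density < density_threshold:
        return "interior-point"
    return "simplex"


# A small, dense random instance is routed to the Simplex path.
print(recommend_lp_solver(np.random.rand(50, 40)))   # -> "simplex"
```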
Central to this approach is the input provided by the user, as discussed in sub-section <ref>. Our proposed method involves a step-wise suggestion and selection technique, wherein users are presented with Meta-Solver Strategies and proceed to select Solution Paths incrementally. At each step, suggestions from the software toolbox guide the user's decision-making process. Depending on the confidence level of our suggestions, we may present them in various formats. Highly certain suggestions may be presented by clearly highlighting the next recommended step, while less definitive suggestions could be accompanied by a list of pros and cons that the user has to interpret on their own. A significant challenge in suggesting Solution Paths lies in the uncertainty inherent to such recommendations. While leveraging heuristics can help in providing informed suggestions, their availability is not guaranteed. Consequently, we explore ways to develop new heuristics or to employ alternative techniques that assist users in the decision-making process. One approach is to embed expert knowledge within Meta-Solver Strategies. Each implemented solver would be annotated with detailed information, allowing users to gain insight into the particularities of each solver. Access to such detailed knowledge empowers users to make informed decisions, especially in the absence of readily applicable heuristics. A disadvantage of this approach is that detailed insights are usually not available for closed-source solvers. Additionally, we consider the application of machine learning techniques to derive new heuristics for implemented solvers. This involves constructing benchmarking sets that include realistic real-world problem instances and artificially generated challenging instances. By evaluating all solvers and potential Solution Paths against these benchmarks, we can train machine learning models which can then be applied to offer recommendations. Similar to the analysis techniques proposed earlier, we must be careful that the computational cost of machine learning techniques does not outweigh the benefits they provide. In conclusion, our approach to facilitating semi-automatic selection of Solution Paths in Meta-Solver Strategies involves a combination of user input, heuristic guidance, expert knowledge embedding, and machine learning techniques. By leveraging these strategies, we endeavor to empower users to navigate Solution Paths effectively within complex optimization domains. §.§ Parallel Execution In certain scenarios, suggesting Solution Paths within Meta-Solver Strategies may not be feasible. This limitation arises when no heuristic is available or when existing heuristics fail to determine a clear preference. To address this challenge, various approaches can be employed. While randomly selecting a Solution Path is one option, it carries the risk of yielding suboptimal results. We propose a more effective strategy that involves executing multiple Solution Paths simultaneously and subsequently comparing their outcomes. This concept requires Meta-Solving software tools to support parallel execution of multiple Meta-Solver Steps and to provide users with a visualization of the resulting solutions so that they can be compared. Now, a new challenge emerges: determining the best solution among those presented. Although implementing assessment techniques to grade solution quality is possible, it remains a challenge to determine the result that best fits the user's needs in the general case.
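A minimal sketch of this parallel execution is given below: every selected Solution Path is launched several times (to account for non-deterministic solvers) and all candidate results are collected for the user to compare, rather than a winner being picked automatically. The callables in `paths` and the shape of their return value are assumptions about how a concrete toolbox might expose this functionality.

```python
from concurrent.futures import ThreadPoolExecutor


def run_paths_in_parallel(problem: str, paths: dict, trials: int = 5) -> dict:
    """Run every Solution Path `trials` times and collect all candidate solutions.

    `paths` maps a path name to a callable that takes the problem instance and
    returns an (objective_value, solution) pair; nothing is discarded here.
    """
    results = {name: [] for name in paths}
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(run_path, problem): name
                   for name, run_path in paths.items()
                   for _ in range(trials)}
        for future, name in futures.items():
            results[name].append(future.result())
    return results
```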
Thus, we again refrain from automating this step and instead enable users to make their own informed decisions. We assist users in this process by providing comprehensive visualizations and combining them with assessment metrics. Furthermore, we have to consider that some solvers, particularly quantum ones, are non-deterministic. Consequently, they may produce different solutions when executed multiple times. To accommodate this variability, our approach incorporates the option to apply multiple trials for both quantum and classical solvers. It is important to state that the parallel execution and comparison technique entails a notable drawback: it significantly increases computational effort. Users must therefore make informed decisions and carefully consider scenarios where parallel execution and result comparison are warranted. The combination of parallel execution and heuristic guidance offers a promising framework for identifying optimal Solution Paths. This approach not only facilitates the creation of highly efficient solvers but also enables the utilization of quantum algorithms, thereby enhancing the Meta-Solver Strategies' effectiveness. §.§ Backend Selection Upon selecting a Solution Path within Meta-Solver Strategies, the subsequent execution of included Meta-Solver Steps necessitates compatibility with appropriate backends. For classical algorithms, these backends typically encompass GPU or CPU computing clusters, whereas for quantum algorithms, options include simulators, quantum annealers, or universal quantum computers of various hardware technologies. Notably, while quantum simulators serve well for testing and learning purposes, they fall short in achieving actual speedups. The choice of backend holds significant implications for computation efficiency and solution quality. To assist users in this process, a semi-automated backend selection should be supported within Meta-Solving. An application of such an approach first requires users to provide information about the backends they have access to. Accessing classical computation backends is generally straightforward, as individuals can book computation time, and institutions such as companies or universities typically possess readily available resources. However, access to quantum computers remains more limited. For classical algorithms, determining the essential computational resources for a solver (e.g., memory, GPU/CPU power, cores) is also straightforward. This information is typically gained when executing the solver on sample sets. By annotating this information for each implemented classical solver, we can propose a fitting classical computation cluster accordingly. In contrast, quantum backend selection is notably more intricate due to the distinctive characteristics of quantum hardware. Factors such as varying technologies for qubit representation, programming methods, unique qubit mappings, error rates, and optimization requirements for specific low-level hardware pose significant challenges. Exact error rates even vary over time due to calibration uncertainty. Consequently, selecting a well-suited quantum backend assumes paramount importance. This complexity is addressed by sophisticated quantum backend selection frameworks such as the MQT Predictor <cit.> or the NISQ analyzer <cit.>, which can be integrated into the orchestration units of a Meta-Solving software framework.
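The resource-annotation idea for classical backends can be illustrated with a small matching routine: each solver advertises its requirements and each backend the user has access to advertises its capabilities. The attribute names are assumptions; for quantum backends, a real system would delegate to dedicated selection frameworks such as those cited above.

```python
from dataclasses import dataclass


@dataclass
class Backend:
    name: str
    kind: str              # "cpu", "gpu", or "quantum"
    memory_gb: float = 0.0
    qubits: int = 0        # only meaningful for quantum backends


@dataclass
class Requirements:
    kind: str
    min_memory_gb: float = 0.0
    min_qubits: int = 0


def compatible_backends(req: Requirements, backends: list[Backend]) -> list[Backend]:
    """Return the accessible backends that satisfy a solver's annotated requirements."""
    return [b for b in backends
            if b.kind == req.kind
            and b.memory_gb >= req.min_memory_gb
            and b.qubits >= req.min_qubits]
```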
By leveraging prior work in this field, we can develop robust algorithms and frameworks to guide users in selecting appropriate quantum backends, thereby enhancing the efficacy of Meta-Solver Strategies in quantum and classical computing domains. §.§ Composing Meta-Solver Strategies Algorithmic problems and Meta-Solver Strategies operate at different levels of abstraction and generality. For instance, sorting problems represent low-level, general challenges, whereas Vehicle Routing embodies a specialized, high-level problem. Low-level algorithms often solve sub-problems of high-level problems, prompting the reuse of low-level Meta-Solver Strategies within higher-level ones. This technique, which we call Composing Meta-Solver Strategies, facilitates the integration of existing strategies as Meta-Solver Steps in more complex strategies. As an example, consider a Meta-Solver Strategy designed to solve QUBO problems. This strategy can be repurposed to tackle vehicle routing problems and integer linear programs. For instance, in the VRP Meta-Solver Strategy from Figure <ref>, instead of defining a customized QUBO solving step, we can simply call the existing QUBO strategy and continue the solution process. Similarly, in the ILP strategy from Figure <ref>, which currently lacks a QUBO solving step, we could extend it by adding a step to convert the ILP into a QUBO <cit.>. Composing Meta-Solver Strategies offers substantial time-saving benefits by leveraging existing implementations and heuristics. However, there are scenarios where reusing a strategy may not be advantageous. For example, one approach to solving QUBOs involves the Variational Quantum Eigensolver (VQE) <cit.>, whose performance depends heavily on finding a suitable ansatz, which can be highly problem specific. Implementing a general-purpose VQE and reusing it across different QUBOs can lead to performance degradation. In this example, a user needs to be guided in two decisions. First, they must decide whether reformulating the problem as a QUBO is beneficial to the solution process, and if so, they must decide which QUBO solving method is best suited to their specific problem. In conclusion, it is crucial to strike a balance between when reusing general implementations proves beneficial and when it is detrimental. By carefully evaluating the specific characteristics of the problem domain and the performance implications of reuse, practitioners can effectively leverage the composing of Meta-Solver Strategies to optimize solution processes. § IMPLEMENTATION In this section, we present how our ProvideQ toolbox prototype <cit.> implements the concepts set out in Section <ref>. This implementation allows us to evaluate the concept in Section <ref> and enables users to explore the Meta-Solver Strategies. The ProvideQ toolbox is structured with a client-server architecture where the server is responsible for data retention and solver execution and the client serves as a user interface. This separation, in combination with a well-documented REST API, also ensures that the toolbox can be controlled automatically and by other applications. The data structures and processes within the ProvideQ toolbox closely resemble the key definitions presented in Section <ref>. Both the root problem and every step of a Meta-Solver Strategy are represented as nodes of a tree-like, recursive data structure.
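One way such a tree-like recursive structure could look is sketched below; the field names are illustrative and not taken from the actual implementation.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ProblemNode:
    """The root problem or one sub-problem created by a Meta-Solver Step."""
    problem_type: str                  # e.g. "VRP", "QUBO", "SAT"
    instance: str                      # problem data in a standardized text format
    solver: Optional[str] = None       # solver selected for this node, if any
    solution: Optional[str] = None     # filled in once this node has been solved
    children: list["ProblemNode"] = field(default_factory=list)

    def is_solved(self) -> bool:
        # A node is solved once it has a solution and all of its sub-problems are solved.
        return self.solution is not None and all(c.is_solved() for c in self.children)
```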
Starting with an empty root problem node, the toolbox automatically expands the problem tree with problem branches corresponding to the tasks required for executing a selected solver. This recursive data structure enables the toolbox to solve steps independently and in parallel. The design of the toolbox is centered around the definition and implementation of problem solvers. Problem solvers need to implement a simple interface which provides them with input data and requires them to return output data. Meta-Solver Strategies and orchestration units are implemented as separate layers on top of the problem solver layer. This separation ensures that developers implementing problem solvers do not need to know about the details of Meta-Solver Strategies. Furthermore, the toolbox library provides various utility modules assisting problem solver implementations in integrating external tools and languages. For example, the ProvideQ toolbox supports the usage of the Qiskit and Qrisp frameworks by providing a utility module for running Python scripts, writing input files, and reading output files. These frameworks, in turn, can be used to execute code on external hardware, like running quantum circuits on quantum computers. Additionally, the toolbox server can read and write problem instances in well-known, standardized formats like the TSPLIB format for Vehicle Routing Problems <cit.>, or the DIMACS format for Boolean Satisfiability (SAT) problems <cit.>. This ensures the toolbox can be easily integrated with existing solvers, both when using the toolbox in an external application and when integrating existing solvers into the toolbox. The toolbox client presents the features of the server in a web application. This component utilizes the server's API to visualize Meta-Solver Strategies, enabling users to submit problems, interactively select Solution Paths, and compute solutions. The user is guided through the problem-solving process with Solution Path suggestions. The problem-solving workflow involves three steps in the ProvideQ web interface, as shown in Figure <ref>. First, the user chooses a specific problem to solve from the list of available problems. Currently available problems in the prototype include VRP, QUBO, SAT and MaxCut. Second, the toolbox prompts the user to enter the problem instance in a standardized text format into the input field. Additionally, the Meta-Solver Strategy of the selected problem is visualized as a tree. Here, each node represents a Meta-Solver Step, and the user needs to select solvers consecutively to build a Solution Path for the Meta-Solver Strategy. Third, the user can inspect the current state, results, and solver-specific settings of each step in separate views. In some cases, Meta-Solver Steps require specific settings, for example, the desired number of clusters in a VRP clustering step. The user can choose between a stepwise or complete execution of the Solution Path. When partially solving a Solution Path, inspecting specific Meta-Solver Step results can guide the decision process to select the next step. For example, the current clustering of a VRP problem might still be too large to be applicable for a quantum solver, and based on this insight another clustering step is inserted into the Solution Path. At this point in time, the ProvideQ toolbox is a prototype for Hybrid Meta-Solving.
It is developed as an open-source project on GitHub[GitHub repository: <https://github.com/ProvideQ>] and currently implements problem definitions, Meta-Solver decomposition, Composing of strategies, and the orchestration unit. Additionally, the interactive tree-like user interface for Solution Paths is currently in development. The open-source model of the toolbox allows users to incorporate new problems and solvers and share them with others at their own discretion. This way, experienced users can host their own instances of the toolbox-server to leverage their computational resources and to customize the toolbox depending on their individual requirements. Many more ideas are yet to be implemented, for example the semi-automated Solution Path and backend suggestions, and the sophisticated parallel execution of Solution Paths, as described in Section <ref>. As the ProvideQ toolbox is designed with these extensions in mind, we leave these tasks to be implemented in future work. § EVALUATION We introduced a new technique for solving optimization problems: Hybrid Meta-Solving. We want to evaluate if our technique is implementable and if it is able to fully exploit state-of-the-art classical techniques. Furthermore, we want to study how the introduction of quantum steps in a Meta-Solver Strategy can affect the solving process and the obtained results. The focus of the evaluation is not to outperform any classical state-of-the-art, as this is not achievable with our currently available quantum simulations and hardware. Rather, we want to prove that our concept can reuse, extend, and combine classical techniques with quantum computing methods. More specifically, we want to answer the following research questions (RQ): * RQ1: How can classical state-of-the-art techniques be adapted in a Meta-Solver Strategy? * RQ2: How do Solution Paths with quantum subroutines perform compared to purely classical Solution Paths? §.§ Experiment Design To perform the evaluation, the Vehicle Routing Meta-Solver Strategy, as depicted in Figure <ref>, was implemented and integrated into the ProvideQ toolbox. The evaluation was performed on 23 capacitated Vehicle Routing Problems provided through the CVRPLIB website <cit.>, for which the optimal solution is already known. We utilized Set-P, provided by Augerat et al. <cit.>, due to its inclusion of both smaller (fewer than 25 cities) and larger instances (with more than 70 cities). The names of the problems indicate their major characteristics. The number of cities is indicated by n, while the number of available trucks is indicated by k. For instance, p-n50-k7 is a problem with 50 cities (one of which is the depot) and 7 trucks. The evaluation was conducted on a MacBook Pro from 2021 with an M1 chip and 32 gigabytes of RAM. The quantum steps were simulated, rather than executed on a quantum computer. Assuming that we only cluster once, we could create a total of six Solution Paths, four of which will be examined further: (1) No clustering + LKH-3 Solver, (2) 2-Phase TSP clustering + LKH-3 Solver, (3) 2-Phase TSP clustering + Qrisp QAOA Solver, (4) 2-Phase TSP clustering + D-Wave Annealer. Solution Paths (1) and (2) are purely classical, whereas Solution Paths (3) and (4) combine a classical clustering technique with hybrid QUBO solving techniques. We did not consider the VRP clustering in combination with the LKH-3 Solver, as for the selected benchmarking instances, a clustering was not necessary to successfully apply LKH-3. 
However, we did combine LKH-3 with the 2-Phase TSP clustering to enable a more insightful comparison between purely classical and hybrid solvers. This was necessary because the 2-Phase TSP clustering had to be applied to execute the QAOA and Annealing Solver. We did not consider the Phase Estimation Solver because it could only be applied to TSP instances containing a maximum of five cities. Even after applying the 2-Phase TSP clustering, the majority of derived clusters were larger than five cities, preventing the identification of solutions for all problem instances except two. We faced a similar issue with the QAOA solver and therefore decided to include only one of these two solvers in the data shown below, as the omitted results are not meaningful in the presented context. We also tried to address this issue by applying the clustering multiple times, but this resulted in highly inefficient solutions. All Solution Paths were executed five times to account for the varying computation times and non-deterministic steps of the algorithms. We compare them by measuring their solution quality and execution time. The results are shown in Figure <ref> and Figure <ref>. §.§ Results Answering RQ1, we demonstrated that existing monolithic classical solvers, such as LKH-3, can be repurposed for use in Meta-Solver Strategies. When employed as a standalone solver, LKH-3 yields highly satisfactory results and produces nearly optimal solutions in a matter of seconds for each problem instance. Furthermore, the Meta-Solver Strategy permitted the integration of the LKH-3 solver with a clustering approach. As anticipated, the incorporation of a clustering step typically results in a reduction in solution quality, yet it simultaneously leads to a significant reduction in computation times. These time savings may become even more pronounced in the context of more complex problem instances. It is evident that the standalone LKH-3 and LKH-3 with clustering offer distinct advantages when addressing the problem at hand. Standalone LKH-3 necessitates a greater investment of time, yet it is associated with the generation of superior solutions. In contrast, LKH-3 with clustering requires less time, yet it is associated with the generation of inferior solutions. By communicating this information to the user, it becomes possible for them to make an informed decision regarding the approach that best suits their needs. Answering RQ2, our Meta-Solving approach allows for the straightforward exchange of solvers and integration of quantum computing techniques. The 2-Phase TSP clustering was necessary for the retrieval of results, as the original problems are too large to be solved in a simulated environment. A direct comparison between LKH-3 and the quantum subroutines is not entirely fair; therefore, we do not compare the standalone LKH-3 with any of the hybrid paths. Interesting observations can be made when comparing the LKH-3 with clustering and the annealer with clustering paths. Both apply the same clustering technique and start with the same input. While the simulated annealer always performs worse than LKH-3, there are some cases where its solution is very close. Even though the hybrid approach entails a significant amount of simulation overhead, it has been demonstrated that competitive solutions can be retrieved in certain instances.
However, the annealing approach has also exhibited instances where it performed considerably worse, particularly for the larger problem instances with more than 70 cities. The QAOA technique has an even greater simulation overhead than the annealing method, and therefore only yielded solutions for two problems. Both problems exhibited a favorable ratio between cities and trucks, with a greater number of trucks available than in other problems. Consequently, the 2-Phase clustering technique resulted in the formation of smaller clusters, which enabled the simulation to proceed. The results obtained by the QAOA approach were comparable to those obtained by LKH-3 and the Annealer. It can be concluded that Solution Paths containing quantum subroutines performed worse than purely classical Solution Paths, even when the same clustering method was applied to them. This result was anticipated and is largely attributed to the overhead introduced to enable the application of the quantum simulations. There is still a considerable effort needed before quantum methods can be considered competitive with their classical counterparts. Nevertheless, our Meta-Solving framework allows for the immediate utilization of such methods once they become available. § DISCUSSION AND LIMITATIONS §.§ Limitations of Quantum Computing Meta-Solver Strategies that involve quantum computing steps are only as powerful as existing quantum algorithms and hardware allow. It is reasonable to imagine that quantum computers can outperform classical computers in certain tasks, and therefore reasonable to claim that we can exploit these advantages with our approach. However, in the current NISQ era, we are still far from achieving quantum supremacy in optimization. As a result, it may take years or even decades before the potential of quantum steps can truly shine. However, our Meta-Solvers are built on state-of-the-art classical techniques that can be used without quantum hardware. Thus, we built a foundational platform that will be ready for the future advances that quantum computing can bring. §.§ Orchestrational Overhead A key part of our Meta-Solving approach is the solution of multiple Meta-Solver Steps, each of which is executed by different types of solvers, and in many cases on different hardware, such as a quantum computer. Allowing highly customized Solution Path choices can have the disadvantage of over-decomposing and re-composing problems, leading to additional orchestration overhead. Monolithic state-of-the-art solvers require less orchestration and therefore save some computational overhead. However, this orchestration overhead, meaning parsing files into different formats or sending a solving request to a compute cluster, is usually small. We argue that this overhead is negligible because it usually takes only a few seconds or even milliseconds to transform and send this kind of data. However, there is one exception to this argument, and that is the transformation of classical data into quantum circuits. Even if there is a problem formulation that is advantageous for a quantum computer, such as a parametrized circuit that represents an Ising model, there may still be significant overhead in finding an embedding for the quantum circuit on the actual hardware. These kinds of problems are often seen in quantum computing and are a problem that comes with the limitations of current NISQ hardware.
We believe that embedding problems on a quantum computer will be much more efficient once scalable hardware, error mitigation, and better abstraction layers and compilers are available, so this problem should be solved in the future. §.§ Implementation of the Platform The design and implementation of Meta-Solver Strategies is a challenging task. First, we have to find mathematically reasonable decompositions, and then we have to implement highly efficient solvers (quantum and classical), analysis techniques for recommending Solution Paths, and user interface/user experience features to allow others to easily interact with our platform. We have already implemented a prototype of a Meta-Solving platform and shown that the concepts we have presented are feasible. However, creating an industry-ready platform that includes a wide variety of Meta-Solver Strategies still requires more work. § CONCLUSION This paper introduces a novel technique, Hybrid Meta-Solving, which fuses the strengths of classical and quantum optimization. It decomposes mathematical problems into multiple sub-problems and implements a software framework that enables the seamless exchange and extension of solvers for the sub-problems. The core concepts of Meta-Solving were introduced, including the semi-automated suggestion of Solution Paths based on user input and problem characteristics, the parallel execution and comparison of Meta-Solver Steps, and an automated backend selection. Additionally, the design and implementation of Meta-Solver Strategies were described, along with their varying degrees of generality and the improvement in reusability that capsuling strategies provide. The majority of our concepts were implemented in the ProvideQ toolbox prototype, which allows users to engage with Meta-Solver Strategies and select Solution Paths in interactive workflows. Our evaluation demonstrated that Meta-Solver Strategies can reuse existing state-of-the-art solvers and leverage excellent results for the Vehicle Routing examples. Furthermore, it allows us to configure different Solution Paths that leverage different advantages, such as high solution quality or rapid solution generation. We demonstrated that the incorporation of quantum algorithms is feasible, providing an accessible way to utilize quantum computing techniques. However, we also observed that Solution Paths that include quantum steps yield inferior results compared to purely classical Solution Paths, rendering the application of quantum computing impractical for these examples. We hope that further advances in the field of quantum computing will result in a change of these results, rendering quantum computing a more competitive or even superior alternative to classical state-of-the-art solvers. Our framework is capable of providing a fundamental platform for hybrid optimization and will become competitive once future advances in quantum computing are made. In future work, we intend to delve more deeply into the subjects of semi-automated Solution Path suggestions, the parallel execution of Meta-Solver Steps, and automated backend selection and circuit optimization. This paper presented the fundamental concepts underlying these techniques, but there is a great deal of research that is necessary to implement them in a general context. § ACKNOWLEDGMENT The authors would like to thank Ina Schaefer for her valuable input to discussions, and Lucas Berger for helping to implement the presented evaluation. 
This work has been supported by the German Federal Ministry for Economics and Climate Action (BMWK) in the projects ProvideQ (reference numbers: 01MQ22006D and 01MQ22006F) and QuaST (reference number: 01MQ22004D). We acknowledge that we used AI tools to improve the grammar of this paper. We used DeepL Write for basic text editing and occasionally prompted ChatGPT to help with rephrasing sentences. Some icons in our Figures were created with Adobe Firefly.
http://arxiv.org/abs/2405.09352v1
20240515140058
On the impact of the antenna radiation patterns in passive radio sensing
[ "Federica Fieramosca", "Vittorio Rampa", "Stefano Savazzi", "Michele D'Amico" ]
eess.SP
[ "eess.SP", "cs.SY", "eess.SY" ]
On the impact of the antenna radiation patterns in passive radio sensing This work is funded by the EU. Grant Agreement No: 101099491 Federica Fieramosca, Vittorio Rampa, Senior Member, IEEE, Stefano Savazzi, Member, IEEE, and Michele D'Amico, Senior Member, IEEE F. Fieramosca and M. D'Amico are with Politecnico di Milano, DEIB department, e-mail: {federica.fieramosca,michele.damico}@polimi.it. V. Rampa and S. Savazzi are with Consiglio Nazionale delle Ricerche (CNR), IEIIT institute, e-mail: {vittorio.rampa,stefano.savazzi}@ieiit.cnr.it. May 20, 2024 ================================================================================================================================ Electromagnetic (EM) body models based on the scalar diffraction theory make it possible to predict the impact of subject motions on the radio propagation channel without requiring a time-consuming full-wave approach. On the other hand, they are less effective in complex environments characterized by significant multipath effects. Recently, emerging radio sensing applications have proposed the adoption of smart antennas with non-isotropic radiation characteristics to improve coverage. This letter investigates the impact of antenna radiation patterns in passive radio sensing applications. Adaptations of diffraction-based EM models are proposed to account for antenna non-uniform angular filtering. Next, we quantify experimentally the impact of diffraction and multipath disturbance components on radio sensing accuracy in environments with smart antennas. EM body model, scalar diffraction, antenna radiation pattern, passive radio sensing, device-free radio sensing § INTRODUCTION Passive or device-free radio sensing is an opportunistic technique that employs stray ambient signals from Radio Frequency (RF) devices to detect, locate, and track people who do not carry any electronic device <cit.>. The effect of the presence of body obstacles on the received RF signals is a well-known topic in the wireless communications community <cit.>. However, only recently have radio sensing techniques been proposed to provide sensing capabilities while performing radio communication, according to the Communication while Sensing paradigm <cit.>. Quantitative evaluation <cit.> of perturbations due to the presence or movements of people (the targets) has paved the way to the exploitation of electromagnetic (EM) models for passive radio sensing. In fact, the body-induced perturbations that impair the radio channel can be acquired, measured, and processed using model-based methods to estimate target locations <cit.>, to track targets <cit.>, or to assess location accuracy during network pre-deployment <cit.>. However, a general EM model for the prediction of body-induced effects on propagation is still under scrutiny <cit.>, or too complex to be of practical use for real-time sensing scenarios <cit.>. Simpler human-body shadowing models have been recently proposed for Device-Free Localization (DFL) based on scalar diffraction theory <cit.>. Other semi-empirical models <cit.> have also been proposed for DFL applications <cit.>. However, these models require lengthy calibration pre-processing steps and will not be considered here (see <cit.> for references).
§ PAPER CONTRIBUTIONS Considering the interest in novel Wireless Local Area Network (WLAN) sensing systems <cit.> and tools <cit.>, with devices leveraging antennas with non-uniform <cit.>, and/or re-configurable <cit.> radiation characteristics, it is deemed necessary to develop effective EM models <cit.> that meet these emerging needs. Most of the previous tools were based on diffraction methods <cit.> and targeted devices equipped with omnidirectional antennas, with the exception of <cit.> that focused on human blockage at 73 GHz with the body represented as a semi-infinite rectangular shape and the paraxial approximation <cit.> being used. The key ideas discussed in this letter are: i) the proposal of a simple human-body shadowing model, that includes also the antenna directivity characteristics; ii) the application of the proposed model in passive radio sensing and validation of its predictive potential; and iii) the evaluation of the impact of antenna radiation patterns by exploiting real on-field measurements in an indoor reflective environment. The paper is organized as follows: Sect. <ref> presents a EM body model that includes the directional radiation pattern hypothesis while Sect. <ref> analyzes the body-induced effects in scenarios with mixed antenna systems (both directional and omnidirectional). Sect. <ref> validates the proposed body model in real field scenarios. Finally, Sect. <ref> draws some conclusions and proposes additional investigations. § EM BODY MODELS In this work, the statistical body model proposed in <cit.> for isotropic antennas is extended to take into account directional antennas with an assigned radiation pattern. We consider a single body, but the extension to multi-body scenarios can be inferred according to <cit.>. We also assume that the body is in the Fraunhofer's regions of the antennas of the transmitter (TX) and receiver (RX): the regions start ≈ 25 cm away from the directional and ≈ 15 cm from the omnidirectional antennas of the experimental setup shown in Sect. <ref>. As shown in Fig. <ref>, the 3-D shape of the human body is modeled as a 2-D rectangular absorbing sheet S <cit.> of height H and traversal size W, and has nominal position coordinates (x,y), w.r.t. the TX position, namely the projection of its barycenter on the horizontal plane. The scalar diffraction theory <cit.> quantifies the impact of this obstruction. First, a distribution of Huygens' sources of elementary area dS is assumed to be located on the absorbing sheet. Then, the electric field E=E_0-∫_SdE at the RX is obtained by subtracting the contribution of the Huygens' sources ∫_SdE from the electric field E_0 of the free-space scenario (with no target in the link area). Using the received electric field E_0 under free-space condition as reference, for both isotropic antennas, we get<cit.>: E/E_0=1-j d/λ ∫_S1/r_1 r_2 exp{ -j2π/λ(r_1+r_2-d)} dξ_2 dξ_3, where d is the link length, λ=c/f is the wavelength, while f is the frequency and c is the speed of light. Notice that each elementary source dS=dξ_2 dξ_3 has distance r_1 and r_2 from the TX and RX, respectively. The received power P is defined at the generic frequency f, omitted here for clarity, as: P={[ P_0+w_0 free-space only; P_0-A_S(x,y)+w_T with target S, ]. where A_S(x,y)=-10 log_10|E / E_0|^2 is the extra-attenuation due to the presence of S at coordinates (x,y). The free-space power P_0 is a constant that depends only on the link geometry and on the propagation coefficients: it is assumed to be known, or measured. 
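For readers who want to experiment with the model, the field ratio V/V_0 above can be evaluated numerically by discretizing the absorbing sheet into Huygens' sources, as sketched below. The geometry conventions (TX at the origin, RX on the x axis, boresights along the LOS) and the isotropic default pattern are our own assumptions; a measured normalized radiation pattern would be substituted for `pattern`.

```python
import numpy as np

C = 3e8  # speed of light (m/s)


def extra_attenuation_db(freq, d, body_xy, W=0.55, H=2.0, h_los=0.99,
                         pattern=lambda theta: np.ones_like(theta), n=200):
    """Numerical evaluation of V/V0 for a 2-D absorbing sheet between TX and RX.

    TX is at the origin, RX at (d, 0, 0); the sheet is a vertical rectangle of
    width W (across the LOS) and height H whose barycenter projects onto
    body_xy = (x, y) in the horizontal plane, with the LOS at height h_los
    above the floor. `pattern(theta)` is the normalized radiation pattern
    versus the angle off boresight, applied to both antennas.
    """
    lam = C / freq
    x, y = body_xy
    xi2 = np.linspace(y - W / 2, y + W / 2, n)         # across-LOS coordinate
    xi3 = np.linspace(-h_los, H - h_los, n)            # vertical coordinate
    X2, X3 = np.meshgrid(xi2, xi3)
    dA = (xi2[1] - xi2[0]) * (xi3[1] - xi3[0])

    r1 = np.sqrt(x**2 + X2**2 + X3**2)                 # TX -> Huygens' source
    r2 = np.sqrt((d - x)**2 + X2**2 + X3**2)           # Huygens' source -> RX
    theta_t = np.arccos(np.clip(x / r1, -1, 1))        # angle off the TX boresight
    theta_r = np.arccos(np.clip((d - x) / r2, -1, 1))  # angle off the RX boresight

    integrand = (np.sqrt(pattern(theta_t) * pattern(theta_r)) / (r1 * r2)
                 * np.exp(-1j * 2 * np.pi / lam * (r1 + r2 - d)))
    v_ratio = 1 - 1j * d / lam * np.sum(integrand) * dA
    return -10 * np.log10(np.abs(v_ratio) ** 2)


# Body blocking the LOS, 1 m from the TX, on a 4 m link at 2.45 GHz.
print(extra_attenuation_db(2.45e9, 4.0, (1.0, 0.0)))
```

Passing a narrower pattern for both antennas qualitatively reproduces the stronger, more localized extra-attenuation discussed for directional antennas in the next section.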
The log-normal multipath fading and the other disturbances are modeled as the Gaussian noise terms w_0∼𝒩(0,σ_0^2), with variance σ_0^2, and w_T∼𝒩(μ_𝑇,σ_𝑇^2), with mean μ_T=Δ h_T and variance σ_𝑇^2=σ_0^2+Δσ_𝑇^2, respectively. Δ h_T and Δσ_T^2≥0 are the residual stochastic fading terms that depend on the specific scenario as in <cit.>. For a generic non-isotropic antenna, equation (<ref>) must be modified to take into account the antenna radiation pattern G(θ,φ)=G_0 f(θ,φ), where G_0 is the gain and f(θ,φ) is the normalized radiation pattern, while θ and φ are the polar coordinates, usually referred to the antenna phase center. First, we consider an isotropic RX antenna and a directional TX one that is pointed in the Line Of Sight (LOS) direction, with normalized radiation pattern f_t(θ_t,φ_t) and polar coordinates θ_t=θ_t(r_1,r_2) and φ_t=φ_t(r_1,r_2) w.r.t. the TX antenna phase center. The field ratio E/E_0 in (<ref>) becomes: E/E_0= 1-j d/λ ∫_S1/r_1 r_2 √(f_t(θ_t,φ_t)) · · exp{ -j2π/λ(r_1+r_2-d)} dξ_2 dξ_3. If the receiving antenna is also directional and pointed toward the transmitter in the LOS direction, the received signal can be calculated, with good approximation, by weighting the contributions from the elementary Huygens' sources with the square root of the receiving antenna radiation pattern. If V and V_0 are the complex voltages at the RX antenna connector in the actual scenario and in free-space, respectively, we get: V/V_0= 1-j d/λ ∫_S1/r_1 r_2 √(f_t(θ_t,φ_t) f_r(θ_r,φ_r)) · · exp{ -j2π/λ(r_1+r_2-d)} dξ_2 dξ_3, where θ_r=θ_r(r_1,r_2) and φ_r=φ_r(r_1,r_2) are the polar coordinates w.r.t. the receiving antenna phase center. Eq. (<ref>) is derived from (<ref>) by noting that V and V_0 are linearly dependent on E and E_0, respectively, through the effective antenna length. In this link configuration, the extra-attenuation for target in (x,y) is now given by A_S(x,y)=-10 log_10|V / V_0|^2. § BODY-INDUCED EFFECTS WITH MIXED ANTENNAS The measurement sessions took place in a hall with size 6.15 m × 14.45 m and floor-ceiling height equal to 3.35 m. As shown in Fig. <ref>, TX and RX nodes are spaced d=4.00 m apart, while the LOS is horizontally placed at h=0.99 m from the floor. Most surfaces are highly reflective, which cause poor DFL performances with omnidirectional antennas <cit.>. The goal is to verify the predictive capacity of the model in such complex conditions. The received power P is measured using a real-time Spectrum Analyzer (SA) <cit.> with a built-in tracking generator. The SA tracks N_f=401 frequency points equally spaced with Δ f=1.25 MHz and settings as in Tab. <ref>. In what follows, three scenarios are analyzed featuring: i) the omni-omni, where both TX and RX antennas are omnidirectional; ii) the omni-dir, where only the TX is equipped with a directional antenna; and iii) the dir-dir, where both antennas are directional. Directional antennas operate at frequency band 2.4-2.5 GHz: other specs <cit.> are summarized in Tab. <ref>. Omnidirectional antennas are vertical monopoles with 2 dBi gain. To compare the measurements against the model predictions, we modeled the body as an absorbing rectangular 2-D sheet with height H=2.0 m and traversal size W=0.55 m (see Fig. <ref>). The maximum transversal size (minor axis) of the first Fresnel's ellipsoid is about 0.70 m, while the beam width (at -3 dB) of each directional antenna is about 2 m at the same point (d_1=d_2=d/2). The free-space received power P_0(f_k) is obtained for each frequency of the set { f_k} _k=1^N_f. 
The received power P(f_k,ℓ) is then measured with the target located in each of the ℓ=1,...,75 marked positions of the grid points of Fig. <ref>. Each position ℓ has coordinates (x_ℓ,y_ℓ) with spacing 0.25 m along and 0.3 m across the LOS. The measured attenuation, due to the target in the ℓ-th position, is evaluated for each f_k as A_S,k^(m)(ℓ)=-10 log_10[P(f_k,ℓ) / P_0(f_k)] and then averaged to obtain the mean attenuation A_S^(m)(ℓ)=1/N_f ∑_k=1^N_fA_S,k^(m)(ℓ). The color-coded maps in Figs. <ref>.a,  <ref>.b, and  <ref>.c show the attenuation values for each subject position. For the omni-omni case of Fig. <ref>.a, the maximum value of attenuation is ≈4 dB. The body effect is thus negligible, except for positions very close to the antennas, due to a substantial amount of energy that reaches the RX antenna via multipath even if the first Fresnel's ellipsoid is blocked. On the contrary, in the dir-dir scenario of Fig. <ref>.c, the maximum attenuation reaches ≈ 16 dB, and the body presence near the LOS is clearly discernible. In fact, by using well-pointed directional antennas, the multipath impact is strongly reduced thanks to the angular filtering properties of the radiation patterns f(θ,φ). This scenario is thus closer to the ideal free-space environment with no disturbances. The omni-dir scenario of Fig. <ref>.b shows an intermediate behavior for some noticeable effects caused by multipath disturbances not filtered by the RX antenna. The maximum attenuation reaches ≈ 10 dB near the TX. Measurements and predictions for the omni (<ref>) and dir (<ref>) setups are compared in Fig. <ref> (bottom). The predictions are obtained by averaging A_S^(p)(ℓ)=1/N_p ∑_k=1^N_pA_S(x_ℓ+ x_k,y_ℓ+ y_k) over the attenuations A_S(·,·) corresponding to N_p small body movements around the marked positions ℓ. The goal is to let the models account for body position uncertainties as well as small, involuntary movements typically observed in human sensing <cit.>. We set x_k, y_k∼𝒰_-/2,/2 as uniformly distributed in the interval = 6 cm, and N_p=150. The measured A_S^(m)(ℓ) (dashed lines) and the predicted A_S^(p)(ℓ) (solid lines) average attenuations are compared w.r.t. 5 marked positions along two orthogonal cuts taken 0.25 m (orange lines) and 1 m (violet lines) from the TX antenna, respectively, with marks ℓ=1÷5 and ℓ=16÷20 (Fig. <ref>). The vertical bars include 60% of the measured values that cover the antenna operating band of 2.4-2.5 GHz (N_f=81). Accordingly, EM predictions are obtained for f_k in the same 2.4-2.5 GHz band but use the field ratio (<ref>) for omnidirectional antennas (square markers) and (<ref>) for directional ones (cross markers). Shaded areas include 60% of the attenuation samples used to obtain the average terms A_S^(p)(ℓ). Overall, the measurements reveal large fluctuations of the attenuations when the target is near the LOS path, while the dir-dir setup is close (on average) to the directional antenna predictions. In general, there is a negligible difference between omni and dir models when the target is far from the TX (x_ℓ>1 m) since the extra-attenuation is mainly due to the blockage of the first Fresnel's ellipsoid. Instead, a more marked difference is observed when the target moves close to the TX (x_ℓ=0.25 m) since the antenna beamwidth is now comparable with the Fresnel's area. The omni model over-estimates the attenuation obtained from the omni-omni setup due to the presence of multipath, as explained before. 
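The prediction averaging described above can be reproduced with a few lines on top of a diffraction routine such as the `extra_attenuation_db` sketch given earlier; the 6 cm displacement interval and the N_p = 150 draws follow the values stated in the text.

```python
import numpy as np

rng = np.random.default_rng(0)


def predicted_mean_attenuation(model, grid_position, jitter=0.06, n_draws=150):
    """Average a modelled extra-attenuation over small random body displacements.

    `model` is a callable (x, y) -> attenuation in dB for a body centered at (x, y);
    displacements are drawn uniformly in [-jitter/2, +jitter/2] along and across
    the LOS, mimicking the involuntary movements of a standing subject.
    """
    x0, y0 = grid_position
    samples = [model(x0 + dx, y0 + dy)
               for dx, dy in rng.uniform(-jitter / 2, jitter / 2, size=(n_draws, 2))]
    return float(np.mean(samples)), float(np.std(samples))


# Example using the extra_attenuation_db sketch above (2.45 GHz, 4 m link):
# mean_db, std_db = predicted_mean_attenuation(
#     lambda x, y: extra_attenuation_db(2.45e9, 4.0, (x, y)), (0.25, 0.0))
```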
§ BODY DETECTION AND MODEL VALIDATION We discuss here the problem of passive body localization in the environment previously analyzed. The detection problem focuses on the choice between the hypotheses F_0 and F_1 that correspond to the target outside or inside the Fresnel's ellipsoid of the link, respectively. According to Fig. <ref>, we split the 75 inspected positions in two groups: namely, the |ℒ_1|=L_1=25 positions (ℓ∈ℒ_1, blue crosses) that fall inside the Fresnel's ellipsoid, and the |ℒ_0|=L_0=38 positions (ℓ∈ℒ_0, red crosses) that fall outside. At time t, the decision whether the target is inside or outside the Fresnel's ellipsoid is based on the extra-attenuation A_S=P_0-P(t) that is observed w.r.t. the free-space power P_0 (in dBm). Omitting time t for clarity, the Log-Likelihood Ratio (LLR): Γ(A_S)=log[(A_S |F_1)/(A_S |F_0)] is used to discriminate (via thresholding on Γ) between both hypotheses. Probabilities (A_S |F_0)∼𝒩(μ_F_0,σ_F_0^2) and (A_S |F_1)∼𝒩(μ_F_1,σ_F_1^2) are log-normal distributed. The parameters μ_F_0 and μ_F_1 model the average attenuations terms, while σ_F_0=σ_0 and σ_F_1=σ_0+Δσ_T are the deviations. Assuming no prior information about the subject location, it is also (F_0)=(F_1)=1/2. Using the log-normal model (<ref>), (<ref>) can be rewritten as: Γ(A_S) = 1/2(A_S-μ_F_0/σ_F_0)^2- 1/2(A_S-μ_F_1/σ_F_1)^2-log(σ_F_1/σ_F_0). The LLR parameters are obtained from the predictions A_S^(p)(ℓ) of Sect. <ref>, namely μ_F_i≈μ_F_i^(p)=1/L_i ∑_ℓ∈ℒ_1A_S^(p)(ℓ) and σ_F_i≈σ_F_i^(p)=√(1/L_i ∑_ℓ∈ℒ_i[A_S^(p)(ℓ)-μ_F_i^(p)]^2), for hypotheses F_0 and F_1. The fading effects <cit.>, i.e. Δ h_T=0 are also neglected to highlight the diffraction terms only. For comparison, the LLR parameters are also obtained from measurements, μ_F_i≈μ_F_i^(m) and σ_F_i≈σ_F_i^(m), by replacing A_S^(p)(ℓ) with A_S^(m)(ℓ). In Fig. <ref>, we analyze the Receiver Operating Characteristic (ROC) figures<cit.>, using the LLR as in (<ref>), for all scenarios. The ROC associated to the dir-dir scenario is the one with the best performance, being closer to the EM model predictions. The trivial detector implements a random choice. Considering that ROC performances depend on the LLR decision regions, i.e. the separation of the log-likelihood (LL) functions <cit.>, in Fig. <ref>, we compare the LLs (A_S |F_1) and (A_S |F_0) for omni-omni (top) and dir-dir scenarios (bottom) obtained from experimental data (μ_F_i^(m),σ_F_i^(m)) and predictions (μ_F_i^(p),σ_F_i^(p)) using synthetic data, respectively. In Tab. <ref> we also report the average LL separation μ_F_0-μ_F_1 and the corresponding Kullback-Leibler (KL) divergence <cit.> using measured and predicted parameters. The decision regions for the dir-dir scenario are well separated (about μ_F_0^(m)-μ_F_1^(m)=11.9 dB) and this is confirmed by the dir model (<ref>) as μ_F_0^(p)-μ_F_1^(p)=9.6 dB. Similarly, a KL divergence of 2.44 is predicted against the measured one of 2.6. The decision regions for the omni-omni setup are almost overlapped, with average separation of 1.8 dB and negligible KL divergence due to the multipath effects and the absence of any angular filtering. Such effects are not captured by the omni model which performs poorly. § CONCLUSIONS This letter proposes a human-body model that accounts for antennas with non-isotropic radiation characteristics and evaluates the impact of the radiation pattern for passive radio sensing. 
Diffraction and multipath components, that contribute to radio sensing accuracy, are evaluated experimentally in an indoor environment with mixed antenna configurations. The angular filtering properties of directional antennas mitigate the multipath effects and make the propagation scenario closer to the results predicted by the diffraction-based EM model. Considering the problem of classifying target proximity, the model effectively predicts the separation of the decision regions, observed with directional antennas, for target inside or outside the Fresnel's ellipsoid. On the contrary, using omnidirectional antennas, the multipath effects dominate over diffraction and the model fails to predict such separation. Future works will adapt the proposed model to Wireless LAN sensing devices leveraging antennas with software re-configurable radiation characteristics. 10 youssef-2007 M. Youssef, et al., Challenges: Device-free passive localization for wireless environments, Proc. 13th Annu. ACM Int. Conf. Mobile Comput. Netw. (MobiCom), pp. 222–229, 2007. savazzi-2016S. Savazzi, et al., On the use of stray wireless signals for sensing: A look beyond 5G for the next generation of industry, Computer, vol. 52, no. 7, pp. 25-36, July 2019. brittain-1994J. E. Brittain, Albert Hoyt Taylor [Scanning the Past], Proc. IEEE, vol. 82, no. 6, pp. 958, Jun. 1994. krupka-1968 Z. Krupka, The effect of the human body on radiation properties of small-sized communication systems, IEEE Transactions on Antennas and Propagation, vol. 16, no. 2, pp. 154-163, March 1968. king-1977 H. King, et al., Effects of a human body on a dipole antenna at 450 and 900 MHz, IEEE Transactions on Antennas and Propagation, vol. 25, no. 3, pp. 376–379, May 1977. ghaddar-2004M. Ghaddar, et al., Human body modelling for prediction of effect of people on indoor propagation channel, Electronics Letters, vol. 40, no. 25, pp. 1592–31594, Dec. 2004. ghaddar-2007M. Ghaddar, et al., A conducting cylinder for modeling human body presence in indoor propagation channel. IEEE Transactions on Antennas and Propagation, vol. 55, no. 11, pp. 3099–3103, Nov. 2007. koutatis-2010G. Koutitas, Multiple human effects in body area networks, IEEE Antennas and Wireless Propagation Letters, vol. 9, pp. 938–941, 2010. wang-2015 Z. Wang, et al., A Diffraction Measurement Model and Particle Filter Tracking Method for RSS-Based DFL, IEEE Journal on Selected Areas in Communications, vol. 33, no. 11, pp. 2391–2403, Nov. 2015. kalt-2021 O. Kaltiokallio, et al. A Novel Bayesian Filter for RSS-Based Device-Free Localization and Tracking, IEEE Transactions on Mobile Computing, vol. 20, no. 3, pp. 780-795, March 2021. rampa-2017V. Rampa, et al., EM models for passive body occupancy inference, IEEE Antennas and Wireless Propagation Letters, vol. 17, no. 16, pp. 2517–2520, 2017. rampa-2022aV. Rampa, et al., Electromagnetic Models for Passive Detection and Localization of Multiple Bodies, IEEE Transactions on Antennas and Propagation, vol. 70, no. 2, pp. 1462–1745, 2022. shit-2019 R.C. Shit, et al., Ubiquitous Localization (UbiLoc): A Survey and Taxonomy on Device Free Localization for Smart World, IEEE Communications Surveys & Tutorials, vol. 21, no. 4, pp. 3532–3564, Fourthquarter 2019. kianoush-2016S. Kianoush, et al., Pre-deployment performance assessment of device-free radio localization systems, Proc. of the 2016 IEEE International Conference on Communications Workshops (ICC'16), pp. 1–6, 2016. hamilton-2014B. R. 
Hamilton, et al., Propagation modeling for radio frequency tomography in wireless networks, IEEE J. Sel. Topics Signal Process., vol. 8, no. 1, pp. 55–65, Feb. 2014. eleryan-2011A. Eleryan, et al., Synthetic generation of radio maps for device-free passive localization, Proc. of the IEEE Global Telecommunications Conference (GLOBECOM'11), pp. 1–5, 2011. nannuru-2012S. Nannuru, et al., Radio-frequency tomography for passive indoor multitarget tracking, IEEE Transactions on Mobile Computing, vol. 12, no. 12, pp. 2322–2333, 2012. guo-2014 Y. Guo, et al., An exponential-Rayleigh model for RSS-based device-free localization and tracking, IEEE Transactions on Mobile Computing, vol. 14, no. 3, pp. 484–494, 2014. access-2021A. Abdelgawwad, et al. A Trajectory-Driven 3D Channel Model for Human Activity Recognition, IEEE Access, vol. 9, pp. 103393-103406, 2021 wilson-2010J. Wilson, et al., Radio tomographic imaging with wireless networks, IEEE Trans. on Mobile Comp., vol. 9, no.5, pp. 621–632, May 2010. mohamed-2017M. Mohamed, et al., Physical-statistical channel model for off-body area network, IEEE Antennas and Wireless Propagation Letters, vol. 16, pp. 1516–1519, 2017. wang-2012J. Wang, et al., Device-free localisation with wireless networks based on compressive sensing, IET Communications, vol. 6, no. 5, pp. 2395–2403, Oct. 2012. sukor-2020A.S.A. Sukor, et al., RSSI-Based for Device-Free Localization Using Deep Learning Technique, Smart Cities, vol. 3, no. 2, pp. 444–455, 2020. rampa-2022bV. Rampa, et al., Electromagnetic Models for Device-Free Radio Localization with Antenna Arrays, Proc. of the IEEE-APS Topical Conference on Antennas and Propagation in Wireless Communications (APWC'22), pp. 1–5, Sept. 5–9, Cape Town, South Africa, 2022. halperin-2011D. Halperin, et al., Predictable 802.11 packet delivery from wireless channel measurements, ACM SIGCOMM Computer Communication Review, vol. 41, no. 4, p. 159–170, 2011. xie-2018Y. Xie, et al., Precise power delay profiling with commodity Wi-Fi, IEEE Transactions on Mobile Computing, vol. 18, no. 6, pp. 1342–1355, 2018. chauhan-2021S. Chauhan, et al., IEEE 802.11be: A Review on Wi-Fi 7 Use Cases, Proc. of the 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO'21), pp. 1–7, 2021. atif-2020M. Atif, et al., Wi-ESP—A tool for CSI-based Device-Free Wi-Fi Sensing (DFWS), Journal of Computational Design and Engineering, vol. 7, no. 5, pp. 644–656, Oct. 2020. zhang-2018L. Zhang, et al., DeFi: Robust Training-Free Device-Free Wireless Localization With WiFi, IEEE Transactions on Vehicular Technology, vol. 67, no. 9, pp. 8822–8831, Sept. 2018. shukri-2019S. Shukri, et al., Enhancing the radio link quality of device-free localization system using directional antennas, Proc. of the 7th International Conference on Communications and Broadband Networking, pp. 1–5, Apr. 2019. garcia-2020D. Garcia, et al., POLAR: Passive object localization with IEEE 802.11 ad using phased antenna arrays, Proc. of the IEEE INFOCOM 2020-IEEE Conference on Computer Communications, pp. 1838–1847, Jul. 2020. santoboni-2022M. Santoboni, et al., Wireless LAN Sensing with Smart Antennas, Proc. of the 16th European Conference on Antennas and Propagation (EuCAP'22), pp. 1–5, Mar. 27–Apr. 01, Madrid, Spain, 2022. maccartney-2016G. R. MacCartney, et al., Millimeter-wave human blockage at 73 GHz with a simple double knife-edge diffraction model and extension for directional antennas, Proc. 
of the IEEE 84th Vehicular Technology Conference (VTC-Fall'16), pp. 1–6, Sept. 2016. rampa-2019 V. Rampa et al., Dual-target body model for device-free localization applications , Proc. of the IEEE-APS Topical Conference on Antennas and Propagation in Wireless Communications (APWC'19), pp. 1–6, Sept. 2019. mokhtari-1999H. Mokhtari, et al., Comparative study of lateral profile knife-edge diffraction and ray tracing technique using GTD in urban environment, IEEE Transactions on Vehicular Technology, vol. 48, no. 1, pp. 255–261, 1999. durgin-2009 G. Durgin, The practical behavior of various edge-diffraction formulas, IEEE Antennas Propag. Mag., vol. 51, no. 3, pp. 24–35, Jun. 2009. tek RSA500 Series Real Time Spectrum Analyzers, [online], 2022, accessed: Sep. 27, 2022. Available: <https://www.tek.com/en/products/spectrum-analyzers/rsa500>. antenna_spec TP-LINK, User Guide, TL-ANT2409A 2.4 GHz 9 dBi Directional antenna, Rev. 1.0.2, 7106506238, [online], 2016, accessed: May 16, 2023. Available: <https://static.tp-link.com/res/down/doc/TL-ANT2409A_V1_UG.pdf>. ROC T. Fawcett, An introduction to ROC analysis,Pattern Recognit. Lett., vol. 27, pp. 861–874, 2006. kullKullback S, Leibler RA “On information and sufficiency,” Ann Math Stat 22:79–86, 1951.
http://arxiv.org/abs/2405.08679v1
20240514150009
Investigating Design Choices in Joint-Embedding Predictive Architectures for General Audio Representation Learning
[ "Alain Riou", "Stefan Lattner", "Gaëtan Hadjeres", "Geoffroy Peeters" ]
cs.SD
[ "cs.SD", "cs.AI", "cs.LG", "eess.AS" ]
Investigating Design Choices in Joint-Embedding Predictive Architectures for General Audio Representation Learning Alain Riou Stefan Lattner Gaëtan Hadjeres Geoffroy Peeters ======================================================================================================== This paper addresses the problem of self-supervised general-purpose audio representation learning. We explore the use of Joint-Embedding Predictive Architectures (JEPAs) for this task, which consist of splitting an input mel-spectrogram into two parts (context and target), computing neural representations for each, and training the neural network to predict the target representations from the context representations. We investigate several design choices within this framework and study their influence through extensive experiments by evaluating our models on various audio classification benchmarks, including environmental sounds, speech and music downstream tasks. We focus notably on which part of the input data is used as context or target and show experimentally that it significantly impacts the model's quality. In particular, we notice that some effective design choices in the image domain lead to poor performance on audio, thus highlighting major differences between these two modalities. Self-supervised learning, Momentum Encoder, Masked Image Modeling, Audio Representation Learning, Joint-Embedding Predictive Architecture § INTRODUCTION Self-supervised learning techniques are now becoming essential in training powerful models <cit.>, as they make it possible to create meaningful representations by leveraging large quantities of unlabeled data. These representations can then be used as inputs for various small models to solve several downstream tasks, thus avoiding the need to train one huge model per task. To learn such representations, most methods optimize a contrastive objective using inputs that share semantic information as positive pairs <cit.>. However, these approaches usually require many negative samples to work, making them computationally expensive. To tackle this issue, <cit.> propose to use context and target networks, which makes it possible to train the model with only positive pairs artificially created using semantic-preserving data augmentations or masking. In particular, recent methods <cit.> mask information from the data and train a Transformer <cit.> to predict information about the missing part of the input from the visible one, which forces the model to learn high-level features that capture the underlying semantic information. Although initially proposed for computer vision applications, this method is agnostic to the data domain and has also been applied to general-purpose audio representation learning <cit.>. In particular, recent approaches manage to learn strong and general representations suited for environmental sounds, speech and music downstream tasks alike by relying on those principles. These methods involve many design choices, and we propose to study some of them in this paper. We specifically investigate the effect of the masking strategy (see Figure <ref>), as well as the influence of the duration of the audio samples provided to the model during training. We evaluate the effectiveness of these choices by conducting extensive experiments over a wide variety of datasets and downstream tasks, thus confirming their relevance for audio representation learning.
Finally, we provide our training and evaluation procedures, along with pretrained models, to facilitate further research in general-purpose audio representation learning[<https://github.com/SonyCSLParis/audio-representations>]. § RELATED WORK To circumvent the lack of annotations, most self-supervised learning methods consist of applying transforms to data to artificially create inputs that share semantic content. To do so, a standard approach is to randomly apply data augmentations to inputs and train a Siamese network <cit.> to learn similar latent representations for these transformed inputs. To prevent representation collapse (i.e., all inputs being mapped to the same latent representation), a common technique is to add a repulsive term to the model's objective by treating the elements in a batch as negative samples <cit.> or by directly optimizing some batch-wise statistics <cit.>. Another solution consists of adding a predictor after one branch of the Siamese network and a stop-gradient operator after the other one <cit.>, which has been shown to prevent representation collapse implicitly <cit.>. While initially proposed for images, all these methods have also been successfully applied to audio representation learning <cit.>. Nevertheless, data augmentations are designed for specific modalities and may discard valuable information from the learned representations, such as color or pitch in the image and audio domains respectively. Instead of using such handcrafted transforms, one can partially mask the input and train the model to reconstruct the masked part given the visible one, either in the domain space <cit.> or in the latent space <cit.>. This masking strategy is modality-agnostic, leads to general representations and is highly compatible with Transformers <cit.>, masked patches being analogous to masked tokens in Masked Language Modeling <cit.>. In particular, M2D <cit.> combines an asymmetric architecture with this masking approach to produce general-purpose audio representations. § JOINT-EMBEDDING PREDICTIVE ARCHITECTURES Our method is strongly inspired by M2D <cit.> and relies on a Joint-Embedding Predictive Architecture <cit.>. The overall idea is to train a model to generate context and target representations from an input and to be able to predict the target representations from the context ones, as depicted in Figure <ref>. §.§ Training method We first convert the input audio waveform into a log-scaled mel-spectrogram and divide it into a sequence of N non-overlapping patches x_1, …, x_N over the time and frequency axes. Then, we sample two disjoint sets of indices 𝒞, 𝒯 ⊂ {1, …, N} and build a context (resp. target) sequence x_𝒞 = (x_j)_j ∈ 𝒞 (resp. x_𝒯 = (x_j)_j ∈ 𝒯), i.e. we mask the input to produce two sequences of patches x_𝒞 and x_𝒯 (see Figure <ref>). After this, the context sequence x_𝒞 is fed to a ViT f_θ with parameters θ, called the context encoder, to produce patch-level context representations z_𝒞 = (z_j)_j ∈ 𝒞 <cit.>. Analogously, target representations z̅_𝒯 = (z̅_j)_j ∈ 𝒯 are computed by passing x_𝒯 through a target encoder f_θ̅, whose parameters θ̅ are updated using an exponential moving average (EMA) of those of the context encoder: θ̅_t = τ_t θ̅_t-1 + (1 - τ_t) θ_t, where the EMA rate τ_t is linearly interpolated between τ_0 and τ_T, T being the total number of training steps. Finally, a predictor g_ϕ is conditioned on the context representations and the targets' positions to generate a sequence ẑ = (ẑ_j)_j ∈ 𝒯, i.e. ẑ = g_ϕ(z_𝒞, 𝒯).
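To make the EMA update above concrete, the following PyTorch-style sketch shows how a target encoder can be kept as an exponential moving average of the context encoder, and how predicted and target representations enter a smoothed L_1 objective (defined formally in the next paragraph). It is a toy illustration under simplifying assumptions: the linear modules merely stand in for the ViT encoders and the predictor, and the predictor here ignores the target positions.

import copy
import torch
import torch.nn.functional as F

# Toy stand-ins for the context encoder f_theta, the target encoder f_theta_bar
# and the predictor g_phi (placeholder modules, not the paper's architecture).
f_ctx = torch.nn.Linear(16, 8)
f_tgt = copy.deepcopy(f_ctx)
for p in f_tgt.parameters():
    p.requires_grad_(False)          # the target encoder is never trained by gradient descent
predictor = torch.nn.Linear(8, 8)

opt = torch.optim.AdamW(list(f_ctx.parameters()) + list(predictor.parameters()), lr=3e-4)

@torch.no_grad()
def ema_update(tau):
    # theta_bar_t = tau * theta_bar_{t-1} + (1 - tau) * theta_t
    for p_bar, p in zip(f_tgt.parameters(), f_ctx.parameters()):
        p_bar.mul_(tau).add_(p, alpha=1.0 - tau)

x_ctx = torch.randn(4, 16)           # toy "context" patches
x_tgt = torch.randn(4, 16)           # toy "target" patches

z_ctx = f_ctx(x_ctx)                 # context representations
with torch.no_grad():
    z_bar = f_tgt(x_tgt)             # target representations, no gradient flows through them
z_hat = predictor(z_ctx)             # the real predictor is also conditioned on target positions
loss = F.smooth_l1_loss(z_hat, z_bar)  # smoothed L1: quadratic below a threshold of 1
opt.zero_grad()
loss.backward()
opt.step()
ema_update(tau=0.996)                # assumed EMA rate for the example

In practice, the target parameters are excluded from the optimizer and only change through the EMA rule; the actual conditioning of the predictor on the target positions is described next.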
Concretely, we concatenate positional encodings[As in <cit.>, positional encodings are sinusoidal positional embeddings added to a shared learnable mask embedding.] that indicate the location of the target patches to the context representations z_𝒞 and pass the resulting sequence through g_ϕ, which is also a ViT. We then select the positions corresponding to the targets in the output sequence to build ẑ. The parameters (θ, ϕ) of the context encoder and predictor are updated through gradient descent by minimizing the distance ℒ between ẑ and the target sequence z̅_𝒯: ℒ = 1/|𝒯| ∑_j ∈ 𝒯 d(ẑ_j, z̅_j), where d denotes the smoothed L_1 distance: d(z, z') = 1/2 ‖z - z'‖_2^2 if ‖z - z'‖_1 < 1, and d(z, z') = ‖z - z'‖_1 - 1/2 otherwise. We indeed found it to work slightly better than the commonly used normalized L_2 distance <cit.> in our preliminary experiments. §.§ Sampling context and target blocks We investigate different masking strategies for sampling the context and target sets 𝒞 and 𝒯 (see Figure <ref>). Unstructured sampling. The easiest and most common strategy consists in sampling the target patch indices 𝒯 as a random subset of {1, …, N}, then using the remaining patches as context, i.e., 𝒞 = {1, …, N} ∖ 𝒯, as e.g. in <cit.>. Multi-block sampling. In I-JEPA <cit.>, the authors suggest sampling targets independently as several contiguous blocks instead of an unstructured set of patches. Then, the context is sampled as a big contiguous block from which patches already in the target set are removed. This strategy significantly improves downstream classification performances on images (+36% on ImageNet, see Table 6 of <cit.> for details); we therefore investigate its effectiveness on mel-spectrograms. Time sampling. Unlike images, where elements are spatially related both vertically and horizontally, (mel-)spectrograms depict audio events spanning multiple frequency bins. To account for this distinction, we propose sampling context and target patches exclusively in the time dimension, encompassing the entire spectrum corresponding to those times. We nevertheless divide the frequency axis into several patches so that our model can use positional encodings as indicators of the frequency range. § EXPERIMENTS §.§ Experimental setup To specifically measure the influence of the design choices we study, we keep the architecture and hyperparameters unchanged across our different experiments. We choose the ones from Masked Modeling Duo (M2D) <cit.> as it is very close to our framework. Model architecture. The audio encoders f_θ and f_θ̅ are vanilla ViT-Base <cit.> with Flash-Attention <cit.>, 12 attention heads and 12 768-dimensional layers. The predictor g_ϕ is a narrow ViT with 16 heads and 8 512-dimensional layers. The size of the patches x_j is fixed to 16 × 16. Input representation. The input audio is subsampled to 16 kHz and then converted into a log-scaled mel-spectrogram with the same settings as in <cit.> (80 mel-bins ranging from 50 to 8000 Hz). Since the encoder only accepts fixed-size inputs, we pad or crop the model's inputs to 208 frames, representing each mel-spectrogram as a sequence of (80 / 16) × (208 / 16) = 65 = N patches. We find the choice of this parameter to be critical and study its influence in section <ref>. Pre-training details. We train our model on the unbalanced train segment of AudioSet <cit.>, which contains about 2 million 10-second audio files, for 300 epochs with a base learning rate of 0.0003 and AdamW <cit.> as the optimizer. All hyperparameters are kept identical to the ones in <cit.>. Linear evaluation details.
To evaluate how general the representations learned by our model are, we train a linear classifier on top of the frozen pre-trained context encoder (without any masking) and evaluate it on eight downstream classification tasks, including various environmental, speech, and music datasets (see Table <ref>). Since our ViT encoder does not return one but a sequence of 768-d embeddings, we compute latent representations by concatenating them along the frequency axis, and then we average-pool along the time dimension to get a single 3840-d vector, which is then fed to the classifier. §.§ Results In Table <ref>, we present our method's performance on linear downstream classification tasks and investigate the impact of the duration of the audio samples used during training. We observe a strong, task-dependent correlation between training sample duration and performance: tasks like environmental sound or music genre recognition benefit from a wider context, while those emphasizing short-time semantic information (instrument, pitch or word identification) show better results with shorter segments, as observed also in <cit.>. To highlight the effect of our design choices, we compare our models to several state-of-the-art methods. We focus on fully self-supervised methods that do not rely on fine-tuning, ensembling or knowledge distillation, and we use the same code for evaluating all methods[<https://github.com/nttcslab/eval-audio-repr>], ensuring fair comparisons. Notably, ATST <cit.> consistently performs worse on all tasks except ESC-50, highlighting the superiority of masking over data augmentations for self-supervised audio representation learning. Moreover, our models emerge as top performers in two tasks and second-best in another, surpassing alternative methods. Specifically, the one with a duration parameter of d = 2.1 s outperforms all baselines in three out of eight tasks while demanding much less GPU memory. §.§ Influence of the masking strategy In Table <ref>, we investigate the influence of the various masking strategies described in section <ref>. In contrast to findings in image representation learning <cit.>, where multi-block masking greatly increases the downstream performances by favoring local connectivity, we observe that unstructured masking significantly outperforms the alternatives across all tasks for mel-spectrograms. This discrepancy can be attributed to the wide frequency range covered by audio events, rendering the advantages of local connectivity less relevant in this context. Finally, we investigate the impact of masking targets only in the latent domain: the target encoder f_θ̅ can then use the entire input to compute the target representations z̅_𝒯, which has been found beneficial in I-JEPA <cit.>. In contrast to this approach's efficacy for images, we observe in Table <ref> that not masking the target mel-spectrograms before passing them through the target encoder actually degrades the quality of the learned representations. Interestingly, though, with these settings, the multi-block strategy outperforms the unstructured one for a few downstream tasks, as observed for images <cit.>. § CONCLUSION In this study, we explore various design choices for JEPAs in the context of general-purpose audio representation learning. By examining the impact of masking strategies and target masking, we empirically demonstrate that optimal design choices differ between audio and image domains.
Specifically, the effective multi-block masking strategy introduced in <cit.>, advantageous for local connectivity in natural images, proves less suitable for mel-spectrograms. Moreover, our experiments emphasize the importance of the duration of training audio samples, showing varied effects depending on the downstream task. Notably, longer context negatively impacts representations in certain tasks. This observation reveals the need for further exploration to enhance the adaptability of audio representation learning architectures to different temporal scales. Overall, ViT architectures exhibit notable performance across various audio downstream tasks, and our findings highlight the relevance of continuing research in this direction to improve general-purpose audio representation learning. § ACKNOWLEDGMENTS This work has been funded by the ANRT CIFRE convention n°2021/1537 and Sony France. This work was granted access to the HPC/AI resources of IDRIS under the allocation 2022-AD011013842 made by GENCI. We would like to thank the reviewers and meta-reviewer for their valuable and insightful comments.
http://arxiv.org/abs/2405.09230v1
20240515101830
Reduce to the MACs -- Privacy Friendly Generic Probe Requests
[ "Johanna Ansohn McDougall", "Alessandro Brighente", "Anne Kunstmann", "Niklas Zapatka", "Hannes Federrath" ]
cs.CR
[ "cs.CR", "cs.NI" ]
J. Ansohn McDougall et al. University of Hamburg, Hamburg, Germany {johanna.ansohn.mcdougall, firstname.lastname}@uni-hamburg.de University of Padova, Padua, Italy alessandro.brighente@unipd.it Reduce to the MACs - Privacy Friendly Generic Probe Requests Johanna Ansohn McDougall Alessandro Brighente Anne Kunstmann Niklas Zapatka Hannes Federrath =================================================================================================================================================================================== This version of the contribution has been accepted for publication, after peer review, but is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record of this contribution will be published in the proceedings of the 39th IFIP TC 11 International Conference (IFIP SEC 2024) and will be available online shortly. Use of this Accepted Version is subject to the publisher's Accepted Manuscript terms of use <https://www.springernature.com/gp/open-research/policies/accepted-manuscriptterms>. Since the introduction of active discovery in Wi-Fi networks, users can be tracked via their probe requests. Although manufacturers typically try to conceal Media Access Control (MAC) addresses using MAC address randomisation, probe requests still contain Information Elements (IEs) that facilitate device identification. This paper introduces generic probe requests: By removing all unnecessary information from IEs, the requests become indistinguishable from one another, letting single devices disappear in the largest possible anonymity set. Conducting a comprehensive evaluation, we demonstrate that a large IE set contained within undirected probe requests does not necessarily imply fast connection establishment. Furthermore, we show that minimising IEs to nothing but Supported Rates would enable 82.55% of the devices to share the same anonymity set. Our contributions provide a significant advancement in the pursuit of robust privacy solutions for wireless networks, paving the way for more user anonymity and less surveillance in wireless communication ecosystems. Probe Requests, Wi-Fi Tracking, Privacy-Preserving Technologies § INTRODUCTION Wireless communication enables seamless connectivity and data exchange among devices. In particular, the ubiquitous use of active discovery has raised serious concerns regarding user privacy: Probe requests are actively sent by devices attempting to join a network and are the first step required for connection establishment between a user's device and an Access Point (AP). Although modern devices typically use MAC address randomisation and refrain from sending known Service Set Identifiers (SSIDs) unless searching for hidden networks <cit.>, the information contained within the IE of the probe request still serves as a device fingerprint <cit.>. This fingerprint can be used to track device users, estimate the number of people in a specified area and even trilaterate and thereby locate the signal origin up to an accuracy of 1.5 m <cit.>. In response to these limitations, this paper introduces generic probe requests: We propose the reduction of IE content to the bare minimum. We study the implications of generic probe requests for functionality, privacy, device identification, and tracking prevention.
To this end, we contribute the following: * We analyse the IE of probe requests and determine the minimal information required to receive probe responses. Our results show that the SSID and Supported Rates field are sufficient to receive valid probe responses. * We analyse the impact of a reduced IE on functionality, privacy and security. * We propose Generic Probe Requests and show that they provide a sufficiently large anonymity set, encompassing 82.55% of the probe requests. Thanks to the reduced content of their IEs, Generic Probe Requests resist correlation attacks undermining user privacy. Our results show that Generic Probe Requests make devices as indistinguishable as possible, defending users from attacks targeting device fingerprinting via the IE content while simultaneously having no impact on the actual connection establishment, since they are only used during active discovery. While various publications have proposed to reduce IE content and randomise sequence numbers <cit.>, to the best of our knowledge, we are the first to study the implications of minimising and unifying probe requests to the maximum. This paper is structured as follows: In <ref>, we provide a background on MAC addresses, probe requests and connection establishment in Wi-Fi networks. Additionally, we introduce the Time-to-Traffic metric. We present Related Work in <ref> and the attacker model in <ref>. <Ref> introduces the concept of generic probe requests and analyses its impact on functionality, privacy and security. Finally, <ref> concludes the paper. § BACKGROUND This section gives a background on MAC addresses, probe requests and Wi-Fi connection establishment. In addition to that, we introduce the Time-to-Traffic metric that is used in our experiments. §.§ MAC Addresses A MAC address is a 48-bit network address used in Wi-Fi networks <cit.>. It serves to identify frame senders and receivers on the data link layer. Every Network Interface Controller (NIC) is assigned a fixed and globally unique Universally Administered Address (UAA) by its manufacturer. The first three bytes, the Organisationally Unique Identifier (OUI), identify the manufacturer. The last three bytes are assigned by the manufacturer. The UAA can be substituted by a Locally Administered Address (LAA). UAAs and LAAs are distinguished by the U/L-bit contained in every MAC address, which is the second-least significant bit of the most significant byte of a MAC address <cit.>. An example of LAAs are randomised MAC addresses, used to conceal the UAA while sending probe requests to prevent tracking via the MAC address. §.§ Probe Requests A mobile device can identify known networks via active or passive scanning. During passive scanning, APs broadcast beacon frames containing their SSID every 102.4 ms. Clients scanning the beacons can initiate the connection establishment upon recognising a known SSID <cit.>. The alternative, active scanning, is more commonly used because of its reduced overhead <cit.>. Here, clients broadcast bursts of probe requests: a set of bundled requests sent within a short time and transmitted over several channels <cit.>. Modern devices typically send undirected probe requests containing an empty SSID tag. An AP receiving such a probe request responds with a probe response containing its SSID. Directed probe requests contain SSIDs of known networks. They are required to locate hidden networks, which do not advertise via beacons and only respond to directed probe requests. 
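As a small illustration of the U/L-bit convention recalled in the Background above, the following Python snippet (not taken from the paper) tests whether a given MAC address is locally administered and therefore likely randomised:

def is_locally_administered(mac: str) -> bool:
    """Return True if the U/L bit (second-least significant bit of the most
    significant byte) is set, i.e. the address is an LAA such as a randomised MAC."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

# Randomised addresses typically use one of the second hex digits 2, 6, A or E.
print(is_locally_administered("da:a1:19:00:11:22"))  # True  -> LAA, likely randomised
print(is_locally_administered("f0:9f:c2:00:11:22"))  # False -> UAA with a vendor OUI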
Devices running outdated Operating Systems (OSes), or ones misconfigured by their users, may also include SSIDs in their probe requests <cit.>. The transmission of probe requests is initiated by the MAC sublayer management entity (MLME), which invokes layer management functions. The MLME initiates a scan via the MLME-SCAN.request primitive, which also determines the content of the probe request <cit.>. This content can the be surveyed in the IE. While a large range of parameters can be transmitted via the IE <cit.>, devices typically transmit a select few IE tags. These can include, but are not limited to the Supported Rates, Extended Supported Rates and the transmission channel (DS Parameter). Other common fields are High Throughput (HT), Very High Throughput (VHT) and High Efficiency (HE) Capabilities, advertising support for the IEEE 802.11n, 802.11ac and 802.11ax standard respective. Extended Capabilities declare support for additional features beyond HT and VHT capabilities. The Interworking field enables seamless connectivity in heterogeneous network environments (IEEE 802.11u). Another common field contains vendor specific information. §.§ Connection Establishment in Wi-Fi Networks Wireless local area networks (WLANs) are based on the IEEE 802.11 standard <cit.>. Each WLAN is identified by its SSID, and mobile devices can identify known networks via network discovery (cf. <ref>). Subsequent connection establishment with an AP is done in several steps: first, IEEE 802.11 authentication is performed to grant a client access to the network. This encompasses two authentication and two association frames, in which the client and AP are paired and communication parameters and standard extensions negotiated. A successful association enables data transfer of frames of higher layers. Subsequently, WPA, WPA2 and WPA3 use Robust Security Network Association (RSNA), a suite of protocols for authentication and encryption. Upon successful completion, WLAN access is granted, and encrypted data frames can be exchanged. While the term WLAN describes the standard, the Wi-Fi Alliance instead promotes the use of the term Wi-Fi, a trademark protecting all products certified to their Wi-Fi interoperability <cit.>. Since the term Wi-Fi is commonly used in anglophone publications, we adhere to this de-facto standard in this paper. §.§ Time-to-Traffic We introduce the Time-to-Traffic (TtT) metric for measuring the duration of the connection process. With it, we measure the time between the last probe response received before the MAC address changes to the LAA, and the first data frame exchanged after connection establishment. Its duration can be observed in <ref>: Packet 6, marked in blue, is the starting point of the TtT. The endpoint of the measurement is the first data frame, packet 19. This can be explained as follows: a device probing for nearby networks usually uses a randomised MAC address. The probe requests in this stage are typically undirected and therefore do not contain an SSID. Once a device identifies a known network via a probe response, it switches its MAC address to its UAA or a fixed LAA per network. Typically, further probes are transmitted via this address, sometimes first undirected and then directed, sometimes either of the two. Subsequently, connection establishment is initiated. Some devices mostly omit these directed or undirected probe requests using the LAA, and immediately initialise the connection after receiving a probe response from a known network. 
To receive comparable results, we therefore choose to measure the TtT beginning with the last probe response to the randomised MAC address. § RELATED WORK To track a mobile device over a longer period of time, it is necessary to apply de-randomisation techniques to probe requests to correlate the bursts originating from different MAC addresses with a single device. The research on this topic can be split into two competing perspectives: attacks, and countermeasures. §.§ Attacks Linking bursts to devices requires a unique pattern in the requests, e.g. the globally unique UAA <cit.>. When a device uses a regularly changing LAA, other fingerprinting attacks are possible. The main focus then lies in fingerprinting using IEs and sequence numbers. After the theoretical risk was noticed <cit.>, a first empirical assessment was provided <cit.>. This attack was improved by taking the MAC address into account <cit.>. Subsequently, other IE fields were shown to be usable for fingerprinting devices <cit.>. Other approaches to defeat MAC address randomisation model probe request frame association in a flow network and use minimum-cost flow optimisation <cit.>, or combine IE fingerprinting with clustering approaches <cit.>. An additional source of information for device fingerprinting are timing attacks: Pattern in request transmission times were shown to be specific to the device driver <cit.>. This attack was subsequently extended to single out individual devices <cit.>. Using a timing attack, it is possible to identify a device's hardware by measuring channel switch times during scanning <cit.>. §.§ Countermeasures Albeit fingerprint techniques are improved continuously, manufacturers are slow to deploy countermeasures. The solutions are mainly academic, e.g. to rely solely on passive discovery, which decelerates the connection establishment by only 0.6 seconds <cit.>. This can be improved by accelerating passive scanning <cit.>. While this is a valid approach in most cases, hidden networks can only be reached via directed probe requests. By transmitting the SSID as a hash, combined with the current MAC address and sequence number of the packet, the use of hidden networks is possible in a privacy friendly manner <cit.>. Improvements to active scanning include the use of MAC address randomisation, avoiding unique MAC addresses by using a regularly changing, randomised LAA <cit.>. An alternative is MAC address masquerading, where the own UAA is replaced by the MAC address of a nearby device <cit.>. Fingerprinting could be further hampered by minimising transmitted IEs and randomising sequence numbers <cit.>, and introducing randomness into the sending pattern of single packets and bursts <cit.>. The use of cryptographic measures for content protection of probe requests would create an overhead of 2 seconds for the key exchange and 0.5 seconds for the transmitted probe request <cit.>. To assess the countermeasures deployed in mobile devices, a comparison of 160 mobile devices manufactured between 2012 and 2020 revealed that MAC address randomisation has been widely adopted, and sequence number randomisation is emerging. However, IE fingerprinting is still possible <cit.>. Our proposal to employ generic probe requests enhances these countermeasures: The IE minimisation maximise the anonymity set size and ensures that tracking via the content of probe requests is impeded. § ATTACKER MODEL We consider a passive attacker. 
Their objective is to track the movements of individuals within a public area, such as a campus or shopping centre. This is accomplished by eavesdropping on the probe request bursts emitted by smartphones during active scanning. The adversary has installed a sufficient number of Wi-Fi receivers in the area to triangulate their targets' positions. We assume that the attacker can link probe request bursts to individual devices by exploiting unique patterns, e.g. the UAA. If a device uses MAC address randomisation, a pattern may still be derived by fingerprinting the associated IEs. If two probe requests exhibit the same patterns, the attacker assumes that they originate from the same device. We assume the attacker is thereby capable of associating probe requests to their sending devices and can track single users. § GENERIC PROBE REQUESTS Various attacks to track devices despite MAC address randomisation utilise IE-fingerprinting techniques <cit.>. Our approach to counter them hence lies in reducing the content of the IE to the bare minimum. To gauge how far the content of a probe request can be reduced, we transmit probe requests via a USB Wi-Fi adaptor using Scapy[<https://scapy.net/>], an interactive program for packet manipulation in Python. Two more Wi-Fi adaptors are used to monitor the traffic via Wireshark on the non-overlapping channels 1 and 6 of the 2.4 GHz band, which are the most frequently used channels in the vicinity of our experimental setup. The script used to send probe requests can be found on GitHub[<https://Github.com/heddha/scapy-probe-requests>]. Observation 1: By removing more and more fields from the IE of the probe request, we determine that the SSID and Supported Rates fields are the only ones necessary for probe requests. Neither of these fields is required to contain legitimate information in order to allow for appropriate reception of probe responses. Since the content of the IE is typically used for tracking, we propose to reduce it to the bare minimum established in this experiment. To evaluate the feasibility of this approach, we subsequently measure the time required for connection establishment depending on the content of the IE, to determine whether a large IE causes faster connection establishment. §.§ IE Content Analysis In theory, IEs in probe requests allow clients to communicate their capabilities to nearby APs. This information helps the APs to respond appropriately and efficiently, ensuring that the client can connect to the network in a manner aligning with its requirements and the network's capabilities. To estimate whether transmitting capabilities using a complex IE field accelerates the connection establishment, we compare the connection establishment times of five different devices and correlate them with the size of their IE. The devices used in our tests are a mobile phone (iPhone SE 2020 running iOS 17), a Wi-Fi dongle (D-Link DWA-171 Nano Wi-Fi USB Adaptor), a laptop running an Intel AC 8265 wireless card, a single-board computer (Raspberry Pi Model 3B+), and a tablet (iPad Pro running iPadOS 15). They represent typical household devices. The tests are conducted as follows: We connect our devices to an access point, which we turn on at non-fixed intervals. We measure how long it takes the devices to reconnect to the AP.
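A minimal Scapy sketch of such a stripped-down probe request, carrying only an empty SSID element and a Supported Rates element, could look as follows. This is an illustrative reconstruction, not the referenced GitHub script; the monitor-mode interface name, the randomised source address and the advertised rates are assumptions.

from scapy.all import RadioTap, Dot11, Dot11ProbeReq, Dot11Elt, sendp

iface = "wlan0mon"                      # assumed monitor-mode interface
src = "da:a1:19:12:34:56"               # assumed locally administered (randomised) MAC

frame = (
    RadioTap()
    / Dot11(type=0, subtype=4,          # management frame, subtype 4 = probe request
            addr1="ff:ff:ff:ff:ff:ff",  # broadcast destination
            addr2=src,
            addr3="ff:ff:ff:ff:ff:ff")  # broadcast BSSID
    / Dot11ProbeReq()
    / Dot11Elt(ID=0, info=b"")                              # empty SSID element
    / Dot11Elt(ID=1, info=bytes([0x02, 0x04, 0x0B, 0x16]))  # Supported Rates: 1/2/5.5/11 Mbit/s
)

sendp(frame, iface=iface, count=5, inter=0.1, verbose=False)

Running such a script requires a wireless interface in monitor mode and appropriate privileges.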
To measure whether a complex information element causes quicker connection establishment, we utilise the TtT metric introduced in <ref>. We conduct the tests 7 times per device. The results of the experiment can be seen in <ref>. All devices transmit a minimum of Supported Rates and Extended Supported rates. The probe requests of the DWA-171 adaptor have a total frame size of 68 bytes, 12 bytes of which made up by Supported Rates and Extended Supported Rates. The TtT required is 3.71 seconds on average. The Intel 8265 laptop transmits undirected probe requests with a total frame size of 108 bytes. The TtT is 1.92 seconds on average, and the IE additionally contains channel information, HT Capabilities, and vendor-specific information. The frame size of the iPhone is 139 bytes, and its TtT 2.77 seconds. In addition to the channel and HT Capabilities, it transmits 35 bytes of Extended Capabilities and HE Capabilities. The iPad Pro transmits 152 bytes long probe request frames and has a TtT of 2.54 seconds. The largest IE body is contained in the undirected probe requests of the Raspberry Pi, with 236 bytes and a TtT of 2.73 seconds. As this comparison shows, the Intel 8265 card is by far the fastest despite the second-smallest frame size. This is also represented by its median of 1.67. The Intel 8265's standard deviation is the second smallest of all with 0.73. The iPad follows as the second-fastest, however, its standard deviation of 1.17 is higher than both the iPhone or the Raspberry Pi (with the largest frame body). The DWA-171 antenna is the slowest to establish a connection and also has the smallest frame size. With a median of 3.69, it has the smallest standard deviation of only 0.16. The results show that there does not appear to be a direct correlation between the amount of information transmitted in undirected probe requests and the speed of the connection establishment after the device discovery. The underlying reasons for fast connection establishment can likely be found in optimised implementations, and the very slow connection establishment of the DWA-171 might be attributed to the slow data transmission via a USB 2.0 Type A wireless dongle. On the other hand, <ref> also allows for a comparison of the frame sizes of directed probe requests: Directly before connection establishment between client and AP as well as after the client has switched to the LAA, it often sends one or more directed probe requests. A directed probe request from the DWA-171, the iPad, and the Raspberry Pi contains no additional information besides the SSID. The iPhone transmits 184 instead of 139 bytes in its directed probe request, with the additional bytes concerning three vendor-specific tags and the SSID. The Intel 8265 laptop transmits a directed probe request of 277 bytes, with the additional information concerning Extended Capabilities, Mesh ID, FILS request parameters, and three vendor-specific tags containing information on WPS, P2P, and multi-band operation. While this does not necessarily have to correlate, comparing the connection establishment with respect to the IE size of directed probe requests could be interesting in future work. 
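The TtT values above were derived from packet captures; a rough sketch of how such a measurement could be extracted offline with Scapy is shown below. The field accesses and the single-client assumption are simplifications (a real capture contains traffic from other devices), and the capture file name and LAA are placeholders, not the paper's actual tooling.

from scapy.all import rdpcap, Dot11, Dot11ProbeResp

def time_to_traffic(pcap_path, client_laa):
    """Approximate TtT: time from the last probe response seen before the client
    switches to its LAA until the first data frame sent from that LAA."""
    last_probe_resp = None
    for pkt in rdpcap(pcap_path):
        if not pkt.haslayer(Dot11):
            continue
        dot11 = pkt[Dot11]
        if pkt.haslayer(Dot11ProbeResp) and dot11.addr1 != client_laa:
            last_probe_resp = pkt.time          # response still addressed to the randomised MAC
        elif dot11.type == 2 and dot11.addr2 == client_laa and last_probe_resp is not None:
            return float(pkt.time - last_probe_resp)   # first data frame after the switch
    return None

print(time_to_traffic("reconnect.pcap", "aa:bb:cc:dd:ee:ff"))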
§.§ Proposition: Minimisation of IE content In order not to impede any role that directed probe requests might play in connection establishment, we propose generic probe requests, in which the IE content of undirected probe requests is reduced to the bare minimum: By modifying the IE to only contain the Supported Rates and (empty) SSID fields, user privacy can be increased immensely, while simultaneously satisfying only the minimum requirements for legitimate IE content. The modifications only have to be applied to undirected probe requests, as they constitute the majority of transmitted probe requests and are the core element of the research on deanonymisation of probe requests. Protecting them is essential to assure user privacy. The reason for limiting the protection to undirected probe requests lies in the nature of directed probe requests, which are only transmitted in three scenarios: * Directly before connection establishment, * during an existing connection, to ensure the connection is maintained with the AP with the strongest signal, and * while searching for hidden networks. In the first two cases, the directed probe requests are sent using the LAA, which is typically either stable for an extended period of time per network, or unchanging <cit.>. The transmission of the LAA and that of the SSID serve as a good enough identifier to enable trivial device tracking, which makes the protection of IE content irrelevant in this scenario. Limiting the content of undirected probe requests has no impact on connection establishment or the connection itself: Both adhere to standards that do not rely on probe requests, using the established methods explained in <ref>. In the following, we investigate the impact a reduced IE content has on the exchange of capabilities with the AP. §.§ Impact on Functionality Connection establishment as described in <ref> requires four stages: (1) active or passive device discovery, (2) authentication, (3) association and (4) RSNA. Connection establishment is also possible when using passive discovery only. Since passive discovery results in a connection just as efficient and stable as when using active discovery <cit.>, the capabilities that define the connection must be exchanged in another manner. When studying the IEEE Wi-Fi standard <cit.>, it becomes apparent that when the AP sends probe responses, as well as when the client sends the IEEE Std. 802.11 association request, "security parameters" are transmitted. A comparison of the fields contained in association requests <cit.> and probe requests <cit.> reveals that association requests and probe requests contain many overlapping fields, but while association requests contain 46 elements, probe requests contain only 34 elements. Out of these 34 elements, 15 are not present in association requests. They can be observed in <ref>. Association requests, in turn, contain elements negotiating FILS session activation and the use of S1G Relays; their use must therefore have been declared elsewhere when using passive discovery. An investigation of probe responses confirms this: of the 15 elements not present in association requests, 8 are transmitted in probe responses. In addition to 84 other fields, probe responses (and also beacons) contain a FILS Indication field, which explains why devices using passive discovery can be aware of FILS capabilities without having requested information on it.
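The element-by-element comparison above relies on knowing which IE tags a frame actually carries; a short Scapy sketch (again only illustrative, with a placeholder capture file) that lists the tag IDs of captured probe requests could look like this:

from scapy.all import rdpcap, Dot11ProbeReq, Dot11Elt

def ie_ids(frame):
    """Return the list of Information Element tag IDs carried by a frame."""
    ids = []
    elt = frame.getlayer(Dot11Elt)
    while elt is not None:
        ids.append(int(elt.ID))      # e.g. 0 = SSID, 1 = Supported Rates, 45 = HT Capabilities
        elt = elt.payload.getlayer(Dot11Elt)
    return ids

for pkt in rdpcap("probes.pcap"):
    if pkt.haslayer(Dot11ProbeReq):
        print(pkt.addr2, ie_ids(pkt))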
The fields which are exclusively present in probe requests are Request, Extended Request, and Vendor Specific Request, SSID List, FILS Request Parameters, PV1 Probe Response Option and Cluster Probe. The Cluster Probe element and the FILS Request Parameters are covered instead by the Extended Cluster Report field and the FILS Indication field in probe responses, respectively, as well as by subsequent FILS parameters in the association frame. The SSID List element bundles multiple SSIDs that the device can connect to. In practice, requesting multiple SSIDs is instead done by sending separate probe requests for each SSID. The PV1 Probe Response Option <cit.> field is a bitmap of capabilities and compatibility, containing requests to respond with the capabilities the AP has. Most fields that can be requested via this bitmap are optionally present in probe responses and association requests. While this appears to be a useful field, connection establishment is possible, as well as efficient in passive discovery, without either the PV1 Probe Response Option or the Request, Extended Request, and Vendor Specific Request tags. Therefore, these fields appear to be unnecessary in the exchange of capabilities. In fact, to test the variability in probe responses with respect to the probe requests they respond to, we monitor probe responses in three different settings: in response to (1) scripted probe requests as used in <ref>, containing only an empty SSID field and Supported Rates, (2) those of the Intel 8265, and (3) those of a Raspberry Pi, with the longest IE field of all tested devices. Regardless of the IE content of probe requests, the AP always responds with the same capabilities. To confirm this, we observe the probe responses of 10 different APs in the vicinity of our setup over an extended period of time, spanning probe responses to various passersby and a large range of devices, with the same result. We conjecture that even probe requests containing a reduced IE content will receive all necessary capabilities of the AP via its probe responses. Since all relevant information in probe requests is also exchanged via the combination of probe response and association request, and the IE content does not influence probe responses, we conjecture that reducing the IE content to the bare minimum is feasible and would reduce complexity and remove redundancy from wireless communication. To estimate the improvement that generic probe requests have on the anonymity of single users, we subsequently analyse and compare anonymity sets. §.§ Impact on Privacy: Anonymity Set Determination An anonymity set defines the number of users sharing the same identifiers, thus making them indistinguishable from one another. To estimate the privacy gain of generic probe requests, we evaluate the anonymity set size with reduced IE content, similarly to Vanhoef et al. in 2016 <cit.>. We use a subset of the Sapienza dataset <cit.>, which contains 376117 probe requests recorded at a train station in 2013. MAC address randomisation was first introduced in iOS 8 in 2014 <cit.>, and in Android 6.0 <cit.> in 2015. The dataset therefore contains very few probe requests that employ some form of MAC address protection. This makes it well suited for the anonymity set determination: To perform the same analysis on more recent data, we would first have to apply de-randomisation strategies to the data set, e.g. define an attack that maps probe requests to single devices. This has been done in several publications (cf.
<ref>.1), but is out of scope for this paper. Using the Wireshark filter , we determine which probe requests originate from globally unique MAC addresses via the U/L bit (cf. <ref>), and find that 374736 are UAAs. These make up the pruned subset used for the subsequent evaluation. These probe requests originate from 14622 distinct MAC addresses. Since the dataset originates from before the introduction of MAC address randomisation, it is safe to assume that the number of distinct MAC addresses correlates with the number of devices. The results of our evaluation can be seen in <ref>: The graph shows that reduction of the IE to contain only Supported Rates results in 19 distinct anonymity sets. Around 17.45% of the probe requests are in 18 smaller anonymity groups containing one to about 1000 devices. The remaining group unifies 82.55% of the devices in one large anonymity set. All of these devices share the same Supported Rates 2, 4, 11, and 22. <ref> shows a reduction of the content of the IE to Supported Rates and DS Parameters. This reduction results in 61 anonymity sets, the largest of which now contains 5920 devices, which amounts to 40.49%. In 225508 cases, the DS Parameters were not contained within the probe request. In the remaining cases, the DS Parameter set contained channels 1, 2, 11, or 12. Figure <ref> shows the anonymity set size in case the IE contains both Supported Rates, DS Parameters, and HT capabilities. The distribution is spread out significantly over 276 sets and the largest anonymity set contains 25.78%, or 3770, of the devices. These results show that the less information is contained within probe requests, the larger the anonymity set most devices are contained in. It is therefore necessary to reduce and unify IE content as much as possible. To determine how much exactly, we regard prior attacks and the exact IE elements they use to fingerprint devices in the following. §.§ Impact on Security: Resistance to IE-Concerned Attacks To gauge the impact that a reduction of the IE content has on the security, a comparison of the fields used in IE-concerned attacks is performed in <ref>. The table highlights the fields particularly regarded in this publication, including the Supported Rates, the DS Parameter Set and the HT Capabilities in blue, purple and green. Additionally, the rows containing publications that selected specific fields due to their distinguishing features and high entropy are highlighted in grey. The conclusion that becomes apparent in this visual comparison is that no publication that selected fields depending on high entropy chose to include the Supported Rates. The publications that included the Supported Rates incorporated them not due to high entropy, but to maximise the fingerprint. This shows that the Supported Rates play only a minor role in identifying devices. This knowledge, in combination with our calculation of the anonymity set size of reduced IE content, show that removing all tags but the Supported Rates and the empty SSID field prevents existing attacks. This way, users are protected from device tracking via the IE contained in their probe requests, while ensuring that probe responses can still be received. § CONCLUSION Reducing the content of probe requests to the minimum while enlarging the anonymity set to the maximum has been evaluated from different perspectives: Our experiments showed, that the only IE tags necessary for probe requests to receive probe responses are the Supported Rates and SSID, both of which can be empty. 
We argue to unify the content of probe requests by transmitting only Supported Rates and the SSID. We furthermore showed that devices with probe requests containing the largest IE set do not necessarily correspond to the fastest connection establishment. On the other hand, while the very large size of directed probe requests sent by the Intel 8265 and its fast connection establishment do not necessarily have to correlate, it still poses the question whether large IEs in undirected probe requests serve a purpose at all, or whether a combination of generalised and very slim undirected probe requests and, upon identifying a known network, sending information-rich directed probe requests might improve both the privacy, as well as the connection establishment speed. We argue that other factors than the size of the IE must be the cause for the efficiency of the connection establishment, and that for the sake of increased user privacy, reducing the IE content in undirected probe requests to the minimum is a valid option. To evaluate the anonymity provided by such minimisation, we calculate the anonymity set size of the Sapienza train station data set and find out, that reducing IE content to contain only Supported Rates allows for 82.55% of the devices to be contained in the same anonymity set. This is a significant discovery that would protect a large number of users with very little effort. Altogether, generic probe requests as proposed in this paper inhibit attacks targeting the IE by reducing its content to the bare minimum and thereby protect user privacy. splncs04
http://arxiv.org/abs/2405.09085v1
20240515044226
Microscopic Investigation of Ground State Properties and Shape Evolution in Osmium Isotopes
[ "Usuf Rahaman", "M. Ikram", "Ishfaq A. Rather", "Anisul Ain Usmani" ]
nucl-th
[ "nucl-th" ]
Usuf Rahaman Department of Physics, Madanapalle Institute of Technology & Science, Madanapalle-517325, India. physics.usuf@gmail.com M.Ikram Department of Physics, Harsh Vidya Mandir (PG) College Raisi, Haridwar-247671, India. Ishfaq A. Rather Institut für Theoretische Physik, Goethe Universität, 60438 Frankfurt am Main, Germany. Anisul Ain Usmani Department of Physics, Aligarh Muslim University, Aligarh-202002, India Microscopic Investigation of Ground State Properties and Shape Evolution in Osmium Isotopes Usuf Rahaman M.Ikram Ishfaq A. Rather Anisul Ain Usmani =============================================================================================== The present study focuses on investigating the shape evolution of neutron-rich even-even Osmium (Os) transitional nuclei within the range of neutron number N = 82 to N = 190. The investigation is conducted using density-dependent meson-nucleon and point-coupling models within the framework of the covariant density functional theory (CDFT). Additionally, the results obtained from the CDFT calculations are compared with those obtained using the relativistic mean-field model with a non-linear meson-nucleon interaction. The potential energy curve for Os isotopes (ranging from ^158Os to ^260Os) is analyzed in order to identify phase shape transitions, such as oblate-spherical-prolate. Furthermore, ground state bulk properties are calculated to gain insights into the structure of Os isotopes. The self-consistent calculations reveal a clear shape transition in the even-even Os isotopes, and overall, good agreement is observed among the different models employed as well as with the available experimental data. § INTRODUCTION The investigation of transitional nuclei is of paramount importance for the comprehensive understanding of nuclear structural properties, particularly with regard to their prolate- and oblate-shaped configurations. These nuclei, characterized by their dynamic and static effects, have become a focal point of both theoretical and experimental studies. Over the past few decades, various experiments <cit.> and theoretical models <cit.>, both relativistic and non-relativistic in nature, have been employed to study transitional nuclei with Z = 72-80. Numerous studies have documented the evolution of triaxial shapes in these isotopes, further enriching our knowledge in this field <cit.>. Furthermore, extensive investigations have observed the presence of superdeformed nuclei in this specific region, as evidenced by their non-spherical shapes <cit.>. The existence of deformed shapes in nuclei disrupts the inherent spontaneous symmetry and introduces non-sphericity, primarily governed by the quadrupole moment. Notably, certain nuclei in this region exhibit multiple deformations at nearly identical excitation energies, indicating the occurrence of shape co-existence. This phenomenon allows for the exploration of nuclear oscillations between different shapes. These distinct characteristics make the transitional nuclei within this region highly promising candidates for in-depth structural studies. It is worth noting that the most intriguing area within the nuclear transition range is believed to lie around the mass number A∼190, where the transformation between prolate and oblate shapes has been thoroughly investigated <cit.>.
The region comprising transitional neutron-rich nuclei has also captured significant attention within the nuclear physics community due to its implications for neutron magicity <cit.>. The isotopes of osmium play a significant role in the transitional region, where the nuclear shape undergoes a transition from spherical to prolate/oblate configurations, and subsequently returns to a spherical shape as the neutron number increases. Nuclei within this region provide a suitable framework for a critical examination of shape evolution. Specifically, we focus our investigation on the even-even osmium isotopes (^158-260Os), characterized by a high neutron count. Thus, our study expands our understanding from the beta stable region to the neutron-drip line region. It is important to note that pairing correlation is of utmost importance in these loosely bound nuclei, which exhibit intriguing phenomena such as neutron halo and skin <cit.>. Consequently, an explicit treatment of pairing correlations is essential when describing nuclei near the drip line. Density Functional Theory (DFT) is a highly versatile and powerful approach employed to evaluate various physical observables of nuclei across the entire periodic table, utilizing both relativistic and non-relativistic functionals <cit.>. This framework not only addresses the bulk properties but also enables the investigation of transitional behavior in nuclei. Notably, few studies have reported calculations on nuclear shape transitions employing relativistic functionals <cit.>. Additionally, self-consistent Hartree-Fock calculations based on Skyrme energy density functionals have been conducted <cit.>. Furthermore, even-even osmium isotopes have been investigated within the framework of Hartree-Fock Bogoliubov using Skyrme interactions <cit.>. Despite the considerable theoretical considerations in Covariant Density Functional Theory (CDFT), there remains a scarcity of research concerning shape evolution in transitional nuclei. Consequently, our current endeavor focuses on employing CDFT to explore the spherical to prolate/oblate shape transitions in osmium transitional nuclei. Our primary objective is to attain a more comprehensive and systematic understanding of these nuclear isotopes within both relativistic and non-relativistic functionals. The calculations entail Relativistic Mean-Field (RMF) theory <cit.> incorporating nonlinear meson-nucleon NL3* <cit.> interaction with BCS pairing, as well as covariant density functional theory (CDFT) <cit.> utilizing density-dependent meson-exchange (DD-ME) <cit.> and density-dependent point-coupling interactions (DD-PC) <cit.> with RHB pairing. Within this paper, we employ these methodologies to provide valuable insights into the nuclear shape transitions exhibited in the even-even isotopic chain of osmium, encompassing the mass range from ^158Os to ^260Os. As previously indicated, this mass region offers a captivating opportunity for exploring neutron magic numbers and investigating small islands of phase shape transition, which could hold energetic favorability for reaction mechanisms. This investigation yields valuable insights into various nuclear properties, including ground state energy, quadrupole deformation, radii, nuclear skin, and potential energy curve. It is worth noting that while triaxial symmetry plays a crucial role in these nuclei, our study focuses exclusively on axially deformed cases. The organization of this paper is as follows. 
Section 2 provides a comprehensive overview of the theoretical formalism employed, along with detailed explanations of the calculations employed in this study. The results of these calculations, along with their pertinent discussions, are presented in Section 3. Finally, Section 4 concludes the paper with a summary of the key findings. § THEORETICAL FRAMEWORK §.§ Covariant density functional theory In this paper, we employ two classes of covariant density functional models, namely the density-dependent meson-exchange (DD-ME) model and the density-dependent point-coupling (DD-PC) model. These models differ primarily in their treatment of interaction range. The DD-ME model incorporates a finite interaction range, whereas the DD-PC model utilizes a zero-range interaction supplemented by an additional gradient term in the scalar-isoscalar channel. By employing these distinct models, we aim to comprehensively explore the impact of interaction range on the results and implications of our study. §.§.§ The meson-exchange model Within the meson-exchange model, the nucleus is depicted as a system comprising Dirac nucleons that interact through the exchange of mesons with finite masses, resulting in interactions of finite range. <cit.>. These mesons, namely the isoscalar-scalar σ meson, the isoscalar-vector ω meson, and the isovector-vector ρ meson, constitute the essential set of meson fields required for a comprehensive quantitative description of nuclei. The meson-exchange model is characterized by the utilization of a standard Lagrangian density <cit.> that incorporates vertices with medium-dependent interactions: L = ψ̅[ γ(i∂-g_ωω-g_ρρ⃗ τ⃗-eA)-m-g_σσ] ψ + 1/2(∂σ)^2-1/2m_σ^2σ^2 -1/4Ω_μνΩ^μν+1/2m_ω^2ω ^2 - 1/4R⃗_μνR⃗^μν+1/2m_ρ ^2ρ⃗^ 2-1/4F_μνF^μν. In equation (<ref>), the symbol ψ represents the Dirac spinors, while m denotes the mass of the nucleon. The variable e corresponds to the charge of the proton, which is zero for neutrons. The quantities m_σ, m_ω, and m_ρ represent the masses of the σ meson, ω meson, and ρ meson, respectively. The corresponding coupling constants for the interaction of these mesons with the nucleons are denoted as g_σ , g_ω, and g_ρ. These coupling constants, along with the unknown meson masses, serve as parameters in the Lagrangian equation. The symbols Ω^μν, R⃗^μν and F^μν refer to the field tensors associated with the vector fields ω, ρ and the proton, respectively. Ω^μν=∂^μω^ν-∂^νω^μ, R⃗^μν=∂^μρ⃗^ν-∂^νρ⃗^μ, F^μν=∂^μA^ν-∂^νA^μ. The initial formulation of this linear model was proposed by Walecka <cit.>. However, this simplistic model lacks the capability to provide a quantitative depiction of nuclear systems  <cit.> due to its linear interaction terms in the meson fields. Notably, it exhibits a considerably exaggerated incompressibility for infinite nuclear matter <cit.> and insufficiently small nuclear deformations <cit.>. To address these limitations and achieve a more realistic portrayal of complex nuclear system properties, one can incorporate either a nonlinear self-coupling or density dependence in the coupling constants. For the inclusion of nonlinear self-coupling, an additional term needs to be incorporated into the Lagrangian: U(σ) = 1/2m_σ^2σ^2+1/3g_2σ ^3+1/4g_3σ^4. This model has demonstrated successful applications in various studies <cit.>. When it comes to density-dependent coupling constants, the dependence is defined as: g_i(ρ) = g_i(ρ_ sat)f_i(x) for i=σ, ω, ρ Here, i can represent any of the three mesons: σ, ω and ρ. 
Notably, there are no nonlinear terms present for the σ meson, which implies that g_2 = g_3 = 0. The density dependence of the σ and ω mesons is determined by the function: f_i(x)=a_i [1+b_i(x+d_i)^2]/[1+c_i(x+d_i)^2]. For the ρ meson, the density dependence is given by: f_ρ(x)=exp(-a_ρ(x-1)). In the above equations, x represents the ratio between the baryonic density ρ at a specific location and the baryonic density at saturation ρ_sat in symmetric nuclear matter. To ensure consistency, the parameters in Eq. (<ref>) are constrained as follows: f_i(1)=1, f_σ''(1)=f_ω''(1), and f_i''(0)=0. These constraints effectively reduce the number of independent parameters for the density dependence to three. In this study, the DD-ME1 <cit.> and DD-ME2 <cit.> density-dependent meson-exchange relativistic energy functionals are utilized, and their details can be found in Table <ref>. §.§.§ Point-coupling model The Lagrangian associated with the density-dependent point-coupling model <cit.> can be expressed as L = ψ̅(iγ·∂-m) ψ - 1/2α_S(ρ̂)(ψ̅ψ)(ψ̅ψ) -1/2α_V(ρ̂)(ψ̅γ^μψ)(ψ̅γ_μψ) - 1/2α_TV(ρ̂)(ψ̅τ⃗γ^μψ) (ψ̅τ⃗γ_μψ) - 1/2δ_S(∂_vψ̅ψ) (∂^vψ̅ψ) - eψ̅γ· A (1 - τ_3)/2ψ. The Lagrangian encompasses the components essential for the density-dependent point-coupling model: it incorporates the free-nucleon Lagrangian, the point-coupling interaction terms, and the coupling between protons and the electromagnetic field. The derivative terms in Eq. (<ref>) account for the dominant effects originating from finite-range interactions, which are significant in finite nuclei. Similar to the DD-ME models, this formulation incorporates interactions involving isoscalar-scalar, isoscalar-vector, and isovector-vector couplings. In this investigation, the density-dependent point-coupling interactions DD-PC1 <cit.> and DD-PCX <cit.> were adopted. These interactions are presented in Table <ref>. §.§ Axially deformed relativistic mean-field model The relativistic mean-field (RMF) theory has emerged as a highly successful approach for addressing the nuclear many-body problem and elucidating various nuclear phenomena across the entire range of the periodic table, from the proton drip line to the neutron drip line <cit.>. One notable advantage of the RMF theory, compared to its non-relativistic counterpart, is its inherent ability to naturally reproduce the spin-orbit splitting. This feature plays a pivotal role in explaining the formation of closed-shell structures in atomic nuclei <cit.>. The foundation of the RMF theory lies in a Lagrangian density that encompasses nucleons interacting with σ-, ω- and ρ-meson fields. Additionally, the inclusion of the photon field A_μ is crucial for adequately accounting for the Coulomb interaction among protons. The Lagrangian density of the relativistic mean-field theory can be expressed as <cit.>, L = ψ̅_i{iγ^μ∂_μ-M}ψ_i +1/2∂^μσ∂_μσ -1/2m_σ^2σ^2 -1/3g_2σ^3 - 1/4g_3σ^4-g_sψ̅_iψ_iσ -1/4Ω^μνΩ_μν+1/2m_ω^2V^μV_μ - g_ωψ̅_iγ^μψ_i V_μ-1/4B⃗^μνB⃗_μν +1/2m_ρ^2R⃗^μR⃗_μ-1/4F^μνF_μν - g_ρψ̅_iγ^μτ⃗ψ_iR⃗^μ-eψ̅_iγ^μ(1-τ_3i)/2ψ_iA_μ . Here, the masses of the nucleons and of the σ-, ω- and ρ-mesons are represented by M, m_σ, m_ω and m_ρ, respectively. The fields associated with the σ-, ω- and ρ-mesons are denoted by σ, V_μ and R_μ. The coupling constants for the σ-, ω- and ρ-mesons, as well as the electromagnetic interaction, are given by g_s, g_ω, g_ρ and e^2/4π=1/137, respectively.
Additionally, the self-interaction coupling constants for σ-mesons are denoted by g_2 and g_3. By employing the classical variational principle, the field equations governing the behavior of nucleons and mesons can be derived. Specifically, the Dirac equation for nucleons, which encapsulates their dynamics, can be written as {-iα▽ + V(r_⊥,z)+β M^†}ψ_i=ϵ_iψ_i. Where M^†=M+S(r_⊥,z)=M+g_σσ^0(r_⊥,z), and V(r_⊥,z)=g_ωV^0(r_⊥,z)+g_ρτ_3R^0(r_⊥,z)+ e(1-τ_3)/2A^0(r_⊥,z). The Klein-Gordon equations for mesons are given by {-△+m^2_σ}σ^0(r_⊥,z) = -g_σρ_s(r_⊥,z) - g_2σ^2(r_⊥,z)-g_3σ^3(r_⊥,z) , {-△+m^2_ω}V^0(r_⊥,z) = g_ωρ_v(r_⊥,z) , {-△+m^2_ρ}R^0(r_⊥,z) = g_ρρ_3(r_⊥,z) , -△ A^0(r_⊥,z) = eρ_c(r_⊥,z). The scalar and vector densities for the σ- and ω-fields are denoted as ρ_s and ρ_v, respectively. These densities represent the distribution and magnitude of the σ-meson and ω-meson fields within the nuclear environment. Mathematically, these densities are expressed as ρ_s(r) = ∑_ i=n,pψ̅_i(r)ψ_i(r) , ρ_v(r) = ∑_i=n,pψ^†_i(r)ψ_i(r) . The vector density ρ_3(r) for ρ-field and charge density ρ_c(r) are expressed by ρ_3(r) = ∑_i=n,pψ^†_i(r)γ^0τ_3iψ_i(r) , ρ_c(r) = ∑_i=n,pψ^†_i(r)γ^0(1-τ_3i)/2ψ_i(r) . To describe the fundamental characteristics of nuclei in their ground state, a static solution is obtained by solving the equations of motion. These equations, which are nonlinear and coupled, capture the interactions and dynamics of nucleons and mesons within the nuclear system. The solution is achieved through a self-consistent approach, where the equations are solved iteratively to ensure internal consistency. In this particular study, an axially deformed harmonic oscillator basis is employed for both fermions and bosons. The basis size is chosen as N_F = N_B = 20, indicating the number of states included in the basis for fermions and bosons, respectively. One of the important quantities of interest in this study is the calculation of radii. These radii are determined from the corresponding densities obtained from the solution. The densities provide information about the spatial distribution and size of the nuclear system, and the radii serve as quantitative measures of these distributions. The rms radii of proton (r_p), neutron (r_n) and nuclear matter (r_m) are given by, ⟨ r_p^2⟩ = 1/Z∫ r_p^2d^3rρ_p , ⟨ r_n^2⟩ = 1/N∫ r_n^2d^3rρ_n , ⟨ r_m^2⟩ = 1/A∫ r_m^2d^3rρ , The rms charge radius of a nucleus can be determined by utilizing the proton rms radius and accounting for the finite size of the proton. This is achieved by employing the relation r_c = √(r_p^2 + 0.64). To extract the quadrupole deformation parameter (β_2) of the nucleus, the calculated quadrupole moments of neutrons and protons are used as Q = Q_n + Q_p = √(16π/5)(3/4π AR^2β_2), where R = 1.2A^1/3. The total energy of the nuclear system is composed of various contributions. E_total = E_part+E_σ+E_ω+E_ρ+E_c+E_pair+E_c.m., Firstly, the sum of single particle energies of the nucleons is denoted as E_part. Additionally, there are contributions from the meson fields, namely E_σ, E_ω, and E_ρ. The Coulomb field contributes to the energy as E_c, and the pairing energy is represented by E_pair. Furthermore, the center-of-mass energy is denoted as E_cm. The non-linear NL3* parameter set <cit.>, which is a modified version of the successful NL3 parameter set <cit.>, is utilized throughout the calculations. This parameter set provides the necessary parameters for the model being employed. 
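As a small numerical illustration of the observable definitions above, the charge radius follows from r_c = √(r_p^2 + 0.64) and the deformation parameter from inverting Q = √(16π/5)(3/4π AR^2β_2) with R = 1.2A^1/3. The sketch below is ours; the input values are placeholders rather than results of this work:

```python
import numpy as np

def charge_radius(r_p):
    """Charge radius (fm) from the point-proton rms radius: r_c = sqrt(r_p^2 + 0.64)."""
    return np.sqrt(r_p**2 + 0.64)

def beta2_from_quadrupole(Q, A):
    """Quadrupole deformation beta_2 from the total quadrupole moment Q (fm^2),
    inverting Q = sqrt(16*pi/5) * (3/(4*pi)) * A * R**2 * beta_2 with R = 1.2*A**(1/3) fm."""
    R = 1.2 * A ** (1.0 / 3.0)
    return Q / (np.sqrt(16.0 * np.pi / 5.0) * (3.0 / (4.0 * np.pi)) * A * R**2)

# Placeholder inputs, purely illustrative (not taken from the tables of this work)
print(charge_radius(5.40))                 # ~5.46 fm
print(beta2_from_quadrupole(600.0, 182))   # ~0.09 (dimensionless)
```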
For a detailed understanding of the formalism and numerical techniques used, references such as Refs. <cit.> and other relevant literature should be consulted. In this study, pairing correlations are treated using the BCS (Bardeen-Cooper-Schrieffer) approach <cit.>. Although the BCS approach may not be appropriate for light neutron-rich nuclei, the nuclei considered in this study are not in that category, ensuring the reliability of the results obtained within the framework of the relativistic mean-field (RMF) theory. § RESULTS AND DISCUSSION §.§ Binding energy The binding energy is a crucial parameter for studying the stability of atomic nuclei. The binding energies of the ground states of the even-even osmium isotopes ^160-264Os are shown in Table <ref> and plotted in Fig. <ref>. These energies are calculated within the CDFT with the effective density-dependent interactions DD-ME1 <cit.>, DD-ME2 <cit.>, DD-PC1 <cit.>, and DD-PCX <cit.>, and also with the RMF+BCS model using the nonlinear NL3* <cit.> interaction. Additionally, Fig. <ref> depicts the calculated binding energies for these isotopes. Since experimental data for neutron-rich drip-line nuclei are limited, it is necessary to compare the results obtained from different theoretical approaches <cit.> in order to assess the model's predictability and reliability. Comparisons were made with other theoretical frameworks: the HFB+THO approach in conjunction with various nuclear interactions, including Skyrme SLy4, SkP, and SkM* <cit.>, alongside CHFB+5DCH calculations based on the Gogny D1S interaction <cit.>, which exhibited excellent agreement with both the RMF and CDFT outcomes, extending towards the neutron drip-line region. The calculated binding energies exhibit a consistent agreement across the entire isotopic range. In particular, the results obtained using the SkP functional demonstrate a strong alignment with our findings, extending even to the neutron drip-line region. Conversely, the SkM* investigations indicate that isotopes in the neutron-rich region are slightly more tightly bound than our predictions, whereas the SLy4 and CHFB data suggest the opposite trend. The results obtained within the study range display good consistency among the various relativistic and non-relativistic functionals, as well as with the available experimental data <cit.>. In general, the binding energy per nucleon (BE/A) increases as the number of neutrons (or the mass number A) increases, reaching a maximum value at A = 182 for nearly all interactions. This trend is consistent with experimental observations. Consequently, it is concluded that the isotope ^182Os is the most stable in the entire isotopic chain. Comparison of the total binding energies (in MeV) of the ground states for ^158-266Os isotopes calculated using different effective interactions (DD-ME1 <cit.>, DD-ME2 <cit.>, DD-PC1 <cit.>, DD-PCX <cit.>, and NL3* <cit.>) and with available experimental data <cit.>. The results are also compared with calculations using HFB models.
<cit.> Nuclei DD-ME1 DD-ME2 DD-PC1 DD-PCX NL3* EXP SkP SkM* SLy4 CHFB ^156Os 1190.90 1188.57 1188.82 1186.16 1195.96 – – – – – ^158Os 1221.14 1218.82 1219.25 1216.94 1224.09 – – – – – ^160Os 1242.88 1240.44 1240.66 1237.69 1246.55 – – – – 1245.14 ^162Os 1265.02 1262.52 1262.49 1259.15 1268.13 1262.47 1252.25 1261.62 1261.88 1265.90 ^164Os 1286.78 1284.22 1284.14 1280.62 1288.56 1284.66 1273.42 1282.98 1283.02 1286.93 ^166Os 1307.78 1305.18 1305.20 1301.58 1310.12 1305.81 1293.73 1303.71 1303.37 1308.28 ^168Os 1328.00 1325.39 1325.49 1321.88 1329.82 1326.52 1313.80 1323.77 1322.99 1328.20 ^170Os 1347.54 1344.97 1345.12 1341.43 1348.75 1346.59 1333.75 1343.28 1341.93 1347.64 ^172Os 1366.34 1363.84 1364.13 1360.42 1368.62 1366.05 1353.94 1362.30 1361.27 1366.82 ^174Os 1386.15 1383.84 1384.43 1380.52 1387.63 1384.95 1373.24 1380.82 1380.18 1385.35 ^176Os 1404.50 1402.21 1402.98 1398.95 1405.72 1403.19 1391.97 1399.12 1398.38 1403.27 ^178Os 1422.06 1419.87 1420.77 1416.50 1423.13 1420.78 1410.08 1416.62 1415.36 1420.48 ^180Os 1439.12 1437.10 1438.14 1433.66 1439.83 1437.74 1427.62 1432.87 1431.77 1436.86 ^182Os 1455.14 1453.24 1454.54 1450.31 1455.48 1454.13 1444.31 1448.76 1447.59 1452.62 ^184Os 1470.02 1468.38 1469.83 1465.75 1470.21 1469.92 1460.47 1464.21 1462.69 1467.84 ^186Os 1484.24 1482.67 1484.35 1480.52 1484.21 1484.81 1475.92 1479.21 1477.43 1482.32 ^188Os 1497.97 1496.49 1498.37 1494.75 1497.51 1499.09 1490.77 1493.75 1491.72 1496.23 ^190Os 1511.24 1509.79 1512.04 1508.03 1510.40 1512.80 1505.47 1507.80 1505.03 1510.03 ^192Os 1524.36 1523.05 1525.58 1521.39 1523.30 1526.12 1520.10 1521.30 1517.78 1523.00 ^194Os 1537.08 1535.96 1538.79 1534.63 1535.59 1538.81 1532.93 1534.25 1530.09 1535.38 ^196Os 1548.13 1546.99 1549.80 1545.42 1546.58 1550.80 1546.97 1546.91 1542.18 1547.07 ^198Os 1559.81 1558.63 1562.16 1557.54 1557.50 1562.42 1559.72 1559.34 1554.55 1558.74 ^200Os 1571.93 1570.99 1574.68 1570.10 1569.11 1573.60 1572.78 1571.95 1567.04 1569.47 ^202Os 1584.15 1583.26 1586.67 1582.11 1580.48 1584.08 1585.78 1584.52 1579.54 1578.59 ^204Os 1588.93 1587.96 1591.75 1586.29 1585.59 – 1592.33 1591.39 1583.78 1585.21 ^206Os 1593.82 1592.82 1596.97 1590.65 1590.68 – 1599.08 1598.10 1588.35 1590.07 ^208Os 1599.51 1598.52 1602.80 1595.96 1596.84 – 1605.54 1604.32 1593.12 1595.06 ^210Os 1605.24 1604.25 1608.80 1601.36 1602.87 – 1611.54 1610.04 1597.95 1599.77 ^212Os 1610.77 1609.79 1614.62 1606.59 1608.61 – 1617.86 1615.30 1602.71 1604.35 ^214Os 1615.99 1615.04 1620.27 1611.64 1613.94 – 1624.44 1620.69 1607.33 1608.89 ^216Os 1621.26 1620.51 1626.77 1617.35 1619.07 – 1631.21 1625.98 1612.50 1613.49 ^218Os 1627.01 1626.37 1633.04 1623.12 1626.20 – 1638.37 1631.13 1617.79 1618.23 ^220Os 1632.42 1631.93 1638.97 1628.56 1632.46 – 1645.16 1636.12 1623.10 1622.56 ^222Os 1637.33 1636.88 1644.47 1633.31 1637.72 – 1651.58 1640.93 1627.78 1626.58 ^224Os 1641.91 1641.56 1649.75 1637.93 1642.84 – 1657.50 1645.53 1631.78 1630.19 ^226Os 1646.11 1645.91 1654.68 1642.38 1647.76 – 1662.68 1649.70 1635.18 1633.41 ^228Os 1649.73 1649.59 1658.58 1645.67 1651.93 – 1667.79 1653.45 1637.98 1636.26 ^230Os 1652.87 1652.81 1662.17 1648.72 1655.84 – 1672.43 1656.77 1640.77 1638.72 ^232Os 1655.57 1655.63 1665.50 1651.54 1659.12 – 1676.94 1659.83 1643.70 1640.82 ^234Os 1657.97 1658.14 1668.65 1654.15 1661.80 – 1681.53 1662.57 1646.51 1642.62 ^236Os 1660.15 1660.46 1671.60 1656.67 1664.31 – 1685.71 1664.93 1647.93 1644.08 ^238Os 1661.99 1662.41 1673.95 1658.61 1666.48 – 1688.90 1667.03 1649.45 1645.18 ^240Os 1663.52 1664.06 
1675.72 1659.90 1668.69 – 1691.94 1668.93 1650.62 1645.92 ^242Os 1664.84 1665.50 1677.23 1660.92 1671.16 – 1694.86 1670.64 1651.56 1646.40 ^244Os 1665.83 1666.62 1678.42 1661.63 1673.39 – 1697.80 1672.13 1652.42 1646.64 ^246Os 1666.37 1666.70 1679.32 1659.78 1673.96 – 1699.14 1673.52 1653.30 1646.64 ^248Os 1667.41 1667.86 1680.58 1660.86 1676.05 – 1702.17 1674.74 1653.70 1646.45 ^250Os 1668.14 1668.70 1681.87 1661.85 1677.29 – 1704.83 1675.79 1653.35 1646.40 ^252Os 1668.73 1669.40 1682.47 1662.74 1678.96 – 1707.58 1676.59 1653.85 1645.91 ^254Os 1669.50 1670.41 1683.59 1663.19 1680.47 – 1710.32 1677.30 1654.60 1645.38 ^256Os 1670.38 1671.36 1685.00 1664.01 1682.30 – 1712.08 1678.31 1654.66 1644.68 ^258Os 1671.52 1672.69 1686.70 1665.42 1684.30 – 1714.25 1679.77 1655.17 1643.74 ^260Os 1672.75 1674.18 1688.43 1666.65 1684.52 – 1717.44 – – 1641.23 ^262Os 1669.92 1671.30 1684.88 1662.41 1682.07 – 1703.71 – – 1638.69 ^264Os 1667.03 1668.37 1681.31 1658.08 1680.20 – 1703.65 – – – ^266Os 1664.10 1665.39 1677.72 1653.66 1678.60 – – – – – §.§ Quadrupole deformation Nuclear shape and deformation are significant physical parameters that play a crucial role in determining properties such as nuclear size, isotopic shift, and quadrupole moments. Table <ref> provides the quadrupole deformation parameters (β_2) obtained from various relativistic and non-relativistic interactions. To provide a clearer understanding of the shape transition, Fig. <ref> presents the variation of the deformation parameter with neutron number. In order to assess the shape transition more comprehensively, we have compared the results obtained from the Relativistic Mean Field (RMF) approach with those obtained from non-relativistic functionals such as Skyrme SkP, SLy4, and SkM* <cit.>, as well as CHFB+5DCH <cit.> calculations based on the Gogny D1S interaction. These results were also compared with available experimental data <cit.>. Throughout the considered isotopic chain, spanning from the neutron-deficient to the neutron-rich region, transitions in shape from spherical to prolate, prolate to oblate, and then oblate to spherical were observed. This indicates the transitional behavior of the osmium isotopes. In Os nuclei ranging from N = 112 to N = 124, both the experimental data and our calculated results based on DD-ME1, DD-ME2, DD-PC1, DD-PCX, and NL3* exhibit consistent trends. However, deviations arise when comparing these trends with predictions derived from HFB theory utilizing the SkM* functional. It is important to note that nuclei near the drip-line region exhibit a spherical shape. Comparison of the Quadrupole Deformation Parameter (β_2) for the ground states of ^158-266Os isotopes calculated using different effective interactions (DD-ME1, DD-ME2, DD-PC1, DD-PCX, and NL3*).
The results are compared with available experimental data <cit.> and calculations performed using HFB models <cit.> Nuclei DD-ME1 DD-ME2 DD-PC1 DD-PCX NL3* EXP SkP SkM* SLy4 ^156Os 0.067 0.070 0.043 0.064 -0.077 – – – – ^158Os 0.000 0.000 0.000 0.000 0.000 – – – – ^160Os -0.065 -0.067 -0.058 -0.060 0.000 – – – – ^162Os 0.110 0.111 0.107 0.114 0.077 – -0.085 0.074 0.112 ^164Os 0.134 0.134 0.136 0.145 -0.092 – 0.141 0.124 0.145 ^166Os 0.157 0.157 0.158 0.164 0.139 – 0.166 0.150 0.165 ^168Os 0.167 0.168 0.171 0.177 0.164 – 0.187 0.169 0.179 ^170Os 0.173 0.174 0.181 0.190 0.187 – 0.208 0.186 0.195 ^172Os 0.185 0.187 0.196 0.208 0.305 0.224 0.266 0.201 0.278 ^174Os 0.327 0.326 0.326 0.319 0.323 0.246 0.284 0.218 0.294 ^176Os 0.338 0.336 0.337 0.331 0.331 – 0.288 0.260 0.302 ^178Os 0.332 0.331 0.334 0.332 0.329 0.246 0.276 0.265 0.300 ^180Os 0.322 0.320 0.322 0.307 0.319 0.248 0.256 0.242 0.278 ^182Os 0.303 0.299 0.302 0.284 0.304 0.236 0.240 0.225 0.251 ^184Os 0.286 0.284 0.285 0.263 0.287 0.208 0.226 -0.188 0.236 ^186Os 0.264 0.262 0.261 0.240 0.269 0.200 0.214 -0.176 0.223 ^188Os 0.243 0.241 0.234 0.226 0.245 0.184 0.200 -0.166 0.207 ^190Os 0.190 0.190 0.191 0.193 0.202 0.177 0.184 -0.157 0.187 ^192Os 0.167 0.167 0.168 0.167 0.174 0.164 0.166 -0.145 0.162 ^194Os 0.149 0.149 0.150 0.148 0.150 – -0.150 -0.129 -0.143 ^196Os 0.121 0.122 0.119 0.119 0.124 – -0.132 -0.111 -0.122 ^198Os -0.092 -0.092 -0.094 -0.094 -0.095 – -0.104 -0.088 -0.096 ^200Os -0.072 -0.070 -0.072 -0.070 -0.064 – -0.066 0.000 -0.061 ^202Os 0.000 0.000 0.000 0.000 0.000 – 0.000 0.000 0.000 ^204Os 0.000 0.000 0.000 0.000 0.002 – 0.000 0.000 0.000 ^206Os -0.055 -0.057 -0.050 -0.055 0.071 – -0.033 0.000 0.024 ^208Os 0.109 0.110 0.100 0.106 0.117 – -0.046 0.000 0.076 ^210Os 0.135 0.135 0.127 0.130 0.144 – -0.064 0.000 0.120 ^212Os 0.155 0.154 0.148 0.149 0.163 – 0.134 0.116 0.143 ^214Os 0.170 0.169 0.182 0.170 0.178 – 0.165 0.142 0.166 ^216Os 0.252 0.253 0.251 0.242 0.189 – 0.188 0.162 0.217 ^218Os 0.279 0.278 0.275 0.268 0.299 – 0.234 0.180 0.243 ^220Os 0.299 0.299 0.293 0.286 0.326 – 0.252 0.196 0.258 ^222Os 0.305 0.303 0.300 0.290 0.329 – 0.262 0.217 0.266 ^224Os 0.311 0.308 0.307 0.294 0.332 – 0.265 0.236 0.270 ^226Os 0.318 0.315 0.313 0.300 0.338 – 0.263 0.240 0.270 ^228Os 0.319 0.316 0.313 0.300 0.339 – 0.255 0.238 0.267 ^230Os 0.318 0.314 0.312 0.299 0.340 – 0.245 0.233 0.263 ^232Os 0.311 0.308 0.304 0.292 0.339 – 0.237 0.225 0.256 ^234Os 0.296 0.293 0.289 0.275 0.329 – 0.230 0.216 0.245 ^236Os 0.281 0.276 0.275 0.259 0.310 – 0.220 0.207 0.231 ^238Os 0.266 0.263 0.263 0.248 0.292 – 0.208 0.197 0.217 ^240Os 0.252 0.249 0.249 0.234 0.274 – 0.194 0.187 0.205 ^242Os 0.239 0.236 0.235 0.219 0.260 – 0.182 0.177 0.192 ^244Os 0.224 0.222 0.219 0.204 0.251 – 0.170 -0.151 0.177 ^246Os -0.150 -0.151 0.186 -0.157 0.229 – -0.162 -0.141 0.162 ^248Os -0.137 -0.137 0.157 -0.142 -0.147 – -0.150 -0.131 0.146 ^250Os -0.120 -0.121 0.138 -0.130 -0.128 – -0.134 -0.120 0.125 ^252Os -0.100 -0.101 0.113 -0.117 -0.104 – -0.119 -0.106 -0.115 ^254Os 0.083 0.086 -0.097 0.084 -0.094 – -0.102 -0.088 -0.100 ^256Os 0.058 0.062 -0.066 0.018 0.085 – -0.079 0.000 -0.074 ^258Os 0.000 0.000 0.000 0.000 0.069 – -0.039 0.000 0.000 ^260Os 0.000 0.000 0.000 0.000 0.031 – 0.000 – – ^262Os 0.002 0.002 0.001 0.004 0.058 – 0.450 – – ^264Os 0.005 0.005 0.003 0.006 0.093 – 0.447 – – ^266Os 0.008 0.008 0.006 0.007 0.125 – – – – §.§ Two-neutron separation energy (S_2n), shell gap δ S_2n and neutron pairing energy (E_pair,n) The identification of shell closures in atomic 
nuclei is crucial for understanding nuclear structure. Two important observables used in this regard are the two-neutron separation energy (S_2n) and the two-neutron shell gap (δ S_2n). The δ S_2n, also known as the S_2n differential, is calculated using the following relation: δ S_2n = [S_2n(N,Z) - S_2n(N+2,Z)]/2. We present S_2n and δ S_2n for the even-even ^158-266Os isotopes in Fig. <ref>. To validate our calculations, we compare our results with available experimental data and also with predictions from other theoretical models. Specifically, we compare our Relativistic Mean Field (RMF) values with those obtained from the HFB+THO model using Skyrme SLy4, SkP, and SkM* functionals <cit.>, as well as the CHFB+5DCH model based on the Gogny D1S interaction <cit.>. In Fig. <ref>, we observe a sudden decrease in the two-neutron separation energy (S_2n) and pronounced kinks in δ S_2n at neutron numbers N=82, N=126, and N=184. These findings are consistent with the well-established neutron shell closures at N=82 and N=126, and with the expected closure at N=184. Additionally, we note the presence of kinks at N=118 and N=168 in the relativistic interactions (DD-ME1, DD-ME2, DD-PC1, DD-PCX, NL3*), suggesting a possible neutron shell or subshell closure at these neutron numbers. However, non-relativistic functionals such as SkP, SkM*, SLy4, and CHFB do not exhibit such kinks at N=118 and N=168. Previous research has proposed the existence of several deformed subshell closures <cit.>, and the idea of a neutron shell closure has also been linked to the disappearance of the neutron pairing energy <cit.>. Supporting the notion of shell closures, the neutron pairing energy (E_pair,n) vanishes at neutron numbers 82, 126, and 184, as shown in Fig. <ref>. We found that E_pair,n becomes zero at N = 118, matching the expected shell behavior seen in Fig. <ref> with S_2n and δ S_2n. However, at N = 168, E_pair,n does not reach zero, which challenges the idea of a shell closure at this neutron number. This discrepancy suggests that the shell structure at N = 168 is different from what is typically seen at closed neutron shells. This highlights the complexity of nuclear shell evolution, especially in heavy, neutron-rich nuclei, and suggests the need for further study. These observations provide valuable insights into the nuclear structure of the Os isotopes and contribute to our understanding of shell closures in atomic nuclei. §.§ Neutron, proton and charge radii Nuclear radii provide crucial information about the size and distribution of neutrons and protons within a nucleus. In particular, the neutron skin thickness, which is the difference between the neutron and proton radii, becomes significant in neutron-rich nuclei where an excess of neutrons is present. The neutron skin thickness is directly related to fundamental properties of nuclear matter and helps bridge the understanding of finite nuclear systems to infinite nuclear matter <cit.>. In this subsection, we aim to investigate the correlation between nuclear shape and the distribution of nuclear matter. We compare theoretical results obtained from different models and also compare them with available experimental data. The charge radii (r_c) are presented in Fig. <ref>, while the root-mean-square (RMS) nucleon radii (r_n and r_p) and the neutron skin thickness (Δ r = r_n - r_p) for the considered isotopic chains are shown in Fig. <ref>. For even-even Os isotopes, the neutron radius exhibits an increasing trend with an increasing number of neutrons.
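As a brief aside on the shell-gap analysis above, both quantities are simple finite differences of the calculated binding energies: S_2n follows from the standard relation S_2n(Z,N) = BE(Z,N) - BE(Z,N-2), and δ S_2n from the relation given at the start of this subsection. The sketch below is ours; the binding energies are placeholders, not the values of Table <ref>:

```python
def s2n(be, Z, N):
    """Two-neutron separation energy (MeV): S_2n(Z, N) = BE(Z, N) - BE(Z, N-2)."""
    return be[(Z, N)] - be[(Z, N - 2)]

def shell_gap(be, Z, N):
    """Two-neutron shell gap (MeV): delta S_2n = [S_2n(Z, N) - S_2n(Z, N+2)] / 2."""
    return (s2n(be, Z, N) - s2n(be, Z, N + 2)) / 2.0

# Placeholder binding energies (MeV) keyed by (Z, N); illustrative only
be = {(76, 124): 1497.0, (76, 126): 1510.0, (76, 128): 1517.0}
print(s2n(be, 76, 126))        # 13.0 MeV
print(shell_gap(be, 76, 126))  # (13.0 - 7.0) / 2 = 3.0 MeV
```

A pronounced peak in δ S_2n computed this way is what flags a shell closure in the discussion above.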
Returning to the radii, the dips observed at neutron numbers N=82, 126, and 184 indicate the closed-shell behavior of these nuclei. Due to variations in deformation, the increments in the different radii for the even-even Os isotopes are not entirely smooth. We compare our calculated values of the RMS charge radii (r_c) with the available experimental data <cit.>. The comparison reveals a good agreement between our theoretical results and the experimental measurements. Fig. <ref> demonstrates that the neutron skin thickness (Δ r) increases monotonically with the number of neutrons. This observation further supports our understanding of the relationship between neutron-rich nuclei and their matter distribution. §.§ Potential energy curve The shape of a nucleus is one of its most fundamental features and is described by its quadrupole deformation. Most nuclei exhibit either a spherical or an ellipsoidal shape. In the axially symmetric case, a deformed nucleus is characterized by the quadrupole deformation parameter β_2. Here, we employed CDFT with the density-dependent effective interactions DD-ME1 <cit.>, DD-ME2 <cit.>, DD-PC1 <cit.>, and DD-PCX <cit.> to perform quadrupole-constrained calculations and examine the potential energy curves of the even-even Os chain. The potential energy curves for the considered isotopic chain are plotted in Figs. <ref>-<ref>. One can see that the results do not differ between the density-dependent meson-exchange and point-coupling models. It can also be noted that the locations of the spherical, oblate, and prolate minima appear at the same deformation, regardless of the force or model used. However, the relative energies can be slightly different depending on the model. In general, the energy barriers between deformations obtained with all the effective interactions are found to be comparable. The ground state of ^158Os is revealed to be spherical, confirming the shape expected at the well-known magic neutron number N = 82. As the neutron number increases, the global minima move from the spherical to the prolate region. ^162Os and ^164Os seem to exist in both slightly prolate and oblate shapes, as the corresponding minima have an energy difference of about 1.0 MeV or less. In the ^166-194Os isotopes, we see oblate shapes with an excited state about 2.5 MeV higher than the global minima. Additionally, Figs. <ref> clearly show that there is a very strong competition between different low-lying configurations corresponding to different intrinsic deformations; this situation is referred to as shape coexistence. Among the considered nuclei, ^196Os has two separate minima, at β_2=0.1 (prolate) and β_2=-0.1 (oblate), with a small energy difference of about 0.5 MeV. It is worth mentioning that a clear shape evolution from spherical to prolate as well as oblate shapes is observed with increasing neutron number. A shape transition can be seen from deformed to spherical through flat minima. The ground state of ^202Os has a sharp global minimum at β_2=0.0, reflecting the N = 126 neutron magicity. Again, in the neutron-rich region, the sphericity disappears and the global minima move to the well-deformed prolate side. The ^214-238Os isotopes show well-deformed prolate shapes, and the prolate minima lie 2.0-3.0 MeV deeper than the oblate minima in the DD-ME1 and DD-ME2 calculations, while the DD-PC1 and DD-PCX interactions give a 2.0-4.0 MeV prolate-oblate energy difference for the first intrinsic excited state.
The isotopes ^240-254Os coexist as both prolate and oblate in their ground states, while the DD-PC1 and DD-PCX interactions do not clearly support this coexistence and instead favor a prolate ground state. One can see that the spherical minimum of ^260Os supports N = 184 as a neutron magic number. Our results are in qualitatively good agreement with the earlier predictions of Refs. <cit.>. § CONCLUSION In this study, we provide a comprehensive analysis of the Os isotopic chain, ranging from the beta-stable region to the neutron drip-line region, using a covariant density functional and relativistic self-consistent mean-field description. Various ground-state bulk properties, including the binding energy, two-neutron separation energy (S_2n), two-neutron shell gap (δ S_2n), neutron pairing energy (E_pair,n), quadrupole deformation parameter, rms radii, and neutron skin thickness (Δ r), are calculated. The results obtained from the covariant density functional theory (CDFT) and relativistic mean-field (RMF) approach are compared with the self-consistent Hartree-Fock-Bogoliubov (HFB) formalism based on the Skyrme SLy4, SkP, and SkM* interactions <cit.>, as well as the extended HFB theory (CHFB+5DCH) with the Gogny D1S interaction <cit.>. These calculations are further compared with the available experimental data <cit.>, and a good agreement is observed between them. In the isotopic series under investigation, prominent shell closures are observed at N = 82, 126, and 184, while N = 118 is suggested to be a shell or subshell closure. These closures are identified based on the behavior of the two-neutron separation energy, two-neutron shell gap, and neutron pairing energy. The quadrupole deformation parameters reveal shape transitions across the isotopic series. Additionally, significant evidence of shape coexistence is found in ^196Os and ^248-254Os, which is consistent with the HFB results. In order to investigate the phase shape transition in the even-even osmium isotopic series, we employ the Covariant Density Functional Theory (CDFT) with separable pairing. Our calculations utilize different parameter sets, namely the density-dependent DD-ME1 <cit.>, DD-ME2 <cit.>, DD-PC1 <cit.>, DD-PCX <cit.>, and nonlinear NL3* <cit.> interactions. The ground-state configurations are determined by locating the minima on the potential energy curve, which allows us to study phenomena such as shape coexistence and phase shape transitions. It is worth noting that the nuclear shape transitions primarily occur at shell closures, indicating the transitional nature of the osmium isotopes. Notably, the meson-exchange and point-coupling interactions (i.e., DD-ME1, DD-ME2, DD-PC1, DD-PCX) yield consistent behavior in the evolution of the shape within the potential energy curves. The analysis of these results leads us to the conclusion that the potential energy curves are insensitive to the choice of interactions used in the density functional theory. The results obtained from CDFT using various effective interactions provide a reasonable description of the ground-state bulk properties. Furthermore, these findings exhibit strong similarities with the non-relativistic Hartree-Fock-Bogoliubov (HFB) calculations and other theoretical predictions. Consequently, we can confidently state that the outcomes presented in this study are independent of the specific models and force parameters employed, highlighting the robustness of our predictions.
§ ACKNOWLEDGEMENTS One of the authors (UR) would like to thank the Department of Physics, Aligarh Muslim University, for the use of the computational facility.
http://arxiv.org/abs/2405.10075v1
20240516131443
HecVL: Hierarchical Video-Language Pretraining for Zero-shot Surgical Phase Recognition
[ "Kun Yuan", "Vinkle Srivastav", "Nassir Navab", "Nicolas Padoy" ]
cs.CV
[ "cs.CV", "cs.AI" ]
HecVL University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France IHU Strasbourg, Strasbourg, France CAMP, Technische Universität München, Munich, Germany HecVL: Hierarchical Video-Language Pretraining for Zero-shot Surgical Phase Recognition Kun Yuan1,3 Vinkle Srivastav 1,2 Nassir Navab3 Nicolas Padoy1,2 May 20, 2024 ======================================================================================= Natural language could play an important role in developing generalist surgical models by providing a broad source of supervision from raw texts. This flexible form of supervision can enable the model's transferability across datasets and tasks as natural language can be used to reference learned visual concepts or describe new ones. In this work, we present HecVL, a novel hierarchical video-language pretraining approach for building a generalist surgical model. Specifically, we construct a hierarchical video-text paired dataset by pairing the surgical lecture video with three hierarchical levels of texts: at clip-level, atomic actions using transcribed audio texts; at phase-level, conceptual text summaries; and at video-level, overall abstract text of the surgical procedure. Then, we propose a novel fine-to-coarse contrastive learning framework that learns separate embedding spaces for the three video-text hierarchies using a single model. By disentangling embedding spaces of different hierarchical levels, the learned multi-modal representations encode short-term and long-term surgical concepts in the same model. Thanks to the injected textual semantics, we demonstrate that the HecVL approach can enable zero-shot surgical phase recognition without any human annotation. Furthermore, we show that the same HecVL model for surgical phase recognition can be transferred across different surgical procedures and medical centers. § INTRODUCTION [This manuscript has been accepted for publication and will be included in the proceedings of MICCAI 2024.] Developing a single neural network model capable of adapting to different datasets and tasks stands as a key objective for computer vision. Recent breakthroughs in computer vision methods have begun to fulfill this goal by transitioning from task-specific models <cit.> to generalist models <cit.>. These generalist models have shown potential in solving a wide range of downstream tasks and datasets, including various types of object segmentation <cit.> and zero-shot image and video classification <cit.>. An essential feature of these models is their ability to be supervised through natural language texts. The generality of natural language allows it to express a broader set of visual concepts, thereby effectively guiding and supervising these models <cit.>. Yet, within the domain of surgical video analysis, predominant methods still lean toward task-specific models <cit.>. This is mainly due to the inherent complexity present in the surgical videos, i.e., surgical videos can last several hours while capturing intricate hierarchical surgical activities. Therefore, those methods manually define different levels of categories and annotate large amounts of frames to provide extensive supervision. However, the procedure- and center-specific annotations lead to degraded transferability across procedures and medical centers <cit.>. While a surgical foundation model <cit.> is proposed to address the above issue, it focuses only on pure images and ignores the complementary information from other modalities, i.e., language. 
Also, it still requires finetuning on the downstream dataset to enable transferability. Considering that natural language texts have become a unifying element for generalist models, this work explores whether they can be used to both understand the hierarchical intricacies of surgical videos and enable the generalized zero-shot transfer by processing category labels into texts, without the need for manual annotation. As the task of surgical phase recognition is essential for computer-assisted surgery <cit.>, we use it as a suitable test bench to evaluate our joint visual and textual hierarchical representations. This work introduces HecVL, a Hierarchical Encoded Contrastive Video-language pretraining framework, which learns rich multi-modal representations at different hierarchies of surgical video. Developing such an approach presents a significant challenge due to the lack of surgical video datasets with hierarchical textual supervision. SurgVLP <cit.> has introduced the first large-scale video-text paired dataset, i.e., SVL, by transcribing hundreds of surgical lecture videos into narration texts. We extend SVL dataset by incorporating hierarchical-level texts using the metadata of each lecture video. We construct three levels of the hierarchical video-text pairs for each surgical lecture video: clip-level, phase-level, and video-level. The clip-level video-text pairs contain short video clips of few seconds duration along with narration texts transcribed from lecture audio for capturing the short-term activity. The phase-level video-text pairs contain longer video segments with conceptual text summaries for capturing longer surgical video activity. Finally, the video-level video-text pairs are the entire surgical lecture videos paired with abstract paragraphs encapsulating the goal and the main key points of the surgery. These three levels of hierarchical video-text pairs allow for a more detailed understanding of surgical procedures, capturing both the atomic details and broader contexts, as illustrated in Fig. <ref>. Given the hierarchical video-text pair dataset, we propose a fine-to-coarse contrastive learning strategy to effectively exploit the hierarchical textual information encoded in the dataset. We construct three separate embedding spaces for each type of hierarchical video-text pair. We first build up a fine-grained embedding space using clip-level video-text pairs, followed by aggregating the fine-grained features to construct the coarse-grained embedding spaces, which embed phase-level and video-level text pairs. We learn these three different embedding spaces through multi-modal contrastive learning using the InfoNCE loss <cit.>. We show in the experiments that our fine-to-coarse contrastive learning strategy outperforms the approach of projecting all hierarchical texts into a single embedding space and learning only one such space. We demonstrate the zero-shot transferability and the generalization of our approach by performing surgical phase recognition on three different surgical procedures, cholecystectomy <cit.>, hysterectomy <cit.>, and gastric bypass <cit.>, without using any ground truth labels. The learned multi-modal representations demonstrate their transferability in not only identifying surgical concepts across various surgical procedures but also in extending to different medical centers. We hope that the HecVL approach could pave the path for developing more generalist models in the domain of surgical computer vision. 
§ METHOD We propose HecVL, a novel hierarchical video-language pretraining method that learns multi-modal embeddings by capturing clip-, phase-, and video-level video-text pairs from surgical lecture videos. Fig. <ref> gives an overview of our method. Sec. <ref> describes the construction of hierarchical video-text pairs. Sec. <ref> formalizes the fine-to-coarse contrastive learning strategy. Sec. <ref> and <ref> describe the loss function and the training pipeline, respectively. §.§ Hierarchical video-text pairs The HecVL approach is designed to leverage a hierarchically annotated video-text pair dataset, D = { (V_i, N_i, C_i, A_i) }_i=1^|D|, where V_i is a long surgical lecture video composed of a sequence of short-term video clips (each lasting tens of seconds). Each lecture video V_i is paired with three levels of textual annotations from different levels of granularities ranging from fine-grained to coarse-grained, i.e., clip-level narration texts (N_i), phase-level concept texts (C_i), and video-level abstract texts (A_i). The clip-level narration texts (N_i) are sequences of narrations describing the atomic actions for short-term video clips, the phase-level concept texts (C_i) are sequences of conceptual text summaries describing the high-level surgical activities for long-term video phases, and the video-level abstract paragraph texts (A_i) are the abstract paragraph texts summarizing the entire surgical lecture video including patient's history and surgical technique. These three levels of video-text pairs provide complementary textual supervision at multiple hierarchies for representation learning, as illustrated in Fig. <ref>. §.§ Fine-to-coarse contrastive learning Given the hierarchically annotated video-text pair dataset as described above, we aim to optimize a visual encoder ℱ_v and a textual encoder ℱ_t for the multi-modal hierarchical representation learning. This is achieved by constructing different embedding spaces for hierarchical video-text pairs, as described below. §.§.§ Embedding spaces at different hierarchical levels: given the visual encoder ℱ_v and the textual encoder ℱ_t, we first extract clip-level visual embeddings ℱ_v(v_ij) and textual embeddings ℱ_t(n_ij) from the short-term video clips v_ij∈ V_i and their corresponding clip-level narration texts n_ij∈ N_i. These multi-modal embeddings are represented in the fine-grained embedding space S_narration. Then, we construct another embedding space S_concept by exploiting phase-level textual supervision. We define V^c and N^c as the sets of short-term video clips v_ij and narration texts n_ij temporally corresponding to phase-level concept texts c_ij∈ C_i. We extract the textual embeddings ℱ_t(c_ij) using the textual encoder ℱ_t. Subsequently, we define an aggregator function Agg(), which takes clip-level visual and textual embeddings, V^c and N^c, as input and performs average pooling on them. The aggregated visual embeddings Agg(ℱ_v(V^c)) and textual embeddings Agg(ℱ_t(N^c)) are represented in embedding space S_concept. Finally, we construct a video-level embedding space, S_abstract, using video-level abstract texts (A_i). Similar to constructing the phase-level embedding space, we define V^a and N^a as the evenly sampled sets of short-term video clips v_ij and narration texts n_ij. As there is only one abstract text for the entire video, V^a and N^a correspond to the video-level abstract texts A_i. We extract the textual embeddings ℱ_t(A_i) from the abstract text A_i. 
Then, the aggregator function Agg() is employed to aggregate the clip-level visual and textual embeddings. The resulting aggregated visual embeddings Agg(ℱ_v(V^a)) and textual embeddings Agg(ℱ_t(N^a)) are represented in space S_abstract. The construction of all three embedding spaces is illustrated in the Fig. <ref>. Fig. <ref> (a) shows another way to conduct video-language pretraining by projecting all the video clips and the three levels of texts to a single embedding space. We show in experiments that this increases the ambiguity as video clips might be pushed to both narration and concept texts with dissimilar semantics. §.§ Training objectives We propose a joint contrastive loss function to enhance the similarity score between matching visual and textual embeddings compared to non-matching pairs in S_narration, S_concept, and S_abstract. At the clip level, we use the loss function from the SurgVLP <cit.> ℒ_clip (also given in the supplementary) to correlate short-term video clips with the narrations from two different automatic speech recognition (ASR) systems. At the phase- and the video-level, we use the InfoNCE <cit.> loss to correlate the aggregated short-term visual and textual embeddings to phase- and video-level textual embeddings, respectively, as given below: ℒ_phase = - 1/B∑_i=1^Blog( exp(Agg(ℱ_v(V^c))^T·ℱ_t(c_i)/τ)/∑^B_j=1exp(Agg(ℱ_v(V^c))^T·ℱ_t(c_j)/τ) + exp(Agg(ℱ_t(N^c))^T·ℱ_t(c_i)/τ)/∑^B_j=1exp(Agg(ℱ_t(N^c))^T·ℱ_t(c_j)/τ)) ℒ_video = - 1/B∑_i=1^Blog( exp(Agg(ℱ_v(V^A))^T·ℱ_t(A_i)/τ)/∑^B_j=1exp(Agg(ℱ_v(V^A))^T·ℱ_t(A_j)/τ) + exp(Agg(ℱ_t(N^A))^T·ℱ_t(A_i)/τ)/∑^B_j=1exp(Agg(ℱ_t(N^A))^T·ℱ_t(A_j)/τ)) Here, B is the batch size, and τ is a temperature hyper-parameter, which regulates the probability distribution over positive and negative pairs within the embedding space. The numerator term denotes the cosine similarity between matched visual and the textual pairs, i.e., positive pairs, and the denominator term denotes the cosine similarity between the unmatched visual and the textual pairs, i.e., negative pairs. §.§ Training pipeline Given the previously described training objectives, the challenge lies in effectively training ℱ_v and ℱ_t across all three levels of embedding spaces. We aim to train only one set of visual and textual encoders for all three levels of embedding spaces, ensuring the encoders are optimized for capturing both short-term and long-term semantics. We propose an alternating training strategy, i.e., we first optimize L_clip for m batches; subsequently, we optimize L_phase for n batches and L_video for l batches, and then we repeat. We observe that the proposed training strategy not only converges faster but also circumvents the catastrophic forgetting issue <cit.> that could arise when training the model on clip-level embeddings and then fine-tuning it for phase- and video-level embeddings, or vice-versa. § EXPERIMENT SETUP §.§ Dataset Pretraining dataset: we use the surgical lecture videos from the Surgical Video Lecture dataset (SVL) for the pretraining, which was proposed by the SurgVLP <cit.>. We further expand the dataset by including additional phase- and video-level video-text pairs using the metadata of each lecture video. The metadata for each lecture video contains the title of the procedure, the abstract summary, and the key steps. In total, we have 25,578 clip-level, 10,304 phase-level, and 1,076 video-level video-text pairs.
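For concreteness, the phase-level objective ℒ_phase above can be read as follows. This is a minimal PyTorch-style sketch of ours (not the authors' released code), assuming cosine-similarity logits, average pooling as Agg(), and one concept text per phase segment in the batch:

```python
import torch
import torch.nn.functional as F

def phase_infonce(clip_visual, clip_narration, concept_text, tau=0.07):
    """clip_visual, clip_narration: (B, K, D) clip-level embeddings for B phase
    segments of K clips each; concept_text: (B, D) phase-level concept-text embeddings."""
    # Agg(): average pooling over the K clips of each phase segment
    v = F.normalize(clip_visual.mean(dim=1), dim=-1)     # aggregated visual, (B, D)
    n = F.normalize(clip_narration.mean(dim=1), dim=-1)  # aggregated narration, (B, D)
    c = F.normalize(concept_text, dim=-1)                # concept texts, (B, D)

    # Softmax over the concept-text axis; matched pairs sit on the diagonal
    p_vc = torch.softmax(v @ c.t() / tau, dim=1).diagonal()
    p_nc = torch.softmax(n @ c.t() / tau, dim=1).diagonal()
    # Following the equation, the two probabilities are summed inside the log
    return -torch.log(p_vc + p_nc).mean()

# Toy usage with random features: 4 phase segments, 8 clips each, 512-d embeddings
loss = phase_infonce(torch.randn(4, 8, 512), torch.randn(4, 8, 512), torch.randn(4, 512))
```

Note that, as written in the equation, the two softmax probabilities are summed inside the logarithm (a MIL-NCE-style coupling of the visual and narration branches) rather than contributing two separate cross-entropy terms; ℒ_video has the same form with video-level abstracts in place of the concept texts.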
Downstream datasets and evaluation: we perform the evaluation on three public datasets: Cholec80 <cit.>, AutoLaparo <cit.>, and StrasBypass70/BernBypass70 <cit.>. In our evaluation, we perform surgical phase recognition in the zero-shot setting, which directly evaluates the model on downstream datasets without performing any fine-tuning. Here, class labels are transformed into textual prompts, and their embeddings categorize the image visual embeddings, reflecting the joint embedding space's effectiveness (details of the constructed textual prompts are given in the Supplementary). §.§ Implementation details Network architecture: We use the ResNet-50 model <cit.> pretrained on ImageNet as the visual encoder ℱ_v and BioClinicalBert <cit.> as the textual encoder ℱ_t. We sample 4, 8, and 32 frames for each clip-level, phase-level, and video-level video segment, respectively. We encode the frames, followed by average pooling, to generate a feature vector for a video segment. The architectures of the visual and text encoders, ℱ_v and ℱ_t, are the same as in SurgVLP <cit.> for a fair comparison. Training parameters: we pretrain the model with one 80 GB NVIDIA A100 GPU for 200 epochs. We use the AdamW <cit.> optimizer with a learning rate of 5e-5. We alternately train with m=25 batches of clip-level pairs, followed by n=15 and l=115 batches of phase- and video-level pairs. We use a batch size B of 120/60/10 per GPU for clip-/phase-/video-level video-text pairs. § RESULTS AND DISCUSSIONS §.§ Zero-shot phase recognition Results on zero-shot surgical phase recognition demonstrate whether the learned joint visual and textual representations can correlate semantically similar surgical scene images and surgical texts. We compare our method to CLIP <cit.> to show the benefits of surgical-specific pretraining, and to SurgVLP to emphasize the advantages of hierarchical pretraining. Tab. <ref> and <ref> show that our HecVL achieves state-of-the-art performance for all the datasets in the zero-shot setting. The consistent boost across cholecystectomy <cit.>, hysterectomy <cit.>, and gastric bypass <cit.> procedures shows the generalizable and transferable features of HecVL across different surgical types. Also, we show significant improvement compared to the methods pretrained on conventional computer vision datasets, i.e., MIL-NCE <cit.> and CLIP <cit.>, which fail to recognize the surgical concepts. §.§ Multi-center phase recognition Here, we examine the ability of the HecVL approach to transfer the knowledge learned from hierarchical video-text data to different medical centers, as shown in Tab. <ref>. Overall, our HecVL model achieves the best performance across the two medical centers compared to the other methods. Interestingly, the performance on BernBypass70 is lower than on StrasBypass70. This may be because there are significant differences in the workflow followed at the Bern center, with many phases and steps not routinely performed. Therefore, the textual prompts designed based on the Strasbourg center's protocol (see Supplementary for more details) lead to degraded performance when applied to a different center. To address this, a center-specific construction of the textual prompts is required. §.§ Ablation study Tab. <ref> provides the ablation analysis of the contribution of each level of video-text pairs. Specifically, adding the phase-level video-text pairs yields significant improvements in gastric bypass surgical phase recognition compared to SurgVLP.
This trend is pronounced in the zero-shot scenario across all surgical procedures. To support our fine-to-coarse strategy, we also compare to a baseline model, Single (Fig. <ref> (a)), which embeds action, phase, and abstract texts in a single embedding space. Tab. <ref> shows that the Single model has inconsistent performance across datasets. This inconsistency implies that the single-embedding approach might blur essential distinctions across hierarchical levels compared to maintaining separate embedding spaces for the different hierarchical levels. § CONCLUSION The next generation of scalable and generalizable surgical computer vision systems demands multi-modality models capable of adapting to different surgical procedures with little to no manual annotation. In this work, we design HecVL, a single multi-modality model capable of adapting to different surgical procedures and centers without using any manual annotations. The core of our contribution lies in developing a hierarchical contrastive learning strategy to exploit textual supervision at multiple granular levels, ranging from short-term surgical actions to long-term high-level surgical concepts. Extensive experimental results demonstrate its efficacy in achieving zero-shot surgical phase recognition across different procedures and medical centers. § ACKNOWLEDGMENTS This work has received funding from the European Union (ERC, CompSURG, 101088553). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. This work was also partially supported by French state funds managed by the ANR under Grant ANR-10-IAHU-02. This work was granted access to the HPC resources of IDRIS under the allocations AD011013704R1, AD011011631R2, and AD011011631R3 made by GENCI. The authors would also like to acknowledge the High-Performance Computing Center of the University of Strasbourg for providing access to computing resources funded by the Equipex Equip@Meso project (Programme Investissements d’Avenir) and the CPER Alsacalcul/Big Data.
http://arxiv.org/abs/2405.09691v1
20240515204146
Modeling User Preferences via Brain-Computer Interfacing
[ "Luis A. Leiva", "Javier Ttraver", "Alexandra Kawala-Sterniuk", "Tuukka Ruotsalo" ]
cs.HC
[ "cs.HC", "cs.AI" ]
0000-0002-5011-1847 University of Luxembourg Luxembourg 0000-0002-1596-8466 INIT, Universitat Jaume I Spain 0000-0001-7826-1292 Opole University of Technology Poland 0000-0002-2203-4928 University of Copenhagen Denmark LUT University Finland Present Brain-Computer Interfacing (BCI) technology allows inference and detection of cognitive and affective states, but fairly little has been done to study scenarios in which such information can facilitate new applications that rely on modeling human cognition. One state that can be quantified from various physiological signals is attention. Estimates of human attention can be used to reveal preferences and novel dimensions of user experience. Previous approaches have tackled these challenging tasks using a variety of behavioral signals, from dwell-time to clickthrough data, and computational models of visual correspondence to these behavioral signals. However, behavioral signals are only rough estimations of the real underlying attention and affective preferences of the users. Indeed, users may attend to some content simply because it is salient or outrageous, but not because it is really interesting. With this paper, we put forward a research agenda and example work using BCI to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience. Subsequently, we link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences. Modeling User Preferences via Brain-Computer Interfacing Tuukka Ruotsalo May 20, 2024 ======================================================== § INTRODUCTION A central challenge is estimating which parts of some digital content are likely to draw the users' interest, and whether those parts are experienced with high or low intensity, positively or negatively, by the user. However, classification of cognitive and affective states provides only simple estimates of how neural responses map to discrete states of human cognition. Conversely, recent research has recognized that estimates of cognitive and affective states, such as attention, valence, and arousal, can be effectively used for many downstream tasks even when the accuracy of the upstream classification model is modest <cit.>. These approaches build upon developing models of personal cognitive profiles or crowd models that can be effective even when single-trial classification cannot reach robust performance. We demonstrate the use of such models in various downstream applications, including information retrieval, affective similarity estimation, steering generative models, and crowdsourced approaches. Further benefits of attention and affective estimation have been demonstrated in crowd settings, in which reactions of many individuals can be combined for more consistent and reliable estimates of what in the content draws users' attention and how they experience that content.
Here, we put forward a research agenda to study the inference of human attention and preferences from BCI signals captured in response to naturalistic perception of digital information. Furthermore, we present how signals from crowds of users can be used via brainsourcing, i.e., crowdsourced BCI signal acquisition <cit.>. This can allow an accurate estimation of user preferences, attention allocation, and, critically, the affective component of attention, directly measured from the natural and implicit brain potentials evoked in users' responses. We demonstrate how to gather and utilize the resulting data in single-user and crowdsourcing settings to reveal how users react to different stimuli and how their attention and affective responses can be predicted. These responses produce consistent measures of user experience that go beyond what is possible with behavioral data. § RESEARCH CHALLENGES In the following, we summarize current research challenges and example works in BCI instrumentation, preference estimation, predicting affective states, crowdsourcing affective annotation, and the importance of open data. §.§ Instrumentation Both electroencephalography (EEG) and functional Near-Infrared Spectroscopy (fNIRS) stand out as the predominant non-invasive methods for acquiring brain data <cit.>. EEG directly captures the brain's bio-electrical activity by recording electrical fluctuations via electrodes placed on the scalp <cit.>. In contrast, fNIRS relies on optical techniques to detect changes in hemodynamics <cit.>, typically induced by cortical responses during motor, cognitive, and perceptual functions of the brain. Each method has its own set of advantages and disadvantages, often leading to their integration for a more comprehensive examination <cit.>. EEG signals, for example, usually have a low signal-to-noise ratio and low spatial resolution <cit.>. This makes it difficult to identify which brain areas are activated by a particular response <cit.>. In fNIRS, scattering, which occurs about 100 times more frequently than absorption, leads to light attenuation. The longer the path of the photon due to scattering, the greater the likelihood of absorption. However, the light emitted by a source can be captured by multiple detectors, thus eliminating the need for additional components <cit.>. §.§ Estimating Preferences One of the lasting challenges for user modelling has been to recognize and predict individual preferences. Recently, neurophysiological data has been employed for learning user preferences. The advantage of such data is that it can be obtained implicitly, without requiring any intentional interaction. As such, preferences can be inferred from brain data captured in response to the natural stimuli users are experiencing. However, the reliability of such information and its utility in downstream applications have not been studied extensively. Our previous work has proved useful for quantifying the reliability of preferences inferred from brain responses and their utility as feedback to generative models that produce new, unseen information matching the inferred preferences <cit.>. Recently, it has become possible to predict preference information from brain signals measured via EEG and fNIRS. We studied whether individuals' preferences contradict group preferences <cit.>; that is, whether we can detect preferences of individuals toward group preferences even when the individual does not explicitly disclose such a preference.
Experimental evidence shows that brain activity collected from participants in response to viewing images is associated with their self-reported preferences. Our results show that brain responses present a graded response to preferences, and that brain responses alone can be used to train classifiers that reliably estimate preferences. Furthermore, we show that brain responses reveal additional preference information that correlates with group preference, even when participants self-reported having no such preference. Our analysis of brain responses carries significant implications for using brain responses for preference inference, as it suggests an individual's explicitly reported preferences are not always aligned with the preferences inferred from their brain responses. These findings call into question the reliability of explicit and behavioral signals. They also imply that additional, multimodal sources of information may be necessary to infer reliable preference information. Preference information can also be used in downstream applications to adapt content and user interfaces according to the estimated preferences. We have studied such cognitive integration with generative image models to model personal attraction <cit.>. We demonstrate models that use generative adversarial neural networks (GANs) to model subjective preferences unconstrained by predefined model parameterization. GANs are coupled with brain-computer interfaces to capture personalized attractiveness reactions toward images generated from a model. These reactions are then used to control a GAN model, finding a representation that matches the features constituting an attractive image for an individual. We show that our approach yielded highly accurate generative outputs and replicated findings from social neuroscience, suggesting that the individually responsive, generative nature of GANs and BCI provides a powerful new tool in mapping individual differences and visualizing cognitive-affective processing. §.§ Relevance and Affective States Information retrieval (IR) relies on a general notion of relevance, which is used as the principal foundation for ranking and evaluating methods. However, IR does not account for the more nuanced affective experiences of users. We consider the emotional response decoded directly from the human brain as an alternative dimension of relevance <cit.>. We report an experiment covering seven different scenarios in which we measure and predict how users emotionally respond to visual image contents by using fNIRS neuroimaging on two commonly used affective dimensions: valence (negativity and positivity) and arousal (boredom and excitement). Our results show that affective states can be successfully decoded using fNIRS, and utilized to complement the present notion of relevance in IR studies. Our work opens new avenues for incorporating emotional states in IR evaluation, affective feedback, and information filtering. Affective information extends the notion of relevance toward understanding experiences of visual similarity. The present notion of visual similarity is based on features derived from image contents. This ignores the users' emotional or affective experiences toward the content, and how users feel when they search for images. We consider valence, a positive or negative quantification of affective appraisal, as a novel dimension of image similarity.
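To make the decoding setup described above more concrete, the sketch below trains a simple per-trial valence classifier and evaluates it with participant-wise cross-validation, so that data from the same person never appears in both the training and test folds. It uses synthetic features and scikit-learn as stand-ins; the actual feature extraction, montages, and models in the cited studies may differ, and all names and numbers here are illustrative assumptions rather than details from the original experiments.

```python
# Minimal sketch: decoding binary valence labels from per-trial brain-response
# features with participant-wise cross-validation (synthetic data for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_participants, trials_per_participant, n_features = 20, 60, 32

# Synthetic stand-in for per-trial features (e.g. band powers or averaged
# hemodynamic responses); positive-valence trials get a small additive shift.
X, y, groups = [], [], []
for pid in range(n_participants):
    labels = rng.integers(0, 2, size=trials_per_participant)
    feats = rng.normal(size=(trials_per_participant, n_features))
    feats[labels == 1, :4] += 0.4           # weak, noisy class difference
    X.append(feats)
    y.append(labels)
    groups.append(np.full(trials_per_participant, pid))
X, y, groups = np.vstack(X), np.concatenate(y), np.concatenate(groups)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5))
print(f"participant-wise CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```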
We report the largest neuroimaging experiment that quantifies and predicts the valence of visual content by using functional near-infrared spectroscopy from brain-computer interfacing <cit.>. We show that affective similarity can be (1) decoded directly from brain signals in response to visual stimuli, (2) utilized for predicting affective image similarity with an average accuracy of 0.58 and an accuracy of 0.65 for high-arousal stimuli, and (3) effectively used to complement affective similarity estimates of content-based models; for example, when fNIRS and image rankings are fused, the retrieval F-measure@20 is 0.70. This work encourages new research on affective multimedia analysis, retrieval, and user modeling. §.§ Crowdsourcing Affective Annotations Automatic annotation of multimedia contents in terms of their affective content can be useful for a range of interactive tasks. A convenient alternative to content-based analysis is to rely on brain signals from several participants who are exposed to those contents. In this line of research, a brainsourcing experiment was conducted <cit.> relying on the fNIRS signals from 31 participants who just passively watched a set of images. It was shown that the prediction of the valence and arousal of the images improves with the crowd size. For example, the mean accuracy of individual predictions for valence is lower than 60%, and increases up to 65% with a crowd size of just 5 participants. More recently, the concept has also been explored in the context of EEG signals and video stimuli, with similarly positive results <cit.>. In this case, however, it was observed that the crowdsourcing was not effective for all videos. Therefore, an open issue is to either improve the prediction to be more generally effective, or to identify under which conditions the brainsourcing-based prediction is not reliable. The source of the difficulty may be in either the source stimuli (whose corresponding emotions might be more ambiguous), or in the quality of the brain signals, or in the limitations of the predictive machine learning model or the robustness of the agreement procedure. §.§ Importance of Open Data A critical aspect in boosting research on affective BCI is high-quality public datasets, well-documented and with accompanying code, processed data, and optionally pre-trained models, following the FAIR principles[<https://www.go-fair.org/fair-principles>]. In this regard, the NEMO dataset <cit.> has been recently released, featuring fNIRS recordings from thirty-one participants who engaged in an emotional perception task and in an affective imagery task. An equally important ingredient is sticking to good practices in terms of data usage and reporting. An obvious but important example is avoiding data leakage. For example, in the context of brain signals it is relatively common to split the EEG signal into temporal segments which are treated as independent instances. Although this increases the number of training/testing instances, segments temporally close in the signal or coming from the same participant and/or the same eliciting stimulus can be highly correlated <cit.>. Furthermore, in supervised learning, these segments usually inherit the sequence-level label, which might not always be a good choice since different temporal parts of the brain signal may carry different information, especially for long stimuli such as video snippets.
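The brainsourcing idea discussed above can be illustrated with a small simulation: even when each participant's single-trial prediction is only modestly better than chance, majority voting across a growing crowd yields noticeably more reliable stimulus-level estimates. The per-participant accuracy and crowd sizes below are illustrative assumptions, not the values measured in the cited experiments.

```python
# Minimal simulation of "brainsourcing": majority-voting weak per-participant
# affect predictions across crowds of increasing size.
import numpy as np

rng = np.random.default_rng(1)
n_stimuli, max_crowd, p_correct = 500, 15, 0.60   # assumed per-participant accuracy

true_labels = rng.integers(0, 2, size=n_stimuli)
# Each participant independently predicts each stimulus correctly with prob p_correct.
predictions = np.where(
    rng.random((max_crowd, n_stimuli)) < p_correct, true_labels, 1 - true_labels
)

for crowd_size in (1, 3, 5, 9, 15):
    votes = predictions[:crowd_size].mean(axis=0)          # fraction voting "1"
    majority = (votes > 0.5).astype(int)
    ties = votes == 0.5
    majority[ties] = rng.integers(0, 2, size=ties.sum())   # break ties at random
    acc = (majority == true_labels).mean()
    print(f"crowd size {crowd_size:2d}: stimulus-level accuracy {acc:.2f}")
```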
Thus, generally speaking, depending on how data is partitioned into training and test sets, the performance can vary notably <cit.>, which may hinder the advances in the field in various ways. In the deep learning area, with complex models which can easily overfit the training set, it can be hard to find out whether the reported high performances are actually attributable to the merits of the proposed algorithms and models or they implicitly hide data-related artifacts. Two main challenges remain for BCI-related datasets. On the one hand, large-scale datasets are required to properly train the data-hungry deep learning models. This is currently difficult in the context of brain signals, due to the cost of the equipment and the huge effort involved in long recording sessions with multiple participants. In the future, one may envision foundation models in the context of BCI, with BENDR being one example <cit.>. On the other hand, multimodality (e.g. including several types of brain signals, eye tracking data, diverse multimedia stimuli beyond images/sound/video) would facilitate research on a wider and richer set of topics and more powerful models, which can more heavily rely on self-supervised learning, and explore joint and multi-task learning. §.§ Ethical Implications Portable sensor technology has already made possible real-time monitoring of physiological signals. This opens new avenues for individuals and data-based service providers. However, it also introduces risks for privacy and user autonomy <cit.>. As user monitoring technology evolves to identify emotional characteristics, reaching a level of prevalence similar to behavioral tracking on personal computers and smartphones, service providers may gain access to affective information. Consequently, the potential for unethical applications of physiological monitoring and other wearable hardware to expose cognitive and emotional user attributes may arise. Recent studies suggest that growing awareness of data usage is causing users to exhibit increased caution when engaging with technology capable of divulging detailed information about their affective and cognitive states <cit.>. The capacity to model human attention, affect, preferences and behaviors with unprecedented details has already transformed the Internet and service economy, but physiological data can provide an additional layer of rich, personal, and sensitive data that can enhance user experiences and provide improved benefits to users. On the other hand, neurophysiological data are multifaceted and can be utilized for unintended use. This makes it comparable to the privacy risks of those data presently used to model users in their digital environments. However, neurophysiological data, while noisy, has much higher fidelity in terms of which dimensions of users' experiences it allows to be inferred. As we start to live with sensors capable of measuring neurophysiological signals extensively and provide it as input for various applications, as also discussed in this paper, it can lead to systems that are much more accurate than what the sensor accuracies reported today would suggest. Research and regulatory measures must guarantee that data is utilized solely for purposes explicitly agreed upon by users. It is imperative to establish privacy methods and practices empowering users to maintain control over the utilization and sharing of the data gathered from them. 
Although it might seem straightforward at first, neurophysiological data enabling affective inference is considerably more sensitive than the current behavioral data collected by computing services. Therefore, the data must be used ethically and regulated with adequate policies that can prevent lasting threats to the public. To this end, there are already regulative actions circumventing unethical use. For instance, the EU AI act[<https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai>] prevents AI systems for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data in the workplace and educational institutions. § ACKNOWLEDGMENTS Research supported by the Horizon 2020 FET program of the European Union (grant CHIST-ERA-20-BCI-001), the European Innovation Council Pathfinder program (SYMBIOTIK project, grant 101071147), and the National Science Centre, Poland (grant 2021/03/Y/ST7/00008). This publication is part of the project PCI2021-122036-2A, funded by MCIN/AEI/10.13039/5011[0]00011033 and European Union NextGenerationEU/[0]PRTR. This work also received support from the Academy of Finland (grants 322653, 328875, 336085, 350323, and 352915). ACM-Reference-Format
http://arxiv.org/abs/2405.09067v1
20240515033654
Mass spectra of strange double charm pentaquarks with strangeness $S=-1$
[ "Zi-Yan Yang", "Qian Wang", "Wei Chen" ]
hep-ph
[ "hep-ph" ]
qianwang@m.scnu.edu.cn chenwei29@mail.sysu.edu.cn ^1Key Laboratory of Atomic and Subatomic Structure and Quantum Control (MOE), Guangdong Basic Research Center of Excellence for Structure and Fundamental Interactions of Matter, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, China ^2Guangdong-Hong Kong Joint Laboratory of Quantum Matter, Guangdong Provincial Key Laboratory of Nuclear Science, Southern Nuclear Science Computing Center, South China Normal University, Guangzhou 510006, China ^3School of Physics, Sun Yat-sen University, Guangzhou 510275, China ^4Southern Center for Nuclear-Science Theory (SCNT), Institute of Modern Physics, Chinese Academy of Sciences, Huizhou 516000, Guangdong Province, China The observation of the T_cs̅(2900) indicates the potential existence of strange double charm pentaquarks based on the heavy antidiquark symmetry. We systematically study the mass spectra of strange double charm pentaquarks with strangeness S=-1 in both molecular and compact structures for quantum numbers J^P=1/2^-, 3/2^-, 5/2^-. By constructing the interpolating currents, the mass spectra can be extracted from the two-point correlation functions in the framework of the QCD sum rule method. In the molecular picture, we find that the Ξ_c^+D^∗ +, Ξ_c^'+D^∗ +, Ξ_c^∗ +D^∗ +, Ξ_cc^∗ ++K^∗ 0 and Ω_cc^∗ ++ρ^0 may form molecular strange double charm pentaquarks. In both pictures, the masses of the J^P=1/2^-, 3/2^- pentaquarks are located within the 4.2-4.6 GeV and 4.2-4.5 GeV regions, respectively. As all of them are above the thresholds of their strong decay channels, they behave as broad states, making them challenging to detect in experiment. In contrast, the mass of the J^P=5/2^- strange double charm pentaquark is located at 4.3 GeV and below the threshold of its strong decay channel. This makes it a narrow state that is easy to identify in experiment. The best channel to observe it is its semi-leptonic decay to a double charm baryon. As a result, we strongly suggest that experiments search for J^P=5/2^- strange double charm pentaquarks as a first try. 12.39.Mk, 12.38.Lg, 14.40.Ev, 14.40.Rt Mass spectra of strange double charm pentaquarks with strangeness S=-1 Wei Chen^3,4 May 20, 2024 ======================================================================== § INTRODUCTION The study of multiquarks goes back half a century, to the quark model proposed in 1964 <cit.>, and has become a hot topic since the observation of the X(3872) in 2003. Thanks to sufficient statistics in experiments, tens of charmonium-like and bottomonium-like states have been observed by various experimental collaborations <cit.>. These states could be beyond the conventional quark model and be viewed as exotic candidates. They also provide a novel platform to shed light on the hadronization mechanism. Although numerous theoretical efforts have been put forward to understand their nature <cit.>, the hadronization mechanism is still unclear. Because of the observation of these exotic candidates, in a more general sense, all bosons are defined as mesons and all fermions are defined as baryons. The latter is more complicated than the former due to one additional (anti)quark. The first well-established exotic baryon signal was reported by the LHCb collaboration in the J/ψ p invariant mass distribution of the Λ_b^0→ J/ψ K^-p process <cit.>. The two structures were named P_c(4380) and P_c(4450).
Four years later, with an order-of-magnitude larger data sample, the LHCb collaboration further reported their hyperfine structures <cit.>. The P_c(4450) was split into two structures, P_c(4440) and P_c(4457), and a new narrow peak, P_c(4312), emerged. In 2020, the LHCb collaboration observed the strange partner, i.e. the P_cs(4459) state, of the P_c states in the J/ψΛ invariant mass distribution of the Ξ_b^-→ J/ψΛ K^- process <cit.>. Very recently, another very narrow resonance, P_cs(4338), was reported in the J/ψΛ invariant mass distribution of the B^-→ J/ψΛp̅ process, with the preferred J^P=1/2^- at 90% confidence level <cit.>. Due to their observation channels, the quark contents of the P_c and P_cs are uudcc̅ and udscc̅ respectively, indicating that they are hidden charm pentaquarks. The recent discovery of the double charm tetraquark T_cc^+(3875) <cit.> raises the question of whether double charm pentaquarks exist. Some theoretical attempts have been made to study the mass spectrum in the hadronic molecular picture <cit.> and the compact pentaquark picture <cit.>. For the double charm pentaquarks, the observed Ξ_cc <cit.> provides an important input from the experimental side. In Ref. <cit.>, the authors solve a Bethe-Salpeter equation with an interaction respecting heavy quark spin symmetry and predict a D^(*)Ξ_c^('*) bound state. Ref. <cit.> works with a color-magnetic interaction and predicts several compact QQqqq̅ states, which could be searched for in the Ω_ccπ, Ξ_ccK and Ξ_cD channels. Ref. <cit.> performs a study within the double heavy triquark-diquark framework respecting SU(3) flavor symmetry. The study finds several double charm pentaquarks that are stable against their strong decay channels, for instance a J^P=1/2^- ccs̅ud double charm pentaquark. Ref. <cit.> considers the potential double charm pentaquarks P_cc with quark content ccudd̅ in the QCD sum rule method. In this work, we further consider the double charm pentaquarks P_cc with quark content ccusd̅, based on the heavy antiquark-diquark symmetry (HADS) proposed in Ref. <cit.>. The HADS states that a color-antitriplet double heavy quark pair behaves like a heavy antiquark in color space. In this case, the observed T_cs̅0^a(2900)^0 <cit.> with quark content cdu̅s̅ indicates the potential existence of the strange double charm pentaquark ccusd̅. From the HADS point of view, the mass of a double heavy pentaquark satisfies the relation m(QQqqq̅)-m(QQq̅q̅)=m(qqq̅Q̅)-m(Q̅q̅q̅), obtained by replacing Q̅→ QQ, as two heavy quarks in a color antitriplet behave like the steady color source of a heavy antiquark. The essence of Eq. (<ref>) can be traced back to Refs. <cit.>. From this point of view, the recently observed T_cc^+(3875), as an isospin singlet state, is related to the Λ̅_c by the HADS. The T_cs̅0^0(2900) indicates the existence of a strange double charm pentaquark. Based on the above arguments, we shall systematically study double charm pentaquarks with quark content ccusd̅. To obtain a solid conclusion, we start from both the hadronic molecular currents, i.e. Ξ_c^+D^+, Ξ_cc^++K^0, Ω_cc^+π^+, and the compact pentaquark currents in the QCD sum rule approach <cit.>. The paper is organized as follows. In Sec. <ref>, we construct the local pentaquark interpolating currents for double heavy pentaquark states. Using these currents, we perform the parity-projected QCD sum rules analysis in Sec. <ref>. The numerical analysis follows in Sec. <ref>. The results and discussions are presented in Sec. <ref>.
§ INTERPOLATING CURRENT FOR DOUBLE HEAVY PENTAQUARK In this section we systematically construct the local pentaquark interpolating currents with spin-parity J^P=1/2^-,3/2^-,5/2^-, since these quantum numbers can be achieved with the S-wave ground heavy (or double heavy) baryon and ground charm (or light) meson with quark content QQusd̅. Five flavor configurations [d̅_ds_d][ϵ^abcQ_aQ_bu_c] and [d̅_dQ_d][ϵ^abcQ_au_bs_c], [d̅_du_d][ϵ^abcQ_aQ_bs_c], ϵ^aijϵ^bklϵ^abc[Q_iu_j][Q_ks_l]d̅_c and ϵ^aijϵ^bklϵ^abc[Q_iQ_j][u_ks_l]d̅_c are considered. Here a⋯ d, i⋯ l are color indices, u,d,s represents the up, down and strange quark, Q represents the heavy quark, i.e. charm or bottom quark. The former three flavor configurations have the same color configuration 1_c⊗1_c, which can be related by the famous Fierz transformation: δ^deϵ^abc=δ^daϵ^ebc+δ^dbϵ^aec+δ^dcϵ^abe. The latter two flavor configurations have the same color configuration 3̅_c⊗3̅_c⊗3̅_c. In order to find the correspondence for open charm tetraquark usd̅c̅ and double charm pentaquark ccusd̅ due to HADS, we analyse their spin structure. Insuring the charm diquark has the same color structure, i.e. antisymmetric in color space, as anti-charm quark, the spin structure of diquark should be symmetric due to Pauli principle, which requires the spin of charm diquark should be S_[cc]=1. As the strange charm tetraquark T_cs̅0^0(2900) with quark content [uc̅][sd̅] is a spin singlet state <cit.>, the spin structure of diquark and antidiquark should be [uc̅]_0[sd̅]_0 or [uc̅]_1[sd̅]_1 to form a spin-0 tetraquark state. Thus for open charm tetraquark with J^P=0^+, the spin structure of corresponding HADS pentaquark partner should be 1_[cc]⊗1/2_[u]⊗0_[sd̅]=1/2_[ccusd̅]⊕3/2_[ccusd̅] doublet with spin-0 [sd̅] component or 1_[cc]⊗1/2_[u]⊗1_[sd̅]=1/2_[ccusd̅]⊕3/2_[ccusd̅]⊕5/2_[ccusd̅] triplet with spin-1 [sd̅] component, which indicates that the HADS partner for open charm tetraquark with J^P=0^+ should have spin 1/2,3/2 or 5/2. Analogous discussion can also be made for the molecular structure [sc̅][ud̅] and the compact structure [us][c̅d̅]. With the above considerations, in this work, we discuss various types of pentaquarks with spin 1/2,3/2 and 5/2. §.§ Currents for the heavy baryon-heavy meson molecular pentaquark The currents with color configuration ϵ_abc[u_as_bQ_c][d̅_dQ_d] could well couple to Ξ_c^+D^+(*) molecular states. In previous QCD sum rules analysis <cit.>, one finds that the currents 1/√(2)ϵ_abc[(u_a^TCγ_5s_b-s_a^TCγ_5u_b)c_c], 1/√(2)ϵ_abc[(u_a^TCγ_μγ_5s_b-s_a^TCγ_μγ_5u_b)γ_μ c_c], √(2/3)ϵ_abc[(s^T_aCγ_μ u_b)γ_5c_c+(u^T_aCγ_μ c_b)γ_5s_c+(c^T_aCγ_μ s_b)γ_5u_c], can well couple to Ξ_c^+, Ξ_c^'+ and Ξ_c^∗ + states, respectively. Here Ξ_c^+, Ξ_c^' +, Ξ_c^∗ + belong to the SU(4) flavor spin-1/2 20-plet, spin-1/2 20-plet, spin-3/2 20-plet, respectively. 
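As a quick cross-check of the spin-coupling bookkeeping used above, and before turning to the explicit currents constructed next, the short sketch below enumerates the allowed total spins obtained by successively coupling the [cc] diquark, the light quark, and the light quark-antiquark component. It reproduces the 1/2 ⊕ 3/2 and 1/2 ⊕ 3/2 ⊕ 5/2 patterns quoted in the text; this is plain angular-momentum arithmetic with no dynamical input.

```python
# Enumerate allowed total spins from successive angular-momentum coupling,
# e.g. 1_[cc] x 1/2_[u] x 0_[sdbar] and 1_[cc] x 1/2_[u] x 1_[sdbar].
from fractions import Fraction as F

def couple(j1, j2):
    """Allowed total spins |j1 - j2|, ..., j1 + j2 in integer steps."""
    lo, hi = abs(j1 - j2), j1 + j2
    return [lo + k for k in range(int(hi - lo) + 1)]

def couple_many(spins):
    totals = [F(0)]
    for j in spins:
        totals = sorted({t for j0 in totals for t in couple(j0, F(j))})
    return totals

print([str(j) for j in couple_many([1, F(1, 2), 0])])  # ['1/2', '3/2']
print([str(j) for j in couple_many([1, F(1, 2), 1])])  # ['1/2', '3/2', '5/2']
```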
Thus we can construct the following currents to perform QCD sum rule analysis: η_1 = 1/√(2)ϵ_abc[(u_a^TCγ_5s_b-s_a^TCγ_5u_b)Q_c][d̅_dγ_5Q_d], η_2 = 1/√(2)ϵ_abc[(u_a^TCγ_μγ_5s_b-s_a^TCγ_μγ_5u_b)γ_μ Q_c][d̅_dγ_5Q_d], η_3 = 1/√(2)ϵ_abc[(u_a^TCγ_5s_b-s_a^TCγ_5u_b)γ_μ Q_c][d̅_dγ_μ Q_d], η_4μ = 1/√(2)ϵ_abc[(u_a^TCγ_νγ_5s_b-s_a^TCγ_νγ_5u_b)γ_ν Q_c][d̅_dγ_μ Q_d], η_5μ = √(2/3)ϵ_abc[(s^T_aCγ_μ u_b)γ_5Q_c+(u^T_aCγ_μ Q_b)γ_5s_c+(Q^T_aCγ_μ s_b)γ_5u_c][d̅_dγ_5Q_d], η_6 = √(2/3)ϵ_abc[(s^T_aCγ_μ u_b)γ_5Q_c+(u^T_aCγ_μ Q_b)γ_5s_c+(Q^T_aCγ_μ s_b)γ_5u_c][d̅_dγ_μ Q_d], η_7,μν = √(2/3)ϵ_abc[(s^T_aCγ_μ u_b)γ_5Q_c+(u^T_aCγ_μ Q_b)γ_5 s_c+(Q^T_aCγ_μ s_b)γ_5 u_c][d̅_dγ_ν Q_d]+(μ↔ν), where u,d,s denote up, down and strange quarks, respectively. Here Q denotes heavy quarks c or b. T denotes the transpose of quark field. C denotes the charge conjugation operator. The indexes a,b,c,d are the color indices of quark fields. One notices that not all of the above currents are related to the strange charm tetraquark by HADS. Further discussions can be found in Sec. <ref>. It should be noted that, the interpolating currents for baryon states could couple to both positive and negative parity states, thus currents in Eq. (<ref>) could couple to J^P=1/2^±,3/2^± or 5/2^± pentaquarks. For instance, the current η_1 would couple to both Ξ_c^+D^+ states with J^P=1/2^-, and Ξ_c^+D^+ states with J^P=1/2^+ in P-wave. We will further discuss such an issue in the next section. §.§ Currents for the double heavy baryon-light meson molecular pentaquark We introduce currents with color configuration ϵ_abc[Q_aQ_bu_c][d̅_ds_d] and ϵ_abc[Q_aQ_bs_c][d̅_du_d] coupling to Ξ_cc^++K^0(*) and Ω_cc^++π^0(ρ^0) molecular states, respectively. In previous QCD sum rules analysis <cit.>, one suggests that the currents ϵ_abc(c^T_aCγ_μ c_b)γ_μγ_5u_c, 1/√(3)ϵ_abc[2(u^T_aCγ_μ c_b)γ_5c_c+(c^T_aCγ_μ c_b)γ_5u_c], could well couple to Ξ_cc^++ and Ξ_cc^*++ states with J^P=1/2^+,3/2^+, respectively. Thus we can construct the following pentaquark currents to perform QCD sum rules analysis: ξ_1 = [ϵ_abc(Q^T_aCγ_μ Q_b)γ_μγ_5u_c][d̅_dγ_5s_d], ξ_2μ = [ϵ_abc(Q^T_aCγ_ν Q_b)γ_νγ_5u_c][d̅_dγ_μ s_d], ξ_3μ = 1/√(3)ϵ_abc[2(u^T_aCγ_μ Q_b)γ_5Q_c+(Q^T_aCγ_μ Q_b)γ_5u_c][d̅_dγ_5s_d], ξ_4 = 1/√(3)ϵ_abc[2(u^T_aCγ_μ Q_b)γ_5Q_c+(Q^T_aCγ_μ Q_b)γ_5u_c][d̅_dγ_μ s_d], ξ_5,μν = 1/√(3)ϵ_abc[2(u^T_aCγ_μ Q_b)γ_5Q_c+(Q^T_aCγ_μ Q_b)γ_5u_c][d̅_dγ_ν s_d]+(μ↔ν), where ξ_i should be the HADS partner of open heavy tetraquark [uc̅]_0(1)[sd̅]_0(1) due to its spin-1 [cc] diquark component and spin-0(1) [sd̅] component. The interpolating currents for configuration ϵ_abc[Q_aQ_bs_c][d̅_du_d] are the same as ϵ_abc[Q_aQ_bu_c][d̅_ds_d] with substitution u↔ s and we denote them as ψ_i: ψ_i=ξ_i (u↔ s), where ψ_i should be the HADS partner of open heavy tetraquark [sc̅]_0(1)[ud̅]_0(1) due to its spin-1 [cc] diquark component and spin-0(1) [ud̅] component. §.§ Currents for compact pentaquark The double heavy pentaquarks with compact structure can be treated by an intuitive picture, in which the heavier component forms a nucleus and the lighter one is in an orbit around this nucleus. Such a picture for compact pentaquark has color configuration [[cc]_3̅d̅_3̅]_3[us]_3̅, where the heavy diquark forms a color triplet spin-1 state, as suggested previously. Since the [cc] diquark with spin-0 violates the Fermi-Dirac statistic, we only construct the pentaquark with the [cc] diquark with spin-1. To compare with this compact picture, we also consider a more complicated picture, i.e. 
one heavy quark and one light quark form one color antitriplet diquark and the two heavy-light diquarks combine the other light antiquark to form a color singlet pentaquark. Such a picture has color configuration [[cu]_3̅[cs]_3̅]_3d̅_3̅. In this work, we suggest currents with color configuration ϵ^aijϵ^bklϵ^abc[Q_iu_j][Q_ks_l]d̅_c and ϵ^aijϵ^bklϵ^abc[Q_iQ_j][u_ks_l]d̅_c coupling to the two compact pentaquark states above, and we use the following interpolating currents to perform our QCD sum rule analysis: J_1,2 = ϵ_aijϵ_bklϵ_abc(Q_i^TCγ_μ Q_j)(u_k^TCγ_μ s_l)γ_5Cd̅_c^T, J_1,3 = ϵ_aijϵ_bklϵ_abc(Q_i^TCγ_μ Q_j)(u_k^TCγ_5s_l)γ_μ Cd̅_c^T, J_1,5μ = ϵ_aijϵ_bklϵ_abc(Q_i^TCγ_μ Q_j)(u_k^TCγ_5 s_l)γ_5 Cd̅_c^T, J_1,8μ = ϵ_aijϵ_bklϵ_abc(Q_i^TCγ_ν Q_j)(u_k^TCγ_ν s_l)γ_μ Cd̅_c^T, J_1,9μν = ϵ_aijϵ_bklϵ_abc(Q_i^TCγ_μ Q_j)(u_k^TCγ_ν s_l)γ_5 Cd̅_c^T+(μ↔ν), and J_2,1 = ϵ_aijϵ_bklϵ_abc(Q_i^TCγ_5u_j)(Q_k^TCγ_5s_l)γ_5Cd̅_c^T, J_2,2 = ϵ_aijϵ_bklϵ_abc(Q_i^TCγ_μ u_j)(Q_k^TCγ_μ s_l)γ_5Cd̅_c^T, J_2,3 = ϵ_aijϵ_bklϵ_abc(Q_i^TCγ_μ u_j)(Q_k^TCγ_5s_l)γ_μ Cd̅_c^T, J_2,4 = ϵ_aijϵ_bklϵ_abc(Q_i^TCγ_5u_j)(Q_k^TCγ_μ s_l)γ_μ Cd̅_c^T, J_2,5μ = ϵ_aijϵ_bklϵ_abc(Q_i^TCγ_μ u_j)(Q_k^TCγ_5 s_l)γ_5 Cd̅_c^T, J_2,6μ = ϵ_aijϵ_bklϵ_abc(Q_i^TCγ_5u_j)(Q_k^TCγ_μ s_l)γ_5 Cd̅_c^T, J_2,7μ = ϵ_aijϵ_bklϵ_abc(Q_i^TCγ_5u_j)(Q_k^TCγ_5 s_l)γ_μ Cd̅_c^T, J_2,8μν = ϵ_aijϵ_bklϵ_abc(Q_i^TCγ_μ u_j)(Q_k^TCγ_ν s_l)γ_5 Cd̅_c^T+(μ↔ν), where currents J_1,i are heavy diquark coupled and J_2,i are heavy diquark decoupled. For convenience, we call these two types of currrents Type-I and Type-II. The currents J_1,2,J_1,3,J_1,5μ,J_1,8μ,J_1,9μν should couple to the HADS partners of open heavy tetraquark [usd̅c̅] due to their spin-1 [cc] diquark components. § QCD SUM RULES In this section, we shall investigate the currents using the method of QCD sum rules. Symbols J, J_μ and J_μν are assigned to denote the currents with spin J=1/2,3/2,5/2. The two-point correlation functions obtained by the currents can be written as <cit.> Π(q^2) = i∫ d^4x e^iq· x⟨ 0|T[J(x)J̅(0)]|0⟩ = (q+M_X)Π^1/2(q^2), Π_μν(q^2) = i∫ d^4x e^iq· x⟨ 0|T[J_μ(x)J̅_ν(0)]|0⟩ = (q_μ q_ν/q^2-g_μν)(q+M_X)Π^3/2(q^2)+⋯, Π_μναβ(q^2) = i∫ d^4x e^iq· x⟨ 0|T[J_μν(x)J̅_αβ(0)]|0⟩ = (g_μαg_νβ+g_μβg_να)(q+M_X)Π^5/2(q^2)+⋯. where ⋯ contains the other coupling states, M_X denotes the mass of physical state X. In this work, we will only use the structures 1, g_μν and g_μαg_νβ+g_μβg_να for the correlation functions Π(p^2), Π_μν(p^2) and Π_μναβ(p^2) respectively to study the J^P=1/2^-,3/2^- and 5/2^- double charm pentaquark states. We assume that the current couples to the physical state X through ⟨ 0|J|X_1/2⟩ =f_Xu(p), ⟨ 0|J_μ|X_3/2⟩ =f_Xu_μ(p), ⟨ 0|J_μν|X_5/2⟩ =f_Xu_μν(p), where f_X denotes the coupling constant, u(p) denotes the Dirac spinor and u_μ(p),u_μν(p) denotes the Rarita-Schwinger vector and tensor, respectively. For the convenience of discussing the parity of hadron currents, we assumed that the hadron state X has the same parity as its current J, and used the non-γ_5 coupling relation in (<ref>). Meanwhile, the γ_5 coupling relation also exists ⟨ 0|J|X'_1/2⟩ =f_Xγ_5u(p), ⟨ 0|J_μ|X'_3/2⟩ =f_Xγ_5u_μ(p), ⟨ 0|J_μν|X'_5/2⟩ =f_Xγ_5u_μν(p), where X' has the opposite parity of X. Eqs.(<ref>),(<ref>) indicate the fact that two states with opposite parity could couple to the same current . These relations also suggest that the current j≡γ_5 J with opposite parity can couple to the state X. In the following discussion we will denote the currents with positive parity as J and the currents with negative parity as j. 
The parity issue will be further discussed at the end of this section. At the hadron level, two-point correlation function can be written as Π(q^2)=1/π∫^∞_s_<ImΠ(s)/s-q^2-iϵds, where we have used the form of the dispersion relation, and s_< denotes the physical threshold. The imaginary part of the correlation function is defined as the spectral function, which is usually evaluated at the hadron level by inserting intermediate hadron states ∑_n|n⟩⟨ n| ρ(s)≡1/πImΠ(s) =∑_nδ(s-M^2_n)⟨ 0|J|n⟩⟨ n|J̅|0⟩ =f^-2_X(p+m_X^-)δ(s-m^-2_X)+f^+2_X(p-m_X^+)δ(s-m^+2_X)+continuum. The spectral density ρ(s) can also be evaluated at the quark-gluon level via the operator product expansion(OPE). After performing the Borel transform at both the hadron and quark-gluon levels, the two-point correlation function can be expressed as Π(M^2_B)≡ℬ_M^2_BΠ(p^2)=∫^∞_s_<e^-s/M^2_Bρ(s)ds. Finally, we assume that the contribution from the continuum states can be approximated well by the OPE spectral density above a threshold value s_0 (duality), and arrive at the sum rule relation which can be used to perform numerical analysis: M^2_X(s_0,M_B)=∫^s_0_s_<e^-s/M^2_Bρ(s)sds/∫^s_0_s_<e^-s/M^2_Bρ(s)ds. To further discuss the parity of the hadron states, we assume that the correlation function of J is given by Π_+(p^2)=pΠ_1(p^2)+Π_2(p^2), each scalar functions Π_1,2(p^2) in the equation above can construct a sum rule with (<ref>) separately, meanwhile, the correlation function of j can be written as Π_-(p^2)=pΠ_1(p^2)-Π_2(p^2). The difference between the correlation function of J and j appears only in the sign in front of Π_2(p^2) due to the γ_5 coupling. Thus, the same functions Π_1 and Π_2 appear in Π_+ and Π_- bring us no independent sum rule from j. Here we use the method of parity projected sum rule to obtain two independent sum rules with different parity<cit.><cit.>. In the zero-width resonance approximation, the imaginary part of the correlation function in the rest frame p=0 is considered as ImΠ(p_0)/π =∑_n[(λ^+_n)^2γ_0+1/2δ(p_0-m^+_n)+(λ^-_n)^2γ_0-1/2δ(p_0-m^-_n)] ≡γ_0p_0ρ_A(p_0)+ρ_B(p_0), where λ^± are coupling constants and m^± denote the mass of positive or negative parity state. ρ_A(p_0),ρ_B(p_0) are defined by p_0ρ_A(p_0) ≡ 1/2∑_n[(λ^+_n)^2δ(p_0-m^+_n)+(λ^-_n)^2δ(p_0-m^-_n)], ρ_B(p_0) ≡ 1/2∑_n[(λ^+_n)^2δ(p_0-m^+_n)-(λ^-_n)^2δ(p_0-m^-_n)]. The combination p_0ρ_A(p_0)+ρ_B(p_0) and p_0ρ_A(p_0)-ρ_B(p_0) contain contributions only from the positive- or negative-parity states obviously, thus we can establish the corresponding parity projected sum rules: ℒ_k(s_0^+,M_B^2,+) ≡ 1/2∫^s_0^+_s_<e^-s/M^2_B[√(s)ρ_A^OPE(s)+ρ_B^OPE(s)]s^kds=λ_+^2m_+^2k+1exp[-m_+^2/M^2_B], ℒ_k(s_0^-,M_B^2,-) ≡ 1/2∫^s_0^-_s_<e^-s/M^2_B[√(s)ρ_A^OPE(s)-ρ_B^OPE(s)]s^kds=λ_-^2m_-^2k+1exp[-m_-^2/M^2_B]. where s_0^± denote the threshold of positive or negative parity state. We can extract the mass for positive and negative parity states by m_±(s_0^±,M_B)=√(ℒ_1(s_0^±,M_B^2,±)/ℒ_0(s_0^±,M_B^2,±)). We shall discuss the detail to obtain suitable parameter working regions in QCD sum rule analysis in next section. Using the operator product expansion (OPE) method, the two-point function can also be evaluated at the quark-gluonic level as a function of various QCD parameters. 
To evaluate the Wilson coefficients, we adopt the quark propagator in momentum space and the propagator i S_Q^a b(p) = i δ^a b/p-m_Q +i/4 g_sλ_a b^n/2 G_μν^nσ^μν(p+m_Q)+(p+m_Q) σ^μν/(p^2-m_Q^2)^2 +i δ^a b/12⟨ g_s^2 G G⟩ m_Qp^2+m_Qp/(p^2-m_Q^2)^4, i S_q^ab(x) = iδ^ab/2π^2x^4x-1/12⟨q̅q⟩+i/32π^2λ^n_ab/2g_sG^n_μν1/x^2(σ^μνx+xσ^μν) +δ^abx^2/192⟨q̅g_sσ· Gq⟩-m_qδ^ab/4π^2x^2+iδ^abm_q⟨q̅q⟩/48x-im_q⟨q̅g_sσ· Gq⟩δ^abx^2x/1152, where Q represents the heavy quark c or b, q represents the light quark u,d,s, the superscripts a, b denote the color indices. In this work, we will evaluate Wilson coefficients of the correlation function up to dimension nine condensates at the leading order in α_s. § NUMERICAL ANALYSIS In this section we perform the QCD sum rule analysis for double heavy molecular pentaquark systems using the interpolating currents in Eqs. (<ref>),(<ref>)-(<ref>). We use the standard values of various QCD condensates as ⟨q̅q⟩(1GeV)=-(0.24±0.03)^3 GeV^3, ⟨q̅g_sσ· Gq⟩(1GeV)=-M_0^2⟨q̅q⟩, M_0^2=(0.8±0.2) GeV^2, ⟨s̅s⟩/⟨q̅q⟩=0.8±0.1, ⟨ g_s^2GG⟩(1GeV)=(0.48±0.14) GeV^4 at the energy scale μ=1GeV <cit.> and m_s(2 GeV)=95^+9_-3 MeV, m_c(m_c)=1.27^+0.03_-0.04 GeV, m_b(m_b)=4.18_-0.03^+0.04 GeV from the Particle Data Group<cit.>. We also take into account the energy-scale dependence of the above parameters from the renormalization group equation m_s(μ)=m_s(2GeV)[α_s(μ)/α_s(2GeV)]^12/33-2n_f, m_c(μ)=m_c(m_c)[α_s(μ)/α_s(m_c)]^12/33-2n_f, m_b(m_b)=m_b(m_b)[α_s(μ)/α_s(m_b)]^12/33-2n_f, ⟨q̅q⟩(μ)=⟨q̅q⟩(1GeV)[α_s(1GeV)/α_s(μ)]^12/33-2n_f, ⟨s̅s⟩(μ)=⟨s̅s⟩(1GeV)[α_s(1GeV)/α_s(μ)]^12/33-2n_f, ⟨q̅g_sσ· Gq⟩(μ)=⟨q̅g_sσ· Gq⟩(1GeV)[α_s(1GeV)/α_s(μ)]^2/33-2n_f, ⟨s̅g_sσ· Gs⟩(μ)=⟨s̅g_sσ· Gs⟩(1GeV)[α_s(1GeV)/α_s(μ)]^2/33-2n_f, α_s(μ)=1/b_0t[1-b_1/b_0logt/t+b_1^2(log^2t-logt-1)+b_0b_2/b_0^4t^2], where t=logμ^2/Λ^2, b_0=33-2n_f/12π, b_1=153-19n_f/24π^2, b_2=2857-5033/9n_f+325/27n_f^2/128π^3, Λ=210 MeV, 292 MeV and 332 MeV for the flavors n_f=5, 4 and 3, respectively. In this work, we evolve all the input parameters to the energy scale μ=2m_c for our sum rule analysis. To establish a stable mass sum rule, one should find the appropriate parameter working regions at first, i.e, for the continuum threshold s_0 and the Borel mass M_B^2. The threshold s_0 can be determined via the minimized variation of the hadronic mass m_X with respect to the Borel mass M_B^2. The lower bound on the Borel mass M_B^2 can be fixed by requiring a reasonable OPE convergence, while its upper bound is determined through a sufficient pole contribution. The pole contribution (PC) is defined as PC(s_0^±,M_B^2,±)=ℒ_0(s_0^±,M_B^2,±)/ℒ_0(∞,M_B^2,±), where ℒ_0 has been defined in Eq. (<ref>). As an example, we use the current J_1,2(x) with J^P=1/2^- in c sector to show the details of the numerical analysis. As we mainly focus on the negative parity states in this work, we will omit the parity superscript without ambiguity in the following discussion. For this current, the dominant non-perturbative contribution to the correlation function comes from the quark condensate, which is proportional to the charm quark mass m_c. In Fig. <ref>, we show the contributions of the perturbative term and various condensate terms to the correlation function with respect to M_B^2 when s_0 tends to infinity. It is clear that the Borel mass M_B^2 should be large enough to ensure the convergence of the OPE series. 
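Before turning to the Borel-window criteria discussed next, the running-coupling and mass-evolution formulas quoted above are easy to evaluate numerically. The sketch below implements what we read the flattened α_s(μ) expression to be, namely the standard three-loop running coupling, together with the leading-order quark-mass evolution exponent 12/(33-2n_f), using the Λ and m_c values given in the text. The resulting numbers are indicative only and depend on the truncation and scale choices.

```python
# Sketch: three-loop running coupling alpha_s(mu) and leading-order evolution
# of m_c to mu = 2 m_c, using the Lambda and mass values quoted in the text.
import math

def alpha_s(mu, n_f, lam):
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    b1 = (153 - 19 * n_f) / (24 * math.pi ** 2)
    b2 = (2857 - 5033 * n_f / 9 + 325 * n_f ** 2 / 27) / (128 * math.pi ** 3)
    t = math.log(mu ** 2 / lam ** 2)
    L = math.log(t)
    return (1.0 / (b0 * t)) * (
        1 - b1 * L / (b0 ** 2 * t)
        + (b1 ** 2 * (L ** 2 - L - 1) + b0 * b2) / (b0 ** 4 * t ** 2)
    )

n_f, lam = 4, 0.292          # GeV, four active flavours
m_c = 1.27                   # GeV, m_c(m_c)
mu = 2 * m_c
m_c_run = m_c * (alpha_s(mu, n_f, lam) / alpha_s(m_c, n_f, lam)) ** (12 / (33 - 2 * n_f))
print(f"alpha_s({mu:.2f} GeV) ~ {alpha_s(mu, n_f, lam):.3f}")
print(f"m_c({mu:.2f} GeV) ~ {m_c_run:.3f} GeV")
```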
In this work, we require that the dimension-9 term makes at least 1% contribution, which is CVG(M_B^2,±)=ℒ_0^d=9(∞,M_B^2,±)/ℒ_0(∞,M_B^2,±)≤1%, providing the lower bound of the Borel mass M_B^2≥2.15 GeV^2. After studying the pole contribution defined in Eq. (<ref>), one finds that the PC is small for pentaquark system due to the high dimension of the interpolating current. To find an upper bound of the Borel mass, we require that the pole contribution to be larger than 10%. As the right panel of Fig. <ref> shown, the upper bound of the Borel mass is determined as 4.47 GeV^2. With the guarantee of the validity of OPE, we choose a Borel window with a width of 1 GeV^2. As a result, the reasonable Borel window for the current J_1,2(x) is obtained as 3.47 GeV^2≤ M_B^2≤4.47 GeV^2. As mentioned above, the variation of the extracted hadron mass m with respect to M_B^2 should be minimized to obtain the optimal value of the continuum threshold s_0. We define the following hadron mass m̅_X and quantity χ^2(s_0) to study the stability of mass sum rules m̅(s_0,±) = ∑^N_i=1m(s_0,M_B,i^2,±)/N, χ^2(s_0,±) = ∑^N_i=1[m(s_0,M_B,i^2,±)/m̅(s_0,±)-1]^2, where the M_B,i^2(i=1,2,…,N) represent N definite values for the Borel parameter M_B^2 in the Borel window. According to the above definition, the optimal choice for the continuum threshold s_0 in the QCD sum rule analysis can be obtained by minimizing the quantity χ^2(s_0), which is only the function of s_0. In this example, there is a minimum point around s_0≈22.3 GeV^2 in χ^2 function and we show the variation of m with s_0 in the left panel of Fig. <ref>, from which we can find that the optimized value of the continuum threshold can be chosen as s_0≈ 22.3 GeV^2 indeed. In the right panel of Fig. <ref>, the mass sum rules are established to be very stable in the above parameter regions of s_0 and M_B^2. The hadron mass for this molecular pentaquark with J^P=1/2^- can be obtained as m_J_1,2=4.38_-0.05^+0.04 GeV , where the errors come from the uncertainties of the threshold s_0, Borel mass M_B^2, quark masses and various QCD condensates. Performing the same numerical analysis to all interpolating currents in Eqs. (<ref>),(<ref>)-(<ref>), we collect their numerical results with stable sum rule analysis in Table <ref>-<ref>. Furthermore, we consider the dependence of the mass of the double heavy pentaquark on the mass of the heavy quark by varying the heavy quark mass to perform the sum rules analysis. In heavy quark spin symmetry, the mass of heavy hadron can be written as follows <cit.>, m_P_QQ=2m_Q+Λ̅+Δ m^2/4 m_Q+O(1/m_Q^2), where Λ̅ denotes the contribution independent with heavy quark mass and spin. Δ m^2 denotes the contribution from heavy quark spin symmetry breaking of the order 1/m_Q. We choose 10 testing points with masses equidistant from m_c to m_b, and fit our results using m_P_QQ=2 m_Q+b+c/m_Q. For example, we show the dependence of the pentaquark mass on the heavy quark mass from current J_1,2 in Fig. <ref>. We collect all the fitting parameters in Eq. (<ref>) and the mass of the bottom partner for currents we dealt with in Table <ref>-<ref>. § DISCUSSION AND CONCLUSION We have investigated the mass spectra for [Ξ_c^(*)+D^(*)+], [Ξ_cc^(*)+K^(*)+], [Ω_cc^(*)+π^+(ρ^+)] molecular pentaquark states and [cu][cs]d̅ ,[cc][us]d̅ compact pentaquark states in the framework of QCD sum rule. We construct the interpolating pentaquark currents and calculate their two-point correlation functions including perturbative term and various condensate terms. 
With appropriate Borel mass and threshold, we obtain stable sum rules for some currents and extract the corresponding mass spectra listed in Table <ref> and Table <ref>, as well as in Fig. <ref>. In Fig. <ref>, the two-hadron thresholds are also plotted to illustrate whether the pentaquarks are stable or not. Here the Ξ_cc and Ω_cc masses are taken from the Lattice calculation in Refs.<cit.>. We also list their possible strong decay modes in Table <ref>, where we denote the negative parity baryon state H with spin-J as H(J^-). The production of the double charm pentaquark P_cc has already been discussed in Ref.<cit.>. The production mechanism of the strange double charm pentaquark P_ccs is similar, i.e. via the weak decay of the double heavy baryon Ξ_bc or the triply charm baryon Ω_ccc. The masses of strange double charm pentaquarks for the molecular and compact pentaquark currents are listed in Table <ref> and Table <ref>, respectively. In the two tables, only the quantum numbers J^P=1/2^-,3/2^-,5/2^- are considered, as these quantum numbers can be achieved by the considered two-hadron channel in S-wave. The mass regions for these three quantum numbers are 4.2-4.6 GeV, 4.2-4.5 GeV and 4.3 GeV respectively. One should notice that the masses of the currents Ξ_c^+D^∗ +, Ξ_c^'+D^∗ +, Ξ_c^∗ +D^∗ +, Ξ_cc^∗ ++K^∗ 0 and Ω_cc^∗ ++ρ^0 are below their corresponding thresholds, indicating that these strange double charm pentaquarks are stable against their strong decay channels. In contrast, those for the Ξ_cc^++K^0, Ξ_cc^++K^∗ 0, Ξ_cc^∗ ++K^0, Ω_cc^++ρ^0 and Ω_cc^∗ ++π^0 channels are around or above their corresponding thresholds, indicating that they could be broader states that are not easy to detect in experiment. To compare with other works in the literature, we plot Fig. <ref>. In the figure, we also plot the thresholds of the potential strong decay channels for each quantum number. We mainly compare our results with those of Refs. <cit.>. In Ref. <cit.>, the authors employ a color-magnetic interaction in a Schrödinger equation and obtain the mass spectra by the variational method. Their masses, namely around 4.0-4.8 GeV, 4.1-4.8 GeV and 4.7 GeV for J^P=1/2^-,3/2^-,5/2^- respectively, are higher than ours. There are two reasons. One is that they mainly focus on the mass splitting of the pentaquark states and the accurate values need further dynamical calculations, as claimed by the authors in their work. Thus the wider range of mass in Ref. <cit.> is probably due to the overlooked dynamical calculation. Another reason is that they use the variational method, which, as is well known, can only give an upper limit on the mass of a given state. Similarly, in Ref.<cit.>, the authors analyse the pentaquark q^4Q̅ system in a constituent quark model based on the chromomagnetic interaction in both the SU(3) flavor symmetric and SU(3) flavor broken case. They find that the Ξ_ccK could be a stable pentaquark state. In Ref. <cit.>, the authors study the double-heavy pentaquarks in a non-relativistic constituent quark model by solving the multi-body Schrödinger equation including the color Coulomb interaction and spin-dependent interaction. They conclude that the mass is about 4.4-4.5 GeV for the compact pentaquark. Such results are slightly higher than ours, which may possibly be due to the omitted linear confinement potential and the introduction of two free parameters, β and the distance R between the two heavy quarks.
The authors suggest an optimal value of R=1 fm for the double heavy pentaquark, which may lead to a higher mass spectrum. Another potential reason is that they use the variational method, as discussed above. In Ref. <cit.>, the authors employ the double heavy diquark-triquark model, including the interaction between the triquark and the heavy diquark [cc]_3̅ and the interaction within the light diquark [qq^']_3̅. They obtain the masses for ccn̅sn states with J^P=1/2^- and 3/2^- as 4.1±0.3 GeV and 4.6±0.3 GeV, which is consistent with ours. In Ref. <cit.>, the authors study heavy-heavy hadronic molecules by solving the single channel Bethe-Salpeter equation with interactions following the heavy quark spin symmetry, including the interactions from light vector meson exchange. Their results illustrate that the D^(*)Ξ_c^('*) system can be bound easily, which is consistent with our results. It is interesting to find that we obtain almost degenerate masses for all the currents with J^P=5/2^-, as shown in Fig. <ref>. This is because all these currents contain a spin-1 cc diquark, leaving only a potentially small splitting from the light quarks. In addition, one can also see that the mass is far below the thresholds of the corresponding strong decay channels, so the state can be viewed as stable and narrow. As a result, we consider the existence of this state to be a very solid conclusion of our work. The best channel to observe it is its semi-leptonic decay to a double charm baryon, i.e. Ξ_cc^*++ or Ω_cc^*++. We consider the dependence of the mass of the pentaquark P_QQ on the heavy quark mass m_Q and use Eq. (<ref>) to fit our results. The fitting parameters and the predicted double bottom pentaquark masses are listed in Table <ref>-<ref>. The masses of double bottom pentaquarks are around 9.6-10.6 GeV, 10.0-10.2 GeV and 10.0 GeV for J^P=1/2^-,3/2^-,5/2^- respectively. Except for the two states with J^P=1/2^- that are higher than the lowest two-hadron threshold Ω_bbπ (10.33 GeV), all the other double bottom pentaquark states are lower than their corresponding lowest two-hadron thresholds and can be viewed as narrow states. Here the masses of double bottom baryons are taken from the Lattice QCD results in Ref. <cit.>. From the two tables, one can also see the heavy quark spin symmetry emerging in the spectrum. As is well known, the spin interaction of the heavy quarks does not occur at leading order in the Λ_QCD/m_Q expansion, which makes the masses of pentaquarks with the same light quark spin degenerate. Such symmetry can be seen in our results, namely, J_1,2, J_1,8μ and J_1,9μν, which share a spin-1 [us] diquark component, give a degenerate mass of 4.38 GeV. Such behavior can also be seen in the molecular structure currents ξ_1, ξ_3μ in the Ξ_cc^++K^0 structure and ψ_4, ψ_5μν in the Ω_cc^++ρ^0 structure. Comparing Eq. (<ref>) with Eq. (<ref>), the parameter b in Eq. (<ref>) corresponds to the parameter Λ̅ in Eq. (<ref>) and is independent of the heavy quark mass and spin. This feature is reflected by the currents (ψ_1,ψ_3μ), (J_1,3,J_1,5μ) and (J_1,2,J_1,8μ,J_1,9μν). They have the same light quark spin structure, leaving almost the same parameter b. As we discussed in Sec. <ref>, the double charm pentaquark ccusd̅ should be the HADS partner of the singly charm tetraquark usd̅c̅, so we can derive the corresponding mass spectra of the singly charm tetraquark T_cs̅ through Eq. (<ref>) by replacing 2m_Q with m_Q, and we list these corresponding spectra and their spin-parities in Table <ref>-<ref>.
The mass and spin-parity of HADS partner for current ξ_1 is consistent with the recently discovered T_cs̅(2900), which indicates that T_cs̅(2900) could be molecular tetraquark with spin-0 [sd̅] meson component. Furthermore, current (J_1,2, J_1,8μ, J_1,9μν) could be the HADS partner triplet for tetraquark [us]_1[c̅d̅]_1 with mass about 3.1 GeV, current (J_1,3, J_1,5μ) or (ξ_1, ξ_3μ) could be the HADS partner doublet for tetraquark T_cs̅(2900). With the spectra and decay modes in this work, we hope that these double charm and double bottom pentaquarks could be discovered by the LHCb, BelleII, CMS and RHIC collaborations and so on in the near future. § SUMMARY Motivated by the observation of the T_cs̅(2900), we study the mass spectrum of its HADS counter parts, i.e. strange double charm pentaquarks. By constructing currents in the molecular and compact pentaquark pictures, we extract the corresponding mass spectra for quantum numbers J^P=1/2^-, 3/2^-, 5/2^-. The masses for the former two quantum numbers are within the energy region 4.2-4.6 GeV and 4.2-4.5 GeV, respectively. The masses of the three currents (two molecular currents and one compact pentquark current) of the quantum number J^P=5/2^- are degenerate and locate at 4.3 GeV. It is far below the threshold of its two-hadron strong decay channel and can be viewed as a narrow state, making it easily to be measured in experiment. The best observed channel is the semi-leptonic decay to double charm baryon. The corresponding strange double bottom pentaquarks are locate at 9.6-10.6 GeV, 10.0-10.2 GeV and 10.0 GeV for the above mentioned three quantum numbers. This kind of study is useful for the further measurements of strange double charm and double bottom pentaquarks. § ACKNOWLEDGMENTS This work is partly supported by the National Natural Science Foundation of China with Grant Nos. 12375073, 12035007, and 12175318, Guangdong Provincial funding with Grant Nos. 2019QN01X172, Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008, the Natural Science Foundation of Guangdong Province of China under Grant No. 2022A1515011922, the NSFC and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the funds provided to the Sino-German Collaborative Research Center TRR110 “Symmetries and the Emergence of Structure in QCD" (NSFC Grant No. 12070131001, DFG Project-ID 196253076-TRR 110). 100 Gell-Mann:1964ewy M.  Gell-Mann, Phys. Lett. 8, 214 (1964) 1964-Zweig-p- G. Zweig, in: D.Lichtenberg, S.P.Rosen(Eds.), Developments in the Quark Theory of Hadrons, VOL. 1. 1964 - 1978:pp. 22–101, 1964. ParticleDataGroup:2022pth R. L. Workman et al. [Particle Data Group], PTEP 2022, 083C01 (2022) Nielsen:2009uh M. Nielsen,F.S. Navarra,S.H. and Lee, Phys. Rept. 497, 41 (2010) Chen:2016qju H. X. Chen, W. Chen, X. Liu and S. L. Zhu, Phys. Rept. 639, 1-121 (2016) Richard:2016eis J.-M. Richard, Few Body Syst. 57, 1185 (2016) Esposito:2016noz A. Esposito, A. Pilloni and A. D. Polosa, Phys. Rept. 668, 1-97 (2017) Ali:2017jda A. Ali, J.S.  Lange, and S. Stone, Prog. Part. Nucl. Phys. 97, 123 (2017) Guo:2017jvc F. K. Guo, C. Hanhart, U. G. Meißner, Q. Wang, Q. Zhao and B. S. Zou, Rev. Mod. Phys. 90, no.1, 015004 (2018) Albuquerque:2018jkn R.M. Albuquerque, J.M.  Dias, K.P.  Khemchandani, A.M.  Torres, F.S.  Navarra, M.  Nielsen, and C.M.  Zanetti, J. Phys. G 46, 093002 (2019) Liu:2019zoy Y. R. Liu, H. X. Chen, W. Chen, X. Liu and S. L. Zhu, Prog. Part. Nucl. Phys. 107, 237-320 (2019) Brambilla:2019esw N. Brambilla, S. 
Eidelman, C. Hanhart, A. Nefediev, C. P. Shen, C. E. Thomas, A. Vairo and C. Z. Yuan, Phys. Rept. 873, 1-154 (2020) Richard:2019cmi J.-M. Richard, A.  Valcarce, and J. Vijande, Annals Phys. 412, 168009 (2020) Faustov:2021hjs R.N. Faustov, V.O.  Galkin, and E.M. Savchenko, Universe 7, 94 (2021) Chen:2022asf H. X. Chen, W. Chen, X. Liu, Y. R. Liu and S. L. Zhu, Rept. Prog. Phys. 86, 026201 (2023) Meng:2022ozq L. Meng, B. Wang, G. J. Wang and S. L. Zhu, Phys.Rept. 1019 1-149 (2023) LHCb:2015yax R. Aaij et al. [LHCb], Phys. Rev. Lett. 115, 072001 (2015) LHCb:2019kea R. Aaij et al. [LHCb], Phys. Rev. Lett. 122, no.22, 222001 (2019) LHCb:2020jpq R. Aaij et al. [LHCb], Sci. Bull. 66, 1278-1287 (2021) LHCb:2022ogu R. Aaij et al. [LHCb], Phys. Rev. Lett. 131, 031901 (2022) LHCb:2021vvq R. Aaij et al. [LHCb], Nature Phys. 18, no.7, 751-754 (2022) LHCb:2021auc R. Aaij et al. [LHCb], Nature Commun. 13, no.1, 3351 (2022) Dong:2021bvy X.-K. Dong, F.-K. Guo, and B.-S. Zou, Commun. Theor. Phys. 73, no.12, 125201 (2021) Chen:2017vai R. Chen, A. Hosaka, and X. Liu, Phys. Rev. D 96, 116012 (2017) Guo:2017vcf Z.-H. Guo, Phys. Rev. D 96, 074004 (2017) Zhu:2019iwm R. Zhu, X. Liu, H. Huang, and C.-F. Qiao, Phys. Lett. B 797, 134869 (2019) Chen:2021kad R. Chen, N. Li, Z.-F. Sun, X. Liu, and S.-L. Zhu, Phys. Lett. B 822, 136693 (2021) Xing:2021yid Y. Xing, and Y. Niu, Eur. Phys. J. C 81, 978 (2021) Zhou:2018bkn Q.-S. Zhou, K. Chen, X. Liu, Y.-R. Liu, and S.-L. Zhu, Phys. Rev. C 98, 045204 (2018) Wang:2018lhz Z.-G. Wang, Eur. Phys. J. C 78, 826 (2018) Park:2018oib W. Park, S. Cho, and S. H. Lee, Phys. Rev. D 99, 094023 (2019) Ozdem:2022vip U. Özdem, Eur. Phys. J. Plus 137, 936 (2022) Duan:2024uuf F.-B. Duan, Q.-N. Wang, Z.-Y. Yang, X.-L. Chen, and W. Chen, Phys. Rev. D 109, 094018 (2024) LHCb:2017iph R. Aaij et al. [LHCb], Phys. Rev. Lett. 119, no.11, 112001 (2017) LHCb:2018pcs R. Aaij et al. [LHCb], Phys. Rev. Lett. 121, no.16, 162002 (2018) Savage:1990di M.J. Savage, and M.B. Wise, Phys. Lett. B 248, 177 (1990) LHCb:2022lzp R. Aaij et al. [LHCb], Phys.Rev.D 108, 012017 (2023) Eichten:1987xu E. Eichten, Nucl. Phys. B Proc. Suppl. 4, 170 (1988) Lepage:1987gg G. P. Lepage and B. A. Thacker, Nucl. Phys. B Proc. Suppl. 4, 199 (1988) Reinders:1984sr L. J. Reinders, H. Rubinstein, and S. Yazaki, Phys. Rep. 127, 1 (1985) Shifman:1978bx M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, Nucl. Phys. B147, 385 (1979) Zhang:2009iya J.-R. Zhang, and M.-Q. Huang, Chin. Phys. C 33, 1385 (2009) Wang:2015epa Z.-G. Wang, Eur. Phys. J. C 76, 70 (2016) Jido:1996ia D. Jido, N. Kodama, and M. Oka, Phys. Rev. D 54, 4532 (1996) Ohtani:2012ps K. Ohtani, P. Gubler, and M. Oka, Phys. Rev. D 87, 034027 (2013) Narison:1989aq S. Narison, QCD spectral sum rules, volume 26 (1989) Jamin:2001zr M. Jamin, J. A. Oller, and A. Pich, Eur. Phys. J. C 24, 237 (2002) Jamin:1998ra M. Jamin and A. Pich, Nucl. Phys. B Proc. Suppl. 74, 300 (1999) Ioffe:1981kw B. L. Ioffe, Nucl. Phys. B188, 317 (1981), [Erratum: Nucl.Phys.B 191, 591–592 (1981)] Chung:1984gr Y. Chung, H. G. Dosch, M. Kremer, and D. Schall, Z. Phys. C 25, 151 (1984) Dosch:1988vv H. G. Dosch, M. Jamin, and S. Narison, Phys. Lett. B 220, 251 (1989) Khodjamirian:2011ub A. Khodjamirian, T. Mannel, N. Offen, and Y. M. Wang, Phys. Rev. D 83, 094031 (2011) Francis:2018jyb A. Francis, R. J. Hudspith, R. Lewis, and K. Maltman, Phys. Rev. D 99, 054505 (2019) Neubert:1993mb M. Neubert, Phys. Rept. 245, 259 (1994) Luke:1990eg M. E. Luke, Phys. Lett. B 252, 447 (1990) Falk:1992fm A. F. Falk, M. 
Neubert, and M. Luke, Nucl. Phys. B 388,363 (1992) Perez-Rubio:2015zqb P. Pérez-Rubio, S. Collins, and G. S. Bali, Phys. Rev. D 92, 034504 (2015) Mohanta:2019mxo P. Mohanta,and S. Basak, Phys. Rev. D 101, 094503 (2020)
http://arxiv.org/abs/2405.08650v1
20240514142852
An explicit economical additive basis
[ "Vishesh Jain", "Huy Tuan Pham", "Mehtaab Sawhney", "Dmitrii Zakharov" ]
math.CO
[ "math.CO" ]
We present an explicit subset A⊆N = {0,1,…} such that A + A = N and for all ε > 0, lim_N→∞|{(n_1,n_2): n_1 + n_2 = N, (n_1,n_2)∈ A^2}|/N^ε = 0. This answers a question of Erdős. An explicit economical additive basis Vishesh Jain, Huy Tuan Pham, Mehtaab Sawhney, Dmitrii Zakharov 14th May 2024 ======================================================================== § INTRODUCTION Sidon asked (see <cit.>) whether there exists a set A⊆N such that A + A = N (i.e. A is an additive basis of order 2) and for all ε > 0, lim_N→∞|{(n_1,n_2): n_1 + n_2 = N, (n_1,n_2)∈ A^2}|/N^ε = 0. Erdős <cit.> answered Sidon's question by showing that there exists an additive basis of order 2 which, in fact, satisfies the stronger bound lim sup_N→∞|{(n_1,n_2): n_1 + n_2 = N, (n_1,n_2)∈ A^2}|/log N <∞. It is a major open problem whether there exists an additive basis of order 2 for which the factor of log N in the denominator can be replaced by an absolute constant; Erdős and Turán <cit.> famously conjectured that this is impossible. Erdős's proof of the existence of A is randomized; in modern notation, one simply includes the number n in the set A with probability proportional to C(log n)^1/2n^-1/2. Kolountzakis <cit.> derandomized (a variation of) Erdős's proof in the sense that one can deterministically generate the elements of A∩{0,…, N} in time N^O(1). We remark that a number of variants of the original result of Erdős have been developed, including results of Erdős and Tetali <cit.> which prove the analogous result for higher order additive bases and results of Vu <cit.> regarding economical versions of Waring's theorem. The focus of this work is on "explicit" constructions. Erdős several times <cit.> asked for an explicit set A which affirmatively answers Sidon's question and, in fact, offered $100 for a solution <cit.>. We note that if one takes A to be the set of squares, then A+A contains all primes which are 1 (mod 4) and by the divisor bound A+A has multiplicities bounded by N^o(1). Therefore, if one is willing to assume strong number-theoretic conjectures, one can take A to be the set of numbers n which are within O((log n)^O(1)) of a square. The purpose of this note is to present an explicit construction unconditionally. Given a set A which is either a subset of N or Z/q Z for some q, we denote by σ_A(n) the number of representations n = a+a' or n ≡ a+a' (mod q), where a, a' ∈ A. There is an explicit set A ⊂ N and absolute constants C, c > 0 such that for every n ∈ N, we have 1 ≤σ_A(n) ≤ C n^c/loglog n. We briefly discuss the meaning of the word "explicit".
By analogy with the long line of work on explicit Ramsey graphs, we adopt the convention that a construction is explicit if one may test membership n∈ A in (log n)^O(1)-time, i.e. polynomial in the number of digits. In addition to satisfying this guarantee, our construction has the appealing feature that, given a suitable sequence of prime numbers, we can describe A with a short explicit expression (see <ref> and the proof of <ref>). We end this introduction with a brief overview of the (short) proof of <ref>. A crucial ingredient in our work is a construction of Ruzsa <cit.> which (for a prime p ≡ 3,5 (mod 8)) produces a set A_p⊆Z/(p^2Z) such that A_p + A_p = Z/(p^2 Z) and σ_A_p(r) = O(1) for all r ∈Z/(p^2 Z). Given this, the key point is to consider a sequence of such primes p_1<p_2<… and define the set A in terms of its expansion with respect to the generalized base (p_1^2,p_2^2,…), essentially by forcing the i^th digit in the expansion to belong to A_p_i. By working upward from the smallest digit, one can use the property that A_p + A_p covers all residues in Z/(p^2 Z) to see that all natural numbers are represented. The fact that no number is represented too many times is similarly derived using that A_p + A_p is "flat", in particular that the multiplicities are bounded by p^o(1). We remark that generalized bases have been utilized in a variety of questions related to Sidon sets, including works of Ruzsa <cit.>, Cilleruelo, Kiss, Ruzsa and Vinuesa <cit.>, and Pilatte <cit.>. We note that for the purpose of simply obtaining an upper bound of N^o(1) in <ref>, one can actually choose the primes p sufficiently small (e.g. of size logloglog N, say) and find a suitable set A_p by brute-force enumeration. However, Ruzsa's <cit.> construction is "strongly explicit" (i.e. membership can be tested in time O((log p)^O(1))), which allows us to take larger primes and thereby obtain a better upper bound. The computational bottleneck (given Ruzsa's construction) is finding the smallest prime in an interval [N,2N]. Under strong number-theoretic conjectures (e.g. Cramér's conjecture) finding such a prime would take time O((log N)^O(1)) due to the AKS primality testing algorithm <cit.>. Assuming this to be the case, we can choose the primes more carefully to obtain an improved upper bound of exp(O((log N)^1/2)) (see <ref>). The limiting feature of our construction now is that in the top block, one is forced to allow "all possibilities". We believe obtaining an explicit construction achieving σ_A(N) ≤exp((log N)^ε) or better would be interesting. §.§ Notation Throughout this paper we let [N] = {0,…, N-1} and N = {0,1,2,…}. We let ⌊ x⌋ denote the largest integer less than or equal to x. We use standard asymptotic notation, e.g. f ≲ g if |f(n)| ≤ C |g(n)| for some constant C and all large enough n. We usually denote by c, C absolute constants which may change from line to line. § PROOF OF <REF> We will require the notion of a generalized base. Let b = (b_1,b_2,…) be an infinite sequence of integers such that b_i≥ 2. Given any integer x∈N there exists a representation x = a_n… a_1^b with 0≤ a_i≤ b_i - 1 such that x = ∑_i=1^na_i ∏_j<ib_j. Here an empty product (when i=1) is treated as 1. If one requires a_n ≠ 0 (i.e. no leading zeros) the representation is unique. When b_j = g for all j, we recover precisely the base-g expansion.
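A short sketch may help make the generalized-base expansion concrete: the functions below compute the digits of x with respect to b = (b_1, b_2, …) and reconstruct x from them, here with b_k = p_k^2 for a few small primes congruent to 3 (mod 8). This is only an illustration of the definition, not part of the construction itself.

```python
# Digits of x in the generalized base b = (b_1, b_2, ...):
# x = sum_i a_i * prod_{j<i} b_j with 0 <= a_i <= b_i - 1.
def to_generalized_base(x, b):
    digits = []
    for b_i in b:
        digits.append(x % b_i)
        x //= b_i
        if x == 0:
            break
    assert x == 0, "x too large for the given base prefix"
    return digits            # least-significant digit a_1 first

def from_generalized_base(digits, b):
    x, weight = 0, 1
    for a_i, b_i in zip(digits, b):
        x += a_i * weight
        weight *= b_i
    return x

primes = [3, 11, 19, 43]     # small primes congruent to 3 (mod 8), for illustration
b = [p * p for p in primes]  # the choice b_k = p_k^2 used in the proof below
x = 123456
digits = to_generalized_base(x, b)
print(digits)                                    # [3, 44, 113]
assert from_generalized_base(digits, b) == x
```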
A crucial piece in our construction is an “economical” modular additive basis of order 2 over Z/(p^2Z) due to Ruzsa <cit.>; the precise constant M in the result below has been studied in <cit.>. There exists an absolute constant M≥ 1 such that the following holds. Consider a prime p such that p ≡ 3,5 mod 8. There exists a set A_p⊆Z/(p^2 Z) such that for all r∈Z/(p^2Z), we have 1≤σ_A_p(r)≤ M. Furthermore, given p and x∈Z/(p^2 Z), one can check whether x∈ A_p in time O((log p)^O(1)). For completeness, and especially in order to discuss the second part of the statement, we present the proof of <ref> in <ref>. We next need the following basic fact about deterministically finding primes, which is immediate via (say) the Sieve of Eratosthenes. Using a more sophisticated algorithm of Lagarias and Odlyzko <cit.>, one may obtain a run time of O(N^1/2+o(1)) in the statement below; runtimes of the form O(N^o(1)) remain a major open problem. Let N ≥ C_<ref>. Then one may produce the smallest prime p∈ [N,2N] such that p ≡ 3 mod 8 in time O(N^1+o(1)). We are now in position to give the proof of <ref>. Let f: N→N be an arbitrary monotone increasing function such that f(k) ≥ C_0 for some large constant C_0 and all k. Let p_1 < p_2 < … be a sequence of primes such that p_k ≡ 3 mod 8 and p_k ∈ [f(k), 2f(k)) is the least such prime.[That such a prime exists for f(k) larger than an absolute constant follows via the Siegel-Walfisz theorem (see e.g. <cit.>).] Define b = (b_1, b_2, …) by setting b_k = p_k^2 for all k ≥ 1. We are going to define our set A in terms of its expansion in the generalized base b. Namely, for each k≥ 1 let A_k ⊂{0, 1, …, p_k^2-1} be the set from <ref> (where we lift elements mod p_k^2 to their integer representatives) and consider the set A = ⋃_k≥ 1{a_k a_k-1… a_1^b,   a_j ∈ A_j for j =1, …, k-1, and a_k ∈{0, …, b_k-1}}. We begin by showing that A + A = N. For any n ∈N, we (uniquely) write n = n_k … n_1^ b for some k ≥ 1; we will construct the representation n = a+a' for a, a' ∈ A digit by digit. First, since A_1 is an order 2 additive basis mod b_1, there exist a_1, a_1'∈ A_1 such that n_1 ≡ a_1 +a'_1 mod b_1. Let c_1 = ⌊a_1+a'_1/b_1⌋∈{0,1} be the carry bit. Next, there exist a_2, a'_2∈ A_2 such that n_2 - c_1 ≡ a_2 + a_2' mod b_2. As before, define the carry bit c_2 and continue in the same fashion to produce sequences of digits a_1, …, a_k-1 and a'_1, …, a'_k-1 and a carry bit c_k-1∈{0,1}. Finally, let a_k = n_k - c_k-1∈{0, …, b_k-1} and consider the elements a = a_k… a_1^ b,    a' = a'_k-1… a'_1^ b. By construction, we have n = a+a' and a, a' ∈ A. Next, we bound the number of possible representations n = a + a' with a,a' ∈ A. Write n = n_k… n_1^ b, a = a_ℓ… a_1^ b, and a' = a'_ℓ'… a'_1^ b, where ℓ, ℓ' ≤ k are the digit lengths of a and a'. We may assume that ℓ≤ℓ' (this costs us a factor of 2 in the number of representations). Since a, a' ∈ A, we have a_i ∈ A_i for i ≤ℓ-1 and a'_i∈ A_i for i ≤ℓ'-1, but the top digits a_ℓ (respectively, a'_ℓ') can be arbitrary elements of {0,…, b_ℓ-1} (respectively, {0,…, b_ℓ'-1}). By <ref>, we can choose a_1, a_1' such that n_1 ≡ a_1+a_1' mod b_1 in at most M ways. Given a choice of a_1, a_1', there are at most M pairs a_2, a_2' with n_2-c_1 ≡ a_2+a_2' mod b_2, where c_1 = ⌊a_1+a_1'/b_1⌋∈{0,1} is the carry. Continuing in this fashion for j=1, …, ℓ-1, we get that there are at most M^ℓ-1 ways to fix the first ℓ-1 digits a_1,…, a_ℓ-1 and a_1',…, a_ℓ-1'. We can fix a_ℓ and a'_ℓ in at most b_ℓ ways. Given this choice, the digits a'_ℓ+1, …, a'_ℓ' are uniquely determined.
Putting this together, we obtain the following upper bound on the number of representations n=a+a': σ_A(n) ≤ 2 ∑_ℓ=1^k b_ℓ M^ℓ-1≤ 2 b_k M^k ≤ 8 f(k)^2 M^k, where we used b_k = p_k^2 ≤ (2f(k))^2 and M≥ 2. On the other hand, b_j = p_j^2≥ f(j)^2 and so n ≥ b_1 … b_k-1≥ b_⌊ k/2⌋… b_k-1≥ f(⌊ k/2⌋)^k/2. Hence, k ≤2log n/log f(⌊ k/2⌋) and substituting this in <ref>, we obtain the bound σ_A(n) ≤ 4 f(k)^2 n^c / log f(⌊ k/2⌋). Note that the right hand side is n^o(1) for any sufficiently slowly growing function f. Owing to the computational considerations in the next paragraph, we take f(k) = k, which leads to k ≲log n/loglog n and σ_A(n) ≲ n^c/loglog n. Finally, we quickly verify that testing membership a ∈ A can be done in time O((log a)^O(1)). Indeed, given a ∈ N, we can compute all primes p_k for k ≤ clog a in time (log a)^O(1) (<ref>), compute the base b expansion a = a_k… a_1 in time (log a)^O(1), and check that a_j ∈ A_j for j=1, …, k-1 in time O(k (log f(k))^O(1)) using <ref>. §.§ Modular construction and computational details We record the proof of <ref>, following Ruzsa <cit.>. For n ∈ Z, p ∈ N we write (n mod p) for the unique n' ∈{0, 1, …, p-1} congruent to n modulo p. The following is exactly <cit.>. Let p ≡ 3,5 mod 8. Define B_p ⊆{0,…, 2p^2} by B_p = {x + 2p (3x^2 mod p): x∈{0,…,(p-1)}} ∪{x + 2p (4x^2 mod p): x∈{0,…,(p-1)}} ∪{x + 2p (6x^2 mod p): x∈{0,…,(p-1)}}. We have sup_n∈Zσ_B_p(n) ≤ 18 and furthermore, for all 0≤ n<p^2, at least one of the six numbers n-p, n, n+p, n+p^2-p, n+p^2, n+p^2+p appears in the set B_p + B_p. Given <ref>, we prove <ref>. Let B_p' = B_p + {-p, 0, p} (viewed as a subset of Z) and set A_p = {x mod p^2: x∈ B_p'}⊆Z/(p^2 Z). Applying <ref>, we immediately have: * B_p' + B_p' ⊆ [-2p,5p^2] * For all 0≤ n<p^2, one of n or n+p^2 appears in B_p' + B_p' * We have that sup_n∈Zσ_B_p'(n) ≤ 9 · 18 = 162. Noting that n≡ n + p^2 mod p^2, it follows that A_p + A_p = Z/(p^2Z). Furthermore we have that sup_n∈Zσ_A_p(n) ≤ 6 · 9 · 18 = 594; this is immediate as B_p' + B_p' ⊆ [-2p,5p^2] and there are at most 6 representatives in this interval for a given residue modulo p^2. We now discuss the time complexity of testing membership in A_p. Given p and x∈Z/(p^2 Z), we consider the unique representative x' ∈{0,…, p^2-1}. Noting that B_p'⊆ [-p, 2p^2 + p], it suffices by construction to test whether at least one of x' - p^2, x', x'+p^2, x'+2p^2 is in B_p'. This is equivalent to checking whether at least one of x' + {-p^2,0,p^2,2p^2} + {-p,0,p} is in B_p; in particular, one of at most 12 distinct given elements is in B_p. To test whether y∈{0,…,2p^2} is in B_p amounts to testing whether y = z + 2p (3z^2 mod p), y = z + 2p (4z^2 mod p), or y = z + 2p (6z^2 mod p) for an integer z∈{0,…, p-1}. Given y, the “candidate” z is forced to be the unique number in {0,…,p-1} equivalent to y mod p, and we can then simply compute (3z^2 mod p), (4z^2 mod p), and (6z^2 mod p). This procedure clearly takes time O((log p)^O(1)). §.§ Assuming deterministic polynomial time algorithms for locating primes For the remainder of this section we will operate under the following assumption. There exists a deterministic algorithm which outputs the least prime which is 3 mod 8 in the interval [N,2N] in time O((log N)^O(1)). To obtain a better upper bound on σ_A(n), we take the function f in the proof of <ref> to be f(k) = exp(c k). It follows from (<ref>) and (<ref>) that σ_A(n) ≲exp(Ck) and n ≳exp(c k^2), thus giving the bound σ_A(n) ≲exp(C √(log n)).
To test membership, we need to construct primes p of order at most exp(ck) ≈exp(c√(log n)) which can be done in (log n)^O(1)-time under <ref>. § ACKNOWLEDGEMENTS V.J. is supported by NSF CAREER award DMS-2237646. H.P. is supported by a Clay Research Fellowship and a Stanford Science Fellowship. M.S. is supported by NSF Graduate Research Fellowship Program DGE-2141064. D.Z. is supported by the Jane Street Graduate Fellowship. We thank Zach Hunter and Sándor Kiss for carefully reading the manuscript and suggesting improvements and references.
http://arxiv.org/abs/2405.09355v1
20240515140911
Vision-Based Neurosurgical Guidance: Unsupervised Localization and Camera-Pose Prediction
[ "Gary Sarwin", "Alessandro Carretta", "Victor Staartjes", "Matteo Zoli", "Diego Mazzatenta", "Luca Regli", "Carlo Serra", "Ender Konukoglu" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Vision-Based Neurosurgical Guidance G Sarwin et al. Computer Vision Lab, ETH Zurich, Switzerland Department of Neurosurgery, University Hospital of Zurich, Zurich, Switzerland Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, Bologna, Italy Vision-Based Neurosurgical Guidance: Unsupervised Localization and Camera-Pose Prediction This work funded by the SNSF (Project IZKSZ3_218786). Gary Sarwin1 Alessandro Carretta2,3 Victor Staartjes2Matteo Zoli3Diego Mazzatenta3Luca Regli2Carlo Serra2 Ender Konukoglu1 =============================================================================================================================================== Localizing oneself during endoscopic procedures can be problematic due to the lack of distinguishable textures and landmarks, as well as difficulties due to the endoscopic device such as a limited field of view and challenging lighting conditions. Expert knowledge shaped by years of experience is required for localization within the human body during endoscopic procedures. In this work, we present a deep learning method based on anatomy recognition, that constructs a surgical path in an unsupervised manner from surgical videos, modelling relative location and variations due to different viewing angles. At inference time, the model can map an unseen video's frames on the path and estimate the viewing angle, aiming to provide guidance, for instance, to reach a particular destination. We test the method on a dataset consisting of surgical videos of transsphenoidal adenomectomies, as well as on a synthetic dataset. An online tool that lets researchers upload their surgical videos to obtain anatomy detections and the weights of the trained YOLOv7 model are available at: <https://surgicalvision.bmic.ethz.ch>. § INTRODUCTION The environment during an endoscopic procedure poses numerous challenges to the surgeons. Successfully navigating this environment requires extensive experience combined with an extremely high level of anatomical understanding from the video feed. Challenges stem from the inherent nature of the human anatomy, such as non-rigid deformations, the absence of obvious boundaries between anatomical structures, and adverse events like bleeding that can happen during surgery, as well as the imaging device, such as limited field of view and light reflection. A variety of methods have been developed to assist neurosurgeons orient themselves during neurosurgeries. While computer-assisted neuronavigation has been a crucial tool and long-term research focus in the community <cit.>, it relies on preoperative imaging and brain shift hampers this reliability <cit.>. Additional real-time anatomical guidance can be achieved through interoperative MRI <cit.>, ultrasound <cit.>, the use of fluorescent substances <cit.>, awake surgery <cit.>, and electrophysiological neuromonitoring <cit.>. These techniques are efficient, relying on physical traits, however, also costly as they demand proficiency in a new imaging modality and, more importantly, requiring temporary surgery halts or instrument retractions for intraoperative information <cit.>. The pursuit of a more cost-effective real-time solution, independent of additional machinery, coupled with advancements in deep learning techniques, has propelled the development of vision-based localization methods. Various approaches, including structure from motion and simultaneous localization and mapping (SLAM) <cit.>, aim for 3D map reconstruction based on feature correspondence. 
<cit.>. Many vision-based localization methods rely on distinctive landmark positions and tracking them across frames for localization. Factors inherent to endoscopic neurosurgical videos, such as low texture, a lack of distinguishable features, non-rigid deformations, and disruptions <cit.>, degrade their performance. Consequently, alternative solutions are imperative to tackle these challenges. Despite recent progress, the task of detecting or segmenting anatomical structures, which could serve as a foundation for an alternative approach to neuronavigation, remains under-explored and poses an ongoing challenge. It is important to note that recognizing anatomy in surgical videos is more challenging than detecting surgical tools, given the absence of clear boundaries and variations in color or texture between anatomical structures. Interest in applying machine learning to neurosurgery has increased, especially in pituitary surgery, and first works have explored the possibility of detection and segmentation of anatomical structures <cit.>. <cit.> reported promising results of anatomical structure detection, and additionally, demonstrated a way to use the detections and their constellation to construct a common surgical path in an unsupervised manner, allowing relative localization during a surgery. Furthermore, in <cit.> the authors proposed a multi-task network to identify critical structures during the sellar phase of pituitary surgery. Their model, PAINet, jointly predicts segmentation of the two largest structures and centroids of the smaller and less frequently occurring structures. In this work, we extend the model proposed in <cit.> to include the viewing direction of the endoscope in the construction of the surgical path and mapping on the path. The viewing direction plays an important role in navigation during surgery since it heavily influences the structures and their constellations viewed by the endoscope. Thus, including it in the model has the potential to yield better surgical paths and more accurate guidance. Importantly, we include the viewing direction, represented as rotational angles around the x and y axis (i.e., pitch and yaw), in the model also in an unsupervised way without assuming the presence of any camera parameters for training. The unsupervised learning of the surgical path and viewing direction is facilitated by an Autoencoder (AE) architecture using a training set of videos. The AE is trained to reconstruct bounding boxes of a given frame based on a constrained latent representation. At inference time, the latent representation of a new frame provides relative positioning and viewing angles. Unlike approaches reconstructing a 3D environment and relying on landmarks for localization, our method aims to construct a common surgical roadmap and localizing within that map relying on bounding box detections. Relying on semantic bounding box detections, eliminates the need for tracking arbitrary landmarks, facilitating handling disruptions during surgery, such as bleeding, flushing and retractions. The learned mapping relies on the principle that the visibility, relative sizes and constellations of anatomical structures, which can be inferred from bounding box detections, strongly correlate with the position along the surgical trajectory and the viewing angle of the endoscope. The proposed approach is demonstrated on the transsphenoidal adenomectomy procedure, chosen for its relatively one-dimensional surgical path. 
This choice makes it well-suited for proving the concept of our suggested method. § METHODS §.§ Problem Formulation and Approach Let 𝐒_t denote a sequence of endoscopic video frames 𝐱_t-s:t, as illustrated in Figure <ref>. Here, s denotes the sequence length, and 𝐱_t ∈ℝ^w × h × c represents the t-th frame with w, h, and c indicating the width, height, and number of channels, respectively. Our primary goal is to embed the sequence 𝐒_t into a 3D latent dimension represented by the variable 𝐳=[z^1, z^2, z^3]. Our approach involves identifying anatomical structures in the sequence 𝐒_t and mapping the frame 𝐱_t to the latent space using the identified structures. Notably, z^1 serves as an implicit surgical atlas, signifying a path from the beginning of the procedure until the final desired anatomy. It is implicit because position information along the surgical path is unavailable for constructing the latent space. z^2 and z^3 represent pitch and yaw angles, respectively, forming a rotation matrix to predict the endoscope's viewing direction. Note that depth and camera pose information is rarely available in standard configurations, either because it is inaccessible or the functionality is missing. Extracting and collecting this data from medical devices can be cumbersome, or not possible due to the highly regulated environment and restrictions concerning modifications to these devices. Therefore, these are modeled and learned in an unsupervised way. To achieve this, object detection is performed on all frames 𝐱_t-s:t in 𝐒_t, resulting in a sequence of detections 𝐜_t-s:t denoted as 𝐂_t. A detection 𝐜_t ∈ℝ^n × 5 includes binary variables 𝐲_t = [y_t^0,…,y_t^n] ∈0,1^n indicating presence of structures in the t-th frame and bounding box coordinates 𝐛_t = [𝐛_t^0,…,𝐛_t^n]^T ∈ℝ^n × 4. An autoencoder architecture is employed to embed 𝐂_t into 𝐳_t=[z_t^1, z_t^2, z_t^3]. The encoder maps 𝐂_t to 𝐳_t, and the decoder reconstructs 𝐜̂_t = (ŷ_t, b̂_t), representing detections for the last frame in a given sequence. z^1_t is used to generate ŷ_t and b̂^I_t=[b̂_t^I,0,…,b̂_t^I,n]^T, the bounding box location reconstruction assuming pitch and yaw angles are zero, i.e., in a centered view. z^2_t and z^3_t are used to build a rotation matrix 𝐑_t, as shown in Figure <ref>, to model variability due to differences in viewing angles. The final bounding box reconstructions, b̂_t, are obtained by rotating the bounding-box center coordinates of b̂^I_t with 𝐑_t and keeping the same bounding-box size in the rotated position. The model parameters are updated to ensure that the prediction with the rotated center coordinates, 𝐜̂_t, fits 𝐜_t in a training set, as elaborated in the following. §.§ Object Detection Our method involves the identification of anatomical structures by detecting them as bounding boxes in video frames. For this purpose, we utilize the YOLOv7 network in the object detection phase of our pipeline <cit.>. The network is trained on endoscopic videos from a training set, where frames are sparsely labeled with bounding boxes. Subsequently, the trained network is applied to all frames of the training videos to generate detections for these classes on every frame. These detections serve as the input for training the subsequent autoencoder, which models the embedding. For further processing, the instrument class was omitted as this class does not necessarily correlate with the position on the surgical path. 
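To make the camera-pose component of the problem formulation above concrete, the sketch below maps latent pitch and yaw values to a rotation matrix and applies it to bounding-box centers. The lifting of normalized image coordinates to unit-depth rays, the re-projection, and the rotation order are our own assumptions for illustration; they are not specified in the text, and the actual model is trained end to end.

```python
import numpy as np

def rotation_from_pitch_yaw(z2, z3):
    """Map latent outputs z2, z3 in [-1, 1] to pitch/yaw in [-90, 90] degrees
    and return a combined rotation matrix (the order R_yaw @ R_pitch is assumed)."""
    pitch, yaw = z2 * np.pi / 2.0, z3 * np.pi / 2.0
    r_x = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(pitch), -np.sin(pitch)],
                    [0.0, np.sin(pitch), np.cos(pitch)]])
    r_y = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(yaw), 0.0, np.cos(yaw)]])
    return r_y @ r_x

def rotate_box_centers(centers, rot):
    """Rotate normalized box centers (N x 2, origin at the image center).
    Centers are lifted to rays with unit depth and re-projected; box sizes
    are left unchanged, mirroring the description in the text."""
    rays = np.hstack([centers, np.ones((len(centers), 1))])  # (x, y, 1)
    rotated = rays @ rot.T
    return rotated[:, :2] / rotated[:, 2:3]                  # perspective re-projection

# Toy usage: one box at the image center viewed with a small yaw offset.
rot = rotation_from_pitch_yaw(0.0, 0.1)
print(rotate_box_centers(np.array([[0.0, 0.0]]), rot))
```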
§.§ Embedding and Camera-Pose The encoder of the AE comprises multi-head attention layers followed by fully connected layers, ultimately reducing the input to 3 latent dimensions, where z_t^1 represents the position on the surgical path, and z_t^2 and z_t^3 represent pitch and yaw angles, which represent the rotation angles of the camera with respect to a centered view. A transformer-based encoder is employed to encode the temporal information in the sequence of detections. The decoder consists of two fully connected networks, the first generating the class probabilities 𝐲̂_t of 𝐜̂_t, and the second generating the corresponding bounding boxes 𝐛̂^I_t from z_t^1 in the centered view. To obtain bounding box reconstructions in the observed view, the center coordinates of the predicted bounding boxes 𝐛̂^I_t are rotated by multiplying them with the rotation matrix 𝐑̂_t, as explained in Section <ref>, to obtain 𝐛̂_t, the second component of 𝐜̂_t. The AE is designed to reconstruct only the last frame 𝐜_t in 𝐂_t since z_t^1 is intended to correspond to the current position. However, it considers s previous frames to provide additional information in determining the latent representation 𝐳_t of 𝐱_t. The loss function consists of a classification loss and a bounding box loss, the latter being calculated only for the classes present in the ground truth. In the current setup, the fully connected network that produces the bounding boxes can classify any view as the centered view, since the bounding box coordinates can be rotated to fit the input even if the centered view is not at pitch and yaw angles of zero. Therefore, for increased interpretability, we enforce that the output is centered by feeding the output of the bounding box network together with 𝐲_t once again into the encoder by stacking the output s-times to achieve the same input size, and add the predicted ẑ_t^2 and ẑ_t^3, which should both be zero if the view is centered, to the loss function. This leads to the objective to minimize for the t-th frame in the m-th training video: ℒ_m, t= -∑_i=1^n(y_m, t^i log(ŷ_m, t^i)+(1-y_m, t^i) log(1-ŷ_m, t^i)) +∑_i=1^n y_m, t^i|𝐛_m, t^i-𝐛̂_m, t^i|+√((ẑ_m, t^2)^2)+√((ẑ_m, t^3)^2), where |·| denotes the l_1 loss. The total training loss is then the sum of ℒ_m,t over all frames and training videos. The proposed loss function can be viewed as maximizing the joint likelihood of a given 𝐲 and 𝐛 with a probabilistic model utilizing a mixture model for the bounding boxes. § EXPERIMENTS AND RESULTS §.§ Datasets In this work two datasets were utilized, a medical dataset and a synthetic dataset. Medical Dataset: The dataset utilized for object detection contains 166 videos documenting transsphenoidal adenomectomy procedures in 166 patients. These videos, captured using a variety of endoscopes across multiple facilities over 10 years, were made accessible under general research consent. Expert neurosurgeons annotated the videos, encompassing 16 distinct classes, namely, 15 anatomical structure classes and one class for surgical instruments. The dataset encompasses approximately 19,000 labeled frames. Each class has one instance per video, except for the instrument class since various instruments are categorized under the same class. Among the 166 videos, 146 were allocated for training and validation purposes, while the remaining 20 were purposed for testing. 
Despite the utilization of data from various centers, it is important to acknowledge potential biases introduced by the geographical vicinity of these centers. Synthetic Dataset: For quantitative analysis, a synthetic dataset was created in Blender <cit.> with ground truth. A 3D environment was built to represent a surgical path with various structures. Ground-truth object detection labels could be extracted from the software. Eight different objects were modeled, akin to different anatomical structures, with a single instance per object to emulate the medical setting. To train the AE, a video moving through the environment was recorded with random viewing directions, moving forward and backward several times. In total, the data consists of 16502 frames with corresponding ground-truth object detection labels. The model is depicted in Figure <ref>. §.§ Implementation Details The YOLO network was trained with identical parameters and implementation as in <cit.>, which follows standard implementation as reported in <cit.>. The AE integrates a transformer encoder comprising six transformer encoder layers, each with five heads, and an input size of s × 15 × 5, where s is established as 64 frames. Following this, the output dimension of the transformer encoder undergoes reduction through three fully connected layers to sizes of 512, 256, and 128, respectively, employing rectified linear unit (ReLU) activation functions between layers. Subsequently, the final fully connected layer reduces the dimensionality to 𝐳_t ∈ℝ^3, employing a sigmoid activation function to obtain z_t^1, and a tanh activation function for z_t^2 and z_t^3. Moreover, the decoder composed of two fully connected networks, namely the class decoder and bounding box decoder, have two fully connected layers, elevating the dimensionality of the latent variable z_t^1 from 1D to 8, to 𝐧, and from 1D to 32, to 𝐧× 4, correspondingly. The initial layer of both of those networks is succeeded by a ReLU activation function, while the final layer adopts a sigmoid activation function. Furthermore, for z_t^2 and z_t^3, an output of -1 and 1 represents a -90 and 90 degrees rotation around the respective axis, these outputs are then converted to radians for the construction of the rotation matrix. For the AE's training, the AdamW optimizer <cit.> is employed in conjunction with a warm-up scheduler, which linearly increases the learning rate from 0 to 1e-4 over a span of 60 epochs. The model is trained for a total of 2500 epochs for the synthetic, and 270 epochs for the medical dataset. The model has approximately 4.6M parameters in the setup for the medical dataset. §.§ Results Anatomical Detection: The YOLOv7 trained on the medical dataset reaches a mean average precision 53.4% at a 0.5 intersection over union threshold as also was reported in <cit.>. Quantitative Assessment of the Embedding: Due to the absence of all camera parameters, the exact modeling of rotation is an ill-posed task. However, we show that even though we introduce significant simplifications and substitute the homography matrix with a straightforward rotation matrix, we can still approximate the angles of rotation. We test on 1022 sequences which were recorded in Blender under random viewing angles moving through the model. We report a mean error in angle predictions of 0.43, and 0.69 with a standard deviation of 2.38 and 1.74 degrees, for the pitch and yaw angle, respectively. 
Additionally, we examine the correlation between the predicted location along the surgical path with the real depth of the synthetic model for a video traveling through the model. We do this for both our AE and the model proposed in <cit.>. We expect our AE that takes rotation into account to embed space more representative compared to an AE that does not consider rotation. More specifically, our AE should be able to map depth to the surgical path, and account for different views by taking the rotation into account, whereas the previous model that only maps to the surgical path without rotation would need to occupy more space in the 1D latent space to describe various views of the same location and thereby describing depth less accurately. Our AE achieved a Pearson correlation coefficient of 0.97 whereas the previous model proposed in <cit.> achieved 0.94. Qualitative Assessment of the Embedding: Of greater importance is that the model can tell the surgeon whether the viewing direction is right, or whether the camera should be pointed in another direction, and additionally can tell the surgeon whether in that direction should be looked more or less. In Figure <ref> sequences are shown for the medical dataset, together with the predictions 𝐳_t. The arrows plot the negative predictions of z_t^2 and z_t^3 to visualize the general direction the camera is pointed in, and the magnitude. For illustrative purposes a camera model is depicted to visualize the camera's orientation. The sequences visualize the ability of the model to extract various viewing directions from the images which correspond to the movement of the camera between the images in one sequence and that the predicted viewing directions are in line with the movement of visual landmarks between images within a sequence. The idea is that by having a reference visualization of the desired or planned viewing direction that is required to find a certain anatomical structure or for orientation purposes, i.e. an arrow as in Figure <ref>, the surgeons can adjust the current orientation of the endoscope with respect to the reference arrow, without the need for exact angles, but rather which direction it should be pointed in, and more or less so. Videos visualizing the results are supplied in the supplementary material. Finally, in the bottom row of Figure <ref>, we depict images that are mapped to the same location by our AE. These images show the same anatomical position during different stages of the surgery as well as from different viewpoints. When encoding the same images using the AE proposed in <cit.>, these images are mapped within a range of 0.81% of the latent space instead of a single point. This demonstrates that the proposed AE that takes rotation into account can embed space more coherently and confirms the results of quantitative correlation experience. § CONCLUSION In this study, we introduced an approach to neuronavigation leveraging deep learning techniques. Our proposed method is image-based and utilizes bounding box detections of anatomical structures to orient itself within a surgical path learned from a dataset of surgical videos, and additionally provides feedback on the direction the camera, i.e. endoscope, is pointing in. This is facilitated through an AE architecture trained without supervision. Our approach enables the localization and prediction of anatomical structures along the surgical trajectory, both forward and backward, similar to functionalities seen in mapping applications. 
However, our work also comes with certain limitations. Primarily, we have confined our focus to a single surgical procedure in this preliminary investigation. Extending to other surgical procedures is a goal for future research. Additionally, our proposed method can be integrated with SLAM techniques, as well as leveraging guidance from pre- or intra-operative MRI. Both of these extensions constitute areas of future exploration. Another limitation lies in the fact that the latent dimension only offers relative positional and angular encoding. To surpass this limitation, additional labeling of the actual positions along the surgical path may be necessary, as well as camera parameters. splncs04 10 Hartl2013WorldwideSurgery Härtl, R., Lam, K.S., Wang, J., Korge, A., Kandziora, F., Audigé, L.: Worldwide survey on the use of navigation in spine surgery. World neurosurgery 79(1), 162–172 (1 2013) Orringer2012NeuronavigationTrends Orringer, D.A., Golby, A., Jolesz, F.: Neuronavigation in the surgical management of brain tumors: current and future trends. Expert review of medical devices 9(5), 491–500 (9 2012) Iversen2018AutomaticNeuronavigation Iversen, D.H., Wein, W., Lindseth, F., Unsgård, G., Reinertsen, I.: Automatic Intraoperative Correction of Brain Shift for Accurate Neuronavigation. World neurosurgery 120, e1071–e1078 (12 2018) Berkmann2014IntraoperativeAdenoma Berkmann, S., Schlaffer, S., Nimsky, C., Fahlbusch, R., Buchfelder, M.: Intraoperative high-field MRI for transsphenoidal reoperations of nonfunctioning pituitary adenoma. Journal of neurosurgery 121(5), 1166–1175 (11 2014) Staartjes2021MachineSurgery Staartjes, V.E., Volokitin, A., Regli, L., Konukoglu, E., Serra, C.: Machine Vision for Real-Time Intraoperative Anatomic Guidance: A Proof-of-Concept Study in Endoscopic Pituitary Surgery. Operative neurosurgery (Hagerstown, Md.) 21(4), 242–247 (10 2021) Burkhardt2014High-frequencyApproach Burkhardt, J.K., Serra, C., Neidert, M.C., Woernle, C.M., Fierstra, J., Regli, L., Bozinov, O.: High-frequency intra-operative ultrasound-guided surgery of superficial intra-cerebral lesions via a single-burr-hole approach. Ultrasound in medicine & biology 40(7), 1469–1475 (2014) Ulrich2012ResectionUltrasound Ulrich, N.H., Burkhardt, J.K., Serra, C., Bernays, R.L., Bozinov, O.: Resection of pediatric intracerebral tumors with the aid of intraoperative real-time 3-D ultrasound. Child's nervous system : ChNS : official journal of the International Society for Pediatric Neurosurgery 28(1), 101–109 (1 2012) Stummer2017RandomizedGliomas Stummer, W., Stepp, H., Wiestler, O.D., Pichlmeier, U.: Randomized, Prospective Double-Blinded Study Comparing 3 Different Doses of 5-Aminolevulinic Acid for Fluorescence-Guided Resections of Malignant Gliomas. Neurosurgery 81(2), 230–239 (8 2017) Hadjipanayis2015WhatGliomas Hadjipanayis, C.G., Widhalm, G., Stummer, W.: What is the Surgical Benefit of Utilizing 5-Aminolevulinic Acid for Fluorescence-Guided Surgery of Malignant Gliomas? Neurosurgery 77(5), 663–673 (8 2015) Hervey-Jumper2015AwakePeriod Hervey-Jumper, S.L., Li, J., Lau, D., Molinaro, A.M., Perry, D.W., Meng, L., Berger, M.S.: Awake craniotomy to maximize glioma resection: methods and technical nuances over a 27-year period. Journal of neurosurgery 123(2), 325–339 (8 2015) DeWittHamer2012ImpactMeta-analysis De Witt Hamer, P.C., Robles, S.G., Zwinderman, A.H., Duffau, H., Berger, M.S.: Impact of intraoperative stimulation brain mapping on glioma surgery outcome: a meta-analysis. 
Journal of clinical oncology : official journal of the American Society of Clinical Oncology 30(20), 2559–2565 (7 2012) Sanai2008FunctionalResection Sanai, N., Mirzadeh, Z., Berger, M.S.: Functional outcome after language mapping for glioma resection. The New England journal of medicine 358(1), 18–27 (1 2008) Staartjes2020MachineSurvey Staartjes, V.E., Stumpo, V., Kernbach, J.M., Klukowska, A.M., Gadjradj, P.S., Schröder, M.L., Veeravagu, A., Stienen, M.N., van Niftrik, C.H., Serra, C., Regli, L.: Machine learning in neurosurgery: a global survey. Acta Neurochirurgica 162(12), 3081–3091 (12 2020) Grasa2011EKFSequences Grasa, O.G., Civera, J., Montiel, J.M.: EKF monocular SLAM with relocalization for laparoscopic sequences. Proceedings - IEEE International Conference on Robotics and Automation pp. 4816–4821 (2011) Ozyoruk2021EndoSLAMVideos Ozyoruk, K.B., Gokceler, G.I., Bobrow, T.L., Coskun, G., Incetan, K., Almalioglu, Y., Mahmood, F., Curto, E., Perdigoto, L., Oliveira, M., Sahin, H., Araujo, H., Alexandrino, H., Durr, N.J., Gilbert, H.B., Turan, M.: EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos. Medical Image Analysis 71, 102058 (7 2021) Mahmoud2016ORBSLAM-basedReconstruction Mahmoud, N., Cirauqui, I., Hostettler, A., Doignon, C., Soler, L., Marescaux, J., Montiel, J.M.: ORBSLAM-based Endoscope Tracking and 3D Reconstruction. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 10170 LNCS, 72–83 (8 2016) Leonard2018EvaluationData Leonard, S., Sinha, A., Reiter, A., Ishii, M., Gallia, G.L., Taylor, R.H., Hager, G.D.: Evaluation and Stability Analysis of Video-Based Navigation System for Functional Endoscopic Sinus Surgery on In Vivo Clinical Data. IEEE Transactions on Medical Imaging 37(10), 2185–2195 (10 2018) Sarwin2023Unsupervised Sarwin, G., Carretta, A., Staartjes, V., Zoli, M., Mazzatenta, D., Regli, L., Serra, C., Konukoglu, E.: Live image-based neurosurgical guidance and roadmap generation using unsupervised embedding. Information Processing in Medical Imaging. IPMI 2023. Lecture Notes in Computer Science, 13939, 107–118 (06 2023) Das2023Multitask Das, A., Khan, D.Z., Williams, S.C., Hanrahan, J.G., Borg, A., Dorward, N.L., Bano, S., Marcus, H.J. and Stoyanov, D.: A Multi-task Network for Anatomy Identification in Endoscopic Pituitary Surgery. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, 14228, 472–482 (10 2023) Wang2022YOLOv7:Detectors Wang, C.Y., Bochkovskiy, A., Liao, H.Y.M.: YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors (7 2022) Blender Community, B. O. (2018). Blender - a 3D modelling and rendering package. Stichting Blender Foundation, Amsterdam. Retrieved from http://www.blender.org Loshchilov2017DecoupledRegularization Loshchilov, I., Hutter, F.: Decoupled Weight Decay Regularization. 7th International Conference on Learning Representations, ICLR 2019 (11 2017)
http://arxiv.org/abs/2405.08685v1
20240514150549
A library of meteoroid environments encountered by spacecraft in the inner solar system
[ "Althea V. Moorhead", "Katie Milbrandt", "Aaron Kingery" ]
astro-ph.EP
[ "astro-ph.EP", "astro-ph.IM" ]
NASA Meteoroid Environment Office, Marshall Space Flight Center, Huntsville, AL 35812, USA Department of Aerospace Engineering, Auburn University, Auburn, Alabama 36849, USA ERC, Inc., Marshall Space Flight Center, Huntsville, Alabama 35812 NASA's Meteoroid Engineering Model (MEM) is designed to provide aerospace engineers with an accurate description of potentially hazardous meteoroids. It accepts a spacecraft trajectory as input and its output files describe the flux, speed, directionality, and density of microgram- to gram-sized meteoroids relative to the provided trajectory. MEM provides this information at a fairly fine level of detail in order to support detailed risk calculations. However, engineers and scientists in the very early planning stages of a mission may not yet have developed a trajectory or acquired the tools to analyze environment data. Therefore, we have developed an online library of sample MEM runs that allow new users or overloaded mission planners to get a quick feel for the characteristics of the meteoroid environment. This library provides both visualizations of these runs and input files that allow users to replicate them exactly. We also discuss the number of state vectors needed to obtain an accurate representation of the environment encountered along our sample trajectories, and outline a process for verifying that any given trajectory is adequately sampled. meteoroids space environments risk assessment § INTRODUCTION In order to design reliable spacecraft, aerospace engineers must assess and mitigate a wide variety of risks.[https://fireballs.ndc.nasa.gov/mem/library/] A full understanding of all risks – which can include phenomena such as spacecraft charging, corrosion, and impacts from orbital debris and meteoroids – would require an impractically extensive knowledge of a wide variety of scientific and engineering disciplines. As a result, those conducting risk assessments must content themselves with a surface-level understanding of most fields and consult with experts when necessary. Primers and libraries, if well designed, can assist the risk assessment process by quickly familiarizing users with the basics of a field and reducing their dependence on external subject matter experts. The members of NASA's Meteoroid Environment Office (MEO) often serve as subject matter experts on the topic of meteoroid impacts. We advise programs on the basic characteristics of the meteoroid environment; issue meteor shower forecasts and advisories <cit.>; and develop and distribute the Meteoroid Engineering Model (MEM), a stand-alone piece of software that generates meteoroid environment data that is specific to a user's spacecraft trajectory <cit.>. We have found that users or would-be users of MEM have widely varying levels of familiarity with the risk posed by meteoroid impacts and ballistic limit calculations. The missions which these users are planning can also vary dramatically in type and maturity. In many cases, users may not have a quantitative description of their spacecraft's trajectory, which makes early risk assessment very difficult. In this paper, we present a library of sample spacecraft trajectories and corresponding MEM run results that is designed to help users conducting early risk assessments or learning to use the software. This library allows such users to quickly assess the typical meteoroid flux for various orbits and visualize the distribution of meteoroid speeds, directionality, and bulk particle density.
In some cases, the user may find that one of our sample trajectories is similar to their own planned mission trajectory, and could potentially use our data files to conduct a preliminary risk assessment. We do not construct original trajectories; instead, we sample known trajectories of existing spacecraft at a number of different locations in the inner solar system (see Section <ref>). We discuss our trajectory sampling in Section <ref> and present a method for determining the trajectory resolution needed to attain a given precision in the meteoroid fluxes reported by MEM (Section <ref>). Finally, we describe the visualizations and data provided in our online library and discuss their possible applications (Section <ref> and <ref>). § TRAJECTORY SELECTION We downloaded or generated trajectories for at least one spacecraft orbiting each planet in the inner solar system as well as the Moon. We have also included one trajectory that does not orbit any planet, but instead corresponds to the route by which the Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft traveled from Earth to Mars. Most of our selected trajectories are those of real US government (NASA or NOAA) spacecraft whose ephemerides are available through the Jet Propulsion Laboratory's Horizons ephemeris service.[https://ssd.jpl.nasa.gov/?horizons] We chose these particular spacecraft because they follow orbits that could be considered representative of a class of orbits: for instance, Aqua is an example of a satellite on a Sun-synchronous orbit. In one case, no suitable trajectory was available from Horizons: we therefore instead sampled a near-rectilinear halo orbit (NRHO) generated by <cit.>. §.§ Earth-orbiting trajectories Most spacecraft orbit Earth; therefore, we have opted to generate a larger number of sample trajectories for Earth-orbiting spacecraft than for those orbiting other bodies. §.§.§ ISS: low Earth orbit We selected the International Space Station (ISS) as an example spacecraft in low Earth orbit (LEO). LEO describes orbits with altitudes within 2000 km of Earth's surface as well as the region of space where these orbits lie. This is a heavily populated region of space and is often chosen for communications or Earth observation satellites. The ISS orbits at an altitude just over 400 km above Earth's surface and has an orbital period between 90 and 93 minutes; its trajectory is depicted in Fig. <ref>.[Additional views are available at https://fireballs.ndc.nasa.gov/mem/library/iss.html] Its altitude is small compared to Earth's radius, and this tends to protect the nadir (or Earth-facing) side of the spacecraft from meteoroid impacts (readers can jump ahead to Section <ref> for a visualization). This effect is known as planetary shielding and is significant when the spacecraft is within a planetary radius of the planet's surface <cit.>. §.§.§ Aqua: Sun-synchronous orbit We selected Aqua as an example spacecraft in Sun-synchronous orbit (SSO). SSO is a type of near-polar LEO with an altitude and inclination chosen such that the orbit precesses around Earth once per year. This tends to preserve the angle between the orbit and the Sun-Earth vector. Such satellites will therefore always view locations on Earth's surface at the same illumination angle. This orbit is popular with imaging and weather satellites. Aqua's trajectory is depicted in Fig. <ref>. Aqua is part of NASA's Earth Observing System (EOS). It has an orbital altitude of about 702-703 km and an orbital period of 99 minutes. 
Like the ISS, Aqua's nadir surface will be largely shielded from the meteoroid environment by Earth. §.§.§ GOES-14: Geostationary orbit A geostationary or geosynchronous equatorial orbit (GEO) is one in which the spacecraft's mean motion matches Earth's rotational period. A spacecraft in GEO orbits Earth exactly once per sidereal day and thus is always positioned directly above the same point on Earth's surface. GEO orbits are used by one-way communications and weather satellites, but cannot communicate with latitudes more than 81^∘ from the equator. The GOES-14 satellite is part of the National Oceanic and Atmospheric Administration (NOAA)'s Geostationary Operational Environmental Satellite (GOES) system. It generally maintains a geographical longitude of ∼ 105^∘W. With a standard GEO altitude of 35,786 km (5.6 Earth radii), GOES-14 benefits very little from planetary shielding. The total flux on the spacecraft varies very little, although the apparent directionality changes as the spacecraft changes its orientation relative to Earth's direction of motion about the Sun. The trajectory is depicted in Fig. <ref>. §.§.§ JWST: Sun-Earth L2 The James Webb Space Telescope (JWST) is a large, multipurpose infrared space observatory. It was launched in late 2021 and its first images were released in July 2022. The spacecraft follows a halo orbit around the Sun-Earth L2 Lagrange point and thus remains near Earth. The trajectory is depicted in Fig. <ref>. It would be incorrect to say that JWST follows an interplanetary trajectory, as it does not travel between planets. However, JWST does not orbit Earth and will not “see” the effects of Earth on the meteoroid environment. We have therefore placed JWST in this section because it lies at the edge of Earth's sphere of gravitational influence and the environment it encounters is best described as interplanetary (that is, it is relatively unaffected by gravitational focusing and planetary shielding). JWST's orbital motion about L2 is quite slow and therefore we expect it to encounter a fairly constant flux. §.§ Moon-orbiting trajectories §.§.§ LADEE: low lunar orbit The Lunar Atmosphere and Dust Environment Explorer (LADEE) was a NASA mission to study the lunar exosphere. It orbited the Moon for just over 6 months in a retrograde low lunar orbit. In general, lunar orbits are unstable and it is apparent from LADEE's ephemeris that the spacecraft adjusted its orbital period approximately 20 times in a 6 month period. The trajectory is depicted in Fig. <ref>. LADEE's altitude ranged from just a few km above the lunar surface to 148 km. We therefore expect its nadir surfaces to be fairly protected from the meteoroid environment; note, however, that dust particles originating from the lunar surface are not modeled in MEM. Users will need to use a separate model for these additional particles <cit.>. §.§.§ Near-rectilinear halo orbit A near-rectilinear halo orbit (NRHO) is being considered for NASA's Lunar Gateway. This orbit is so-named because its orbital eccentricity relative to the Moon varies, and some portions of the trajectory are therefore straighter than that of an ellipse. No spacecraft have used a lunar NRHO trajectory in the past, and we therefore cannot use our usual approach of downloading a sample trajectory from Horizons. Instead, we extracted state vectors using the spy utility[https://naif.jpl.nasa.gov/naif/utilities.html] from a binary Spice file prepared by <cit.>. The trajectory is depicted in Fig. <ref>. 
In our example NRHO, the spacecraft circles the Moon, which in turn orbits Earth, which in turn orbits the Sun. In order to fully cover all possible Sun-Earth-Moon-spacecraft positions, we would need to sample over a 19-year Metonic cycle. However, an object on an NRHO spends most of its time at large selenocentric distances, moving slowly relative to the Moon. The overall meteoroid environment is therefore fairly stable and we thus restrict our analysis to a one-year period. Please note that there is a “family” of NRHO orbits around any L1, L2, or L3 Lagrange point. This orbit and the corresponding meteoroid environment should be considered illustrative of the Earth-Moon L2 NRHO family; the environment encountered along another member of the orbit family will differ from that in our library. §.§ Orbits around other planets MEM 3 accepts trajectories with heliocentric distances between 0.2 and 2 au; within this range, it automatically detects nearby planets and accounts for their gravitational influence on the meteoroid environment. Thus, the software is capable of handling trajectories of spacecraft orbiting Mercury, Venus, and Mars. §.§.§ MESSENGER: Mercurian orbit MESSENGER (Mercury Surface, Space Environment, Geochemistry, and Ranging) was a NASA mission to study Mercury's surface and magnetic field. It orbited Mercury for four Earth years, maintaining an elliptical orbit that limited the spacecraft's exposure to the hot Hermean surface. Its trajectory is depicted in Fig. <ref>. Mercury orbits the Sun in 88 days; we therefore chose to sample the first 88 days of MESSENGER's primary mission. The spacecraft's orbit does not precess in an inertial frame; thus, one Mercury orbit covers the full range of Sun-Mercury-MESSENGER relative positions. §.§.§ Venus Express: Venusian orbit Venus Express was a European Space Agency (ESA) mission to study the atmosphere of Venus. It used a polar, elliptical orbit for its primary mission and most of its extended missions. Its trajectory is depicted in Fig. <ref> Venus orbits the Sun in 225 days; we therefore sampled the first 225 days of Venus Express's mission. Like MESSENGER, Venus Express's orbit does not precess relative to the planet in an inertial frame. §.§.§ MRO: low Martian orbit NASA's Mars Reconnaissance Orbiter (MRO) has been studying the Martian surface since 2006 and is still operational. Unlike MESSENGER and Venus Express, MRO remains within a few hundred km of the Martian surface on a nearly circular orbit. Its near-polar orbit precesses once per Martian year; MRO is therefore in a Martian Sun-synchronous orbit. We have sampled the first 687 days (one Martian year) of MRO's primary science phase. The trajectory is depicted in Fig. <ref>. §.§.§ MAVEN: Martian orbit NASA's Mars Atmosphere and Volatile Evolution (MAVEN) orbiter arrived at Mars in late 2014. After surviving an encounter with comet C/2013 A1 (Siding Spring), MAVEN begin primary science operations in November of that year. MAVEN maintains an eccentric orbit that is highly inclined but not polar. Within its first Martian year (687 Earth days), we see an irregular drift in the orbital period and a fairly complex distribution of spacecraft locations (see Fig. <ref>). §.§ Interplanetary space §.§.§ MAVEN: Earth-to-Mars transfer As mentioned above, MEM automatically determines whether any massive bodies lie near the user's trajectory. 
While MEM permits the user to select any of the inner solar system planets as the origin of the input trajectory, it does not require that the user place the origin at the nearest planet. This flexibility in selecting the coordinate origin is particularly useful for trajectories in which a spacecraft travels from one planet (or the Moon) to another. We use MAVEN once again for our example trajectory. In this case, we follow MAVEN from its launch from Earth to its arrival at Mars (see Fig. <ref>). We use Barycentric Dynamical Time (TDB) in all our trajectory files, but this is particularly critical for a transfer trajectory. The TDB time scale increases in a predictable fashion, unlike Coordinated Universal Time (UTC), which is frequently updated to include leap seconds. Currently, the two timescales differ by 69 seconds. When the nearest massive body differs from the origin of the input trajectory's coordinate system, however, it is important that the user provide times in TDB so that MEM can correctly compute the distance to the nearest massive body. For most of this portion of its trajectory, MAVEN is fully exposed to the interplanetary meteoroid environment. The meteoroid flux decreases with heliocentric distance, and therefore the rate of impacts onto MAVEN decreased as it moved away from the Sun. § TRAJECTORY SAMPLING We used the Horizons application programming interface (API) to sample state vectors for our selected spacecraft. In general, we opted to sample over either [1] the length of time that the orbited body requires to orbit the Sun once (ISS, Aqua, GOES-14, NRHO, MESSENGER, Venus Express, MRO, MAVEN), [2] the length of time that the spacecraft itself requires to orbit the Sun (JWST), or [3] the duration of the mission (LADEE, MAVEN transfer). The exact intervals are provided in Table <ref>. Initially, we attempted to use a constant sampling cadence, taking care to select a cadence that does not evenly divide the orbital period. For instance, the ISS has an orbital period of about 93 minutes and we selected a sampling cadence of 17 minutes. This would result in only a few samples from each of the 5000+ orbits ISS completes in a year, but the mean anomalies sampled would vary from one orbit to the next. However, this approach proved to be troublesome in many cases: for instance, LADEE changed its orbital period frequently during the course of its mission. We found that any constant sampling cadence we tried resulted in periodicities in the sampled mean anomaly (see Fig. <ref>). We decided to sidestep the orbit sampling interval issue by instead sampling at random times within the chosen interval. We set the number of samples to correspond to the length of the time interval divided by our original cadence choice of 17 minutes. We rounded the dates to the nearest 10^-6 day (less than 0.1 s); if two dates rounded to the same value, we discarded one and drew a new random date. We then used the Horizons API to download the trajectory data corresponding to these times in batches of 50. We downloaded data in a coordinate system that is aligned with the International Celestial Reference Frame (ICRF). Thus, the x-y plane is parallel to Earth's mean equator at the J2000 epoch. We set the coordinate center to be that of the orbited body (Earth for ISS, Mars for MRO, etc.). We made one exception: the MAVEN transfer trajectory was downloaded in ecliptic coordinates, with the Sun at the origin. 
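The random-time sampling described above is straightforward to script. The sketch below covers only the date-generation step; the rounding, effective cadence, de-duplication, and batch size follow the text, while the function names are ours and the actual Horizons query is omitted:

```python
import numpy as np

def sample_julian_dates(jd_start, jd_end, cadence_minutes=17.0, seed=0):
    """Draw random Julian dates in [jd_start, jd_end), rounded to the nearest
    1e-6 day; the sample count is the interval length divided by the cadence.
    Dates that collide after rounding are discarded and redrawn."""
    rng = np.random.default_rng(seed)
    n_samples = int((jd_end - jd_start) / (cadence_minutes / 1440.0))
    dates = set()
    while len(dates) < n_samples:
        draw = jd_start + (jd_end - jd_start) * rng.random()
        dates.add(round(draw, 6))   # 1e-6 day is less than 0.1 s
    return sorted(dates)

def batched(dates, batch_size=50):
    """Split the date list into batches of 50 for the Horizons API queries."""
    return [dates[i:i + batch_size] for i in range(0, len(dates), batch_size)]

# Example: one year of sampling at an effective 17-minute cadence.
dates = sample_julian_dates(2459215.5, 2459580.5)
print(len(dates), len(batched(dates)))
```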
For spacecraft orbiting bodies other than Earth, we also downloaded the same trajectory relative to what Horizons calls the “body mean equator and node of date” reference plane. We use this latter coordinate frame for orbit visualizations, but not as MEM input. Prior to using any trajectory, we verify that the orbit is evenly sampled using a chi-squared test on either mean anomaly (for orbits with max(e) > 0.02) or the angle from the equator (for orbits with max(e) < 0.02). Here, e refers to the orbital eccentricity, and the angle from the equator can be computed as: ω + f = sin^-1( z/r sin i) where ω is the argument of pericenter, f is true anomaly, z is the vertical offset from the equator of the orbited body, r is the distance from the center of the orbited body, and i is the orbital inclination. We rounded our randomly generated Julian dates to 6 significant figures; we took care to do this rounding prior to submitting them to the Horizons API. We output dates in Barycentric Dynamical Time (TDB) and the corresponding state vectors in units of km and km s^-1, as required by MEM. § TRAJECTORY RESOLUTION In order to use MEM, users must convert their trajectory to a series of Julian dates and state vectors. If an orbit or trajectory is sampled too infrequently, the resulting environment description may be inaccurate or noisy. On the other hand, if the trajectory is sampled too finely, the user may find themselves waiting days or even weeks for their MEM run to finish (see Table <ref> for approximate run times). Unfortunately, we cannot provide a single recommended minimum number of state vectors; the needed number will depend on the trajectory and the user's needs. To help mitigate this, MEM offers a “random draw” option; when selected, MEM analyzes only a random sample of state vectors from the user's input file. This option can be used to complete short test runs or to investigate whether results might converge for a smaller number of state vectors. MEM can also output files that provide the standard deviation of the flux along the trajectory. This option can be used in conjunction with a short random draw run to predict the number of state vectors needed for convergence. The central limit theorem for sample means <cit.> predicts that the mean of a sample of size n drawn from a population with mean μ and standard deviation σ will tend to follow a normal distribution with mean μ and standard deviation: σ_m = σ/√(n) Furthermore, the sample mean (x̅) and variance (s^2) are unbiased estimators for the population mean and variance:[This does not hold for the standard deviation, s; the unbiased estimator for the population standard deviation includes an additional correction factor of √(2/(n-1)) Γ( n/2) / Γ( n-1/2) <cit.>. However, for sample sizes greater than 100, this correction factor adjusts the estimate by less than 0.25% and we therefore omit it from our equations.] x̅ = 1/n∑_i=1^n x_i s^2 = 1/n-1∑_i=1^n (x_i - x̅)^2 These equations can be combined and inverted to estimate the needed sample size, n. This behavior is illustrated in Fig. <ref>. §.§ Method for estimating trajectory sample size We suggest the following process for determining the needed sample size. First, complete a random-draw run by setting the desired random draw size to 100 state vectors; a run of this size should take minutes, not hours. Be sure to select the standard deviation file output option so that the sample standard deviations are saved to file. Second, identify the flux quantities of interest <cit.>. 
If the user is uncertain about which quantities to select, we suggest using the so-called “cube fluxes” provided in “cube_avg.txt” and “cube_std.txt.” (There are two such sets of files for each run; one for the high-density population of meteoroids and another for low-density meteoroids.) An example is shown in Fig. <ref>. We will denote the value or values from the average flux file as x̅ or x̅_j, while the value or values from the standard deviation file will be denoted s or s_j. Here, j is an optional index used to distinguish between multiple flux components of interest. Third, select the desired accuracy for the final run (p) and the acceptable probability that the final run does not satisfy that accuracy requirement (α). Select or calculate the z-statistic that corresponds to your chosen α value using the 68-95-99.7 rule, a z-table (e.g., see Table <ref>) or the following equation: z = √(2) erf^-1 (1 - α) where erf^-1 is the inverse error function. Finally, estimate the number of state vectors needed to obtain the desired accuracy (p) for the jth quantity at the desired level of confidence (1-α): n ≥ n_σ, j = ( z s_j/p x̅_j)^2 If multiple quantities are used – such as the total flux from each population, or the flux on each surface listed in the “cube” file – the sample size should be the largest value of n_j. The user can also consider requiring convergence to within a constant value that is 100p% of the largest flux element under consideration. This is equivalent to replacing x̅_j with sup_j x̅_j in the denominator of Eq. <ref>. This helps the user avoid situations in which an element with a small flux but large relative variation drives the sample size to large numbers. For instance, when a spacecraft in low Earth orbit keeps the same surface pointed towards Earth, the flux of meteoroids on that surface will be quite low, and it may not be desirable to have results converge to within p of a negligible flux. If the calculated sample size is greater than 100, the user can use either re-sample their trajectory to create a file with the required number of state vectors, or they can use MEM's built-in random draw option to specify the sample size (if the size of the existing trajectory file exceeds the sample size). Please note that MEM runs in high-fidelity mode when generating standard deviation files. Thus, the user should also use the high-fidelity option for their final run. If the calculated sample size is less than 100, the user can proceed to use their 100-state-vector results; there is no need to generate a smaller, coarser run. §.§ Sample size and skewness The central limit theorem does not require that the underlying distribution be normal. However, if the underlying distribution is non-normal, there is a sample size, usually given as 30, at which the sample is considered “large enough” for Eq. <ref> to apply. The bottom panel of Fig. <ref> hints at this requirement: when n < 10, there is a wider spread in the residuals. However, 30 state vectors may not be “large enough” if the distribution is extremely skewed. Unfortunately, we were unable to locate any quantitative guideline as to what constitutes a sufficiently large sample size as a function of skewness. We therefore decided to construct our own, empirical formula. We used the 104 univariate continuous distributions available in SciPy version 1.10.0 <cit.> as the basis for our experiment. 
For each distribution, we randomly generated 50 sets of positive shape parameters { q_0, ..., q_m } using: q_i = - ln u_i where u_i is a uniform random variate in the range [0, 1]. The parameter count m is determined by the shape attribute of the distribution. Not all distributions are compatible with our approach. We excluded those whose shape attribute is “None” – that is, whose only shape parameters are a location and a scale – and those that consistently produced errors when used with our random parameters (possibly due to a restriction in range of allowed parameters). We also excluded those that required longer than 0.1 s to generate 1000 random variates. For each distribution and set of random shape parameters, we first generated 1000 random variates and estimated the skewness using the sample skewness (also known as the adjusted Fisher-Pearson skewness coefficient): G_1 = n/(n-1)(n-2)∑_i=1^n ( x_i - x̅/s)^3 We then generated a second set of random variates with the same distribution and shape parameters, but now with a random sample size, n_samp between 5 and 10,000. We repeated this random drawing with the same sample size 1000 times and computed the mean of each sample. Finally, we performed a test for normality <cit.> and recorded the p-value (see Fig. <ref>). We find that n ≥ n_γ = 30 |G_1|^2.4 appears to be a reasonable predictor of whether the sample means follow a normal distribution and thus the central limit theorem applies; the black line in Fig. <ref> corresponds to this equation. We stress that this is an empirical rule and we cannot guarantee that it applies outside of the sampled range or to all possible univariate distributions. MEM does not currently report the sample skewness, although users can compute it themselves by choosing the “intermediate files” option. These intermediate files report all flux outputs for each state vector used. §.§ Application to example trajectories In this section, we apply the methods outlined in Sections <ref> and <ref> to the trajectories in our library. In each case, we performed an initial MEM run using a random sample of 100 state vectors from the master trajectory. We then use equations <ref> and <ref> to estimate the number of state vectors needed for the central limit theorem to apply and for the reported flux values to lie within 1% of that of the entire trajectory with 68% confidence (i.e., we select p=0.01 and z=1). We apply these criteria not only to the overall flux on the spacecraft, but to the surface fluxes reported in the “cube_avg.txt” files for a “body-fixed” output frame. However, those spacecraft in low orbits around Earth or other bodies will have a much lower flux on their nadir-facing surface. We'd like to avoid scenarios in which the estimated sample size is inflated by requiring 1% precision on a negligible flux; therefore, we instead require that all fluxes converge to 1% of the maximum surface flux: n_σ, j = ( s_j/δ)^2,  where δ = 0.01 sup_j x̅_j We also use Eq. <ref> to compute n_γ, j for each surface. Table <ref> reports n_σ,j and n_γ,j for the ISS total and surface fluxes due to both of MEM's meteoroid populations (i.e., the high- and low-density populations). In this case, we find that the number of state vectors required for the assumption of normality never exceeds the canonical recommendation of 30; this is not, however, true for every spacecraft. The largest sample size requirement in the table is 2883 and this is therefore the number of state vectors sampled for the ISS example in our run library. 
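The sample sizes in this table can be estimated directly from a 100-state-vector random-draw run. The short Python sketch below is illustrative only: the function names, the SciPy calls, and the numerical values are our own and are not MEM outputs. It evaluates the z-statistic from α, the precision-driven size n_σ,j of Eq. <ref>, and the skewness-driven size n_γ of our empirical rule, and reports the largest value as the recommended number of state vectors.

import numpy as np
from scipy.special import erfinv

def z_from_alpha(alpha):
    # z = sqrt(2) * erfinv(1 - alpha); alpha = 0.32 gives z close to 1
    return np.sqrt(2.0) * erfinv(1.0 - alpha)

def sample_skewness(x):
    # adjusted Fisher-Pearson coefficient G1
    x = np.asarray(x, dtype=float)
    n, xbar, s = len(x), x.mean(), x.std(ddof=1)
    return n / ((n - 1.0) * (n - 2.0)) * np.sum(((x - xbar) / s) ** 3)

def recommended_state_vectors(means, stds, flux_per_sv, p=0.01, alpha=0.32,
                              use_max_flux=True):
    """means, stds : per-surface sample means and standard deviations taken
                     from MEM's average and standard-deviation output files
       flux_per_sv : per-state-vector fluxes for one quantity (e.g. from the
                     intermediate files), used only for the skewness criterion
       use_max_flux: if True, require convergence to within p of the largest
                     surface flux (delta = p * sup_j xbar_j)"""
    means, stds = np.asarray(means, dtype=float), np.asarray(stds, dtype=float)
    z = z_from_alpha(alpha)
    denom = p * (means.max() if use_max_flux else means)
    n_sigma = np.max((z * stds / denom) ** 2)
    n_gamma = 30.0 * abs(sample_skewness(flux_per_sv)) ** 2.4
    return int(np.ceil(max(n_sigma, n_gamma)))

# illustrative, made-up cube fluxes (any consistent units)
means = [5.1e-5, 4.8e-5, 9.3e-6, 2.2e-5, 6.0e-5, 5.7e-5]
stds  = [2.0e-5, 1.9e-5, 8.5e-6, 1.5e-5, 2.4e-5, 2.3e-5]
flux_per_sv = np.random.lognormal(-10.0, 1.0, size=100)
print(recommended_state_vectors(means, stds, flux_per_sv))

In practice this calculation would be repeated for each surface and each meteoroid population, with the overall recommendation taken as the largest resulting value.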
In Table 5, we provide the recommended number of state vectors for each spacecraft as well as the surface and meteoroid population that drives that recommendation. For instance, we list n=2883, “Sun,” and “high” for ISS because the largest number in Table <ref> is 2883, and that number is derived from the flux of high-density meteoroids on a Sun-facing surface. In almost all cases, the recommended number of state vectors is driven by the precision requirement (Eq. <ref>). However, we did encounter an exception: our 100-state vector run for Venus Express had a high skewness value for Sun-facing and anti-Sun-facing surfaces (| G_1 | = 4.6 and 4.2, respectively). A skewness of 4.6 requires over 1000 state vectors for the central limit theorem to apply. This appears to arise because Venus Express's orbit is eccentric and does not precess in a Sun-synchronous fashion. Therefore, the orientation of a Sun-facing surface changes with respect to both the spacecraft's orbit about Venus. At times, it is heavily protected by planetary shielding, and at others, it is both completely unprotected and partially aligned with the spacecraft's motion. These special orientations produce long and asymmetric tails in the flux distribution. MEM users tasked with assessing the risk on a surface with a similarly varying exposure pattern should consider requiring a minimum of 1000 state vectors. §.§ Future work MEM 3 includes an optional standard deviation calculation; the code uses the <cit.> and <cit.> algorithms to compute these standard deviations in a single pass. Based on the results presented in this section, we now plan to add a skewness calculation. We will then use the standard deviation and skewness to warn users when their trajectory is likely to produce results coarser than a given precision. § DATA AND VISUALIZATIONS This section describes the full set of data files and graphical visualizations that we have made available in our online meteoroid environment library. §.§ Data provided In Section <ref>, we described the steps used to generate or download our example trajectories. However, for those users who have no desire to repeat this process, we provide data files containing the corresponding state vectors. Users can “click here to download the trajectory.” These files have been formatted for use with MEM. We also provide an “options file” that documents the set of choices we selected for each MEM run. These options files are in plain text format and can be used as a reference in conjunction with use of the MEM graphic user interface, or they can simply be copied into the user's MEM directory for use with the MEM command-line executable. Either method will result in the user exactly replicating our MEM run results. For the sake of conserving storage space, we have not posted the MEM output files themselves on the MEM library website. §.§ Visualizations For each sample environment, we have generated a similar set of visualizations that include [1] three views of the trajectory in an inertial reference frame; [2] three views of the trajectory in a Sun-centered ecliptic reference frame; [3] a bar chart of the flux on flat surfaces with nine specific orientations; [4] a histogram of the impact speed distribution; [5] a histogram of the meteoroid bulk density distribution; and [6] heat maps of meteoroid directionality relative to the spacecraft. 
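Returning briefly to the single-pass moment computation mentioned in the future-work note above: a textbook online update can accumulate the mean, variance, and skewness of the per-state-vector fluxes in one pass, in the spirit of the cited algorithms. The sketch below is an illustration of that idea and is not MEM's implementation.

class OnlineMoments:
    """Single-pass accumulation of mean, variance, and sample skewness."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.M2 = 0.0   # running sum of squared deviations
        self.M3 = 0.0   # running sum of cubed deviations

    def add(self, x):
        n1 = self.n
        self.n += 1
        delta = x - self.mean
        delta_n = delta / self.n
        term1 = delta * delta_n * n1
        self.mean += delta_n
        self.M3 += term1 * delta_n * (self.n - 2) - 3.0 * delta_n * self.M2
        self.M2 += term1

    @property
    def variance(self):          # unbiased sample variance
        return self.M2 / (self.n - 1)

    @property
    def skewness(self):          # G1-style sample skewness
        g1 = (self.M3 / self.n) / self.variance ** 1.5
        return g1 * self.n ** 2 / ((self.n - 1) * (self.n - 2))

acc = OnlineMoments()
for flux in [1.2e-5, 0.8e-5, 3.1e-5, 0.9e-5, 1.0e-5]:   # made-up values
    acc.add(flux)
print(acc.mean, acc.variance, acc.skewness)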
The sample visualizations we present in this paper contain the same information (or a subset of the same information) as those in our online library, but the font size has been reduced. Our online versions are optimized for the web and have large, serif labels and appear in Scalable Vector Graphics (SVG) format. A full set of visualizations for the ISS trajectory is presented in Fig. <ref>; we discuss each graphic in this section. §.§.§ Trajectory in inertial frame In the first row of Fig. <ref>, we present a visualization of the ISS trajectory from three angles in an inertial reference frame. In most cases, the axes of the inertial reference frame are aligned with the International Celestial Reference Frame (ICRF), in which the x axis is closely aligned with Earth's dynamical vernal equinox and the z axis is closely aligned with Earth's pole at the J2000 epoch. The sole exception is that of the MAVEN Earth-to-Mars transfer trajectory, which is depicted in an ecliptic reference frame. In each case, the “inertial” frame is centered on the most relevant massive body; in the case of the ISS, this is Earth. Thus, the ISS trajectory is depicted in the Earth-centered inertial (ECI) frame. We note that these reference frames are not truly inertial: each planet orbits the Sun and the Sun itself orbits the galactic center. However, we follow the convention of referring to these coordinates as inertial. §.§.§ Trajectory in SCE frame While ECI coordinates are useful for describing the position of Earth-orbiting spacecraft, they have little relation to the directionality of the meteoroid environment. Sporadic meteoroid radiants are clustered into so-called “sources” that maintain the same directionality only when viewed in a Sun-centered ecliptic frame <cit.>. This coordinate system is a non-inertial one in which the +x direction points toward the Sun and the +z direction points toward ecliptic north. Inertial (ICRF) spacecraft coordinates can be converted to Sun-centered ecliptic coordinates as follows. First, we use the jplephem Python package <cit.> in conjunction with the DE430 ephemeris <cit.> to compute the ecliptic coordinates of the central body (in the case of the ISS, Earth). The solar longitude is then computed as follows: λ_⊙ = arctan2(-y_cb, -x_cb) We also perform a 23.4^∘ rotation about the x-axis to align our spacecraft trajectory with the ecliptic. We then add the spacecraft position to that of the central body to obtain the heliocentric ecliptic position of the spacecraft. We next rotate the heliocentric ecliptic coordinates of both the central body and the spacecraft about the z-axis by λ_⊙. Finally, we subtract the coordinates of the central body from that of the spacecraft. The resulting coordinates give the position of the spacecraft relative to the central body in a coordinate frame that is aligned with the ecliptic and has an x-axis that always points toward the Sun. The “Sun-centered ecliptic” name is somewhat confusing in this case, as the coordinate frame is positionally centered on the central body, but the orientation of the x-axis is centered on the Sun. In our graphics, we use the subscript “SCE” to indicate that the x-axis is oriented towards the Sun, and we subtract x_⊕ or x_SCE,⊕, for instance, to indicate that position is measured relative to Earth. The second row of Fig. <ref> displays the position of the ISS relative to Earth in an SCE coordinate frame.
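A compact sketch of this conversion is given below. The function names, the sign conventions in the rotation matrices, and the use of plain NumPy are our own choices; in practice the central body's heliocentric ecliptic position would come from jplephem and the DE430 ephemeris as described above, and the sign conventions should be checked against the ephemeris in actual use.

import numpy as np

OBLIQUITY = np.radians(23.4)   # approximate obliquity used in the text

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def icrf_to_sce(r_sc_icrf, r_cb_helio_ecl):
    """r_sc_icrf      : spacecraft position relative to the central body,
                        ICRF-aligned axes (km)
       r_cb_helio_ecl : heliocentric ecliptic position of the central body (km)
       returns        : spacecraft position relative to the central body in the
                        Sun-centered ecliptic (SCE) frame described above"""
    r_sc_icrf = np.asarray(r_sc_icrf, dtype=float)
    r_cb = np.asarray(r_cb_helio_ecl, dtype=float)
    lam_sun = np.arctan2(-r_cb[1], -r_cb[0])       # solar longitude
    r_sc_ecl = rot_x(-OBLIQUITY) @ r_sc_icrf       # equatorial -> ecliptic components
    r_sc_helio = r_cb + r_sc_ecl                   # heliocentric spacecraft position
    R = rot_z(-lam_sun)                            # puts the Sun on the +x axis
    return R @ r_sc_helio - R @ r_cb               # recenter on the central body

# e.g. icrf_to_sce([6778.0, 0.0, 0.0], [1.49e8, 1.0e7, 0.0])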
We see that we have sampled a wide range of possible Sun-Earth-spacecraft positions and thus will have a good average meteoroid environment description for spacecraft on this type of orbit. §.§.§ Surface flux bar chart We next present a bar chart that provides the flux of microgram-or-larger meteoroids onto different surfaces of a cubic spacecraft (third row of Fig. <ref>). We assume that these surfaces maintain the same orientation relative to the spacecraft's orbital motion; for instance, we assume that the “ram” surface always faces in the direction of the spacecraft's orbital velocity. Readers may wish to consult Section 4.1.1 of <cit.> for a more in-depth discussion of this coordinate frame, which is referred to in that document as “body-fixed.” We would like to acknowledge here that “body-fixed” is not the standard term for this reference system; when the central body is a planet or Moon, MEM's body-fixed coordinate frame is better known as a VNB (velocity/normal/binormal) reference frame.[https://gmat.sourceforge.net/docs/R2016a/help.html] The first plot in the third row of Fig. <ref> provides the flux of microgram-or-larger meteoroids on surfaces facing along each positive and negative axis of the body-fixed or VNB coordinate system. We also include the flux on surfaces facing Earth and towards or away from the Sun; these fluxes may be relevant for assessing the risk posed to communications equipment or solar panels. MEM includes three dynamical populations with two different bulk density distributions. The high-density population originates from Jupiter-family comets and is also called the helion/anti-helion population, and the low-density population consists of the apex and toroidal populations which originate from long-period and Halley-type comets, respectively <cit.>. In these visualizations, we stack the flux from the two distributions so that users can view the total flux. In Section <ref>, we made multiple comments about how certain spacecraft surfaces can be relatively protected from the meteoroid environment due to planetary shielding. The surface flux bar chart can be used to visually assess how strong this effect is; for instance, Fig. <ref> shows us that the flux on the nadir-facing surface of ISS is roughly an order of magnitude lower than that on the zenith-facing surface. §.§.§ Mass distribution Readers may notice that we do not include a mass distribution plot. The reason is that the mass distribution does not vary between trajectories or meteoroid populations, and therefore always resembles Fig. 1 of <cit.> in shape. All runs in our library use the default minimum mass of 1 μg <cit.>. §.§.§ Speed distribution The third row of Fig. <ref> also includes a histogram of meteoroid speeds (center plot). These speeds are relative to the spacecraft; the spacecraft's velocity has been taken into account. We present the speed distribution for each density population as well as the overall speed distribution. In this case, the speed distribution is that over the entire spacecraft, assuming that it does not maintain any particular orientation. Orientation-specific speed distributions are available as a MEM output. §.§.§ Density distribution The last plot in the third row of Fig. <ref> displays the distribution of bulk density for each meteoroid population. Bulk density plays a role in certain ballistic limit equations <cit.>. §.§.§ Directionality maps The last row of Fig. <ref> contains two directional flux maps. 
These maps use a color scale to show how the flux (per 25-square-degree bin) varies with impact angle. The azimuthal angle measures the angle counterclockwise from the +x-axis within the x-y plane, and the elevation angle measures the angular offset from the x-y plane. Notice that we have centered the azimuthal axis on the direction of motion and reversed the azimuthal axis: this results in an “inside-out” view of meteoroid directionality in which port appears to the left of the direction of motion. In this case, we see that the ISS encounters little-to-no flux from “below.” This is because the ISS has a fairly low altitude: 400 km above Earth's surface and 300 km above the altitude at which Earth's atmosphere is capable of blocking meteoroids. Thus, the ISS is protected from a large portion of the meteoroid environment due to Earth's shielding effects. § SUMMARY This paper announces the availability of a new online meteoroid environment library.[https://fireballs.ndc.nasa.gov/mem/library/] This library includes a meteoroid environment description for at least one spacecraft orbiting every major body in the inner solar system as well as one planetary transfer trajectory and two Lagrange halo orbits. We provide visualizations of the meteoroid flux generated with the Meteoroid Engineering Model, version 3 (MEM 3), and our library includes the input files needed to replicate these runs. Our hope is that this online library will be a useful tool to aerospace engineers who are conducting meteoroid risk assessments. Our graphics provide a visual guide to the meteoroid environment encountered by spacecraft; users might also choose to compare flux plots to get an idea of how much meteoroid flux, directionality, and speed varies between different types of trajectories or near different planets. The included input files can be used as test runs for a new installation of MEM, and the resulting output files may be useful to users developing risk analysis tools. The output may even be used for preliminary risk assessments, such as obtaining a rough estimate of the meteoroid flux encountered by an Earth-orbiting satellite on a Sun-synchronous orbit. Please note, however, that these example runs should not be substituted for a mission-specific trajectory and MEM run in a final or “official” risk assessment. It is possible that our library may also have scientific applications. For instance, the James Webb Space Telescope (JWST) has experienced a number of meteoroid strikes since its deployment in early 2022.[https://blogs.nasa.gov/webb/2022/06/08/webb-engineered-to-endure-micrometeoroid-impacts/] The JWST entry in our library allows meteoroid researchers to examine the pattern of meteoroid encounters predicted by MEM for JWST and to compare it with that of other models. In the course of generating these trajectories and MEM runs, we have also developed a method to determine the trajectory size, in number of state vectors, needed to achieve a desired flux resolution. Users can adopt the approach outlined in Section <ref> to ensure that their trajectories are just detailed enough for their needs. This helps users avoid runs that have too low a resolution and also avoid the excessively time-consuming runs that result from trajectories with far too many state vectors. It should be noted, however, that this precision does not reflect the uncertainty in the meteoroid environment, which is believed to be roughly a factor of 2-3 near 1 au <cit.>. 
A recommended implementation of the estimated environment uncertainty will be the subject of a future paper. § ACKNOWLEDGMENTS This work was supported in part by NASA contract 80MSFC18C0011 and the NASA Marshall Space Flight Center's internship program.
http://arxiv.org/abs/2405.09090v1
20240515045209
Towards Next-Generation Steganalysis: LLMs Unleash the Power of Detecting Steganography
[ "Minhao Bai. Jinshuai Yang", "Kaiyi Pang", "Huili Wang", "Yongfeng Huang" ]
cs.CR
[ "cs.CR" ]
Atomic transport dynamics in crossed optical dipole trap Project supported by the National Natural Science Foundation of China (Grants No. 92365208, No. 11934002 and No. 11920101004), the National Key Research and Development Program of China (Grants No. 2021YFA0718300 and No. 2021YFA1400900), the Science and Technology Major Project of Shanxi (Grant No. 202101030201022), and the Space Application System of China Manned Space Program. Peng Peng ^1, Zhengxi Zhang ^1, Yaoyuan Fan ^1, Guoling Yin ^3, Dekai Mao ^1, Xuzong Chen ^1, Wei Xiong ^1,* and Xiaoji Zhou^1,2,3,* Corresponding author. E-mail:xjzhou@pku.edu.cn ^1State Key Laboratory of Advanced Optical Communication System and Network, School of Electronics, Peking University, Beijing 100871, China ^2Institute of Carbon-based Thin Film Electronics, Peking University, Shanxi, Taiyuan 030012, China ^3State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Opto-Electronics, Shanxi University, Taiyuan 030006, China May 20, 2024 =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Linguistic steganography provides convenient implementation to hide messages, particularly with the emergence of AI generation technology. The potential abuse of this technology raises security concerns within societies, calling for powerful linguistic steganalysis to detect carrier containing steganographic messages. Existing methods are limited to finding distribution differences between steganographic texts and normal texts from the aspect of symbolic statistics. However, the distribution differences of both kinds of texts are hard to build precisely, which heavily hurts the detection ability of the existing methods in realistic scenarios. To seek a feasible way to construct practical steganalysis in real world, this paper propose to employ human-like text processing abilities of large language models (LLMs) to realize the difference from the aspect of human perception, addition to traditional statistic aspect. Specifically, we systematically investigate the performance of LLMs in this task by modeling it as a generative paradigm, instead of traditional classification paradigm. Extensive experiment results reveal that generative LLMs exhibit significant advantages in linguistic steganalysis and demonstrate performance trends distinct from traditional approaches. Results also reveal that LLMs outperform existing baselines by a wide margin, and the domain-agnostic ability of LLMs makes it possible to train a generic steganalysis model[Both codes and trained models are openly available in <https://github.com/ba0z1/Linguistic-Steganalysis-with-LLMs>]. Linguistic Steganalysis, Large Language Models, Generative Steganalysis. § INTRODUCTION Steganography is the technology to hide messages within normal information carriers such as images <cit.>, audios <cit.>, or texts <cit.>. It allows individuals to covertly transmit secret messages by embedding them within seemingly innocent content. 
With the most advanced generative linguistic steganography, a paragraph of text can embed a secret message of several hundred bits, making it possible to transmit complex information. It is within these inconspicuous sentences that messages are covertly conveyed. Due to the peculiar nature of this technology, the abuse of steganography raises security concerns within societies, leading to significant anxiety among the general public <cit.>. Unethical individuals may employ steganography to convey malicious information, such as terrorists using steganography to plan attacks <cit.>, or hackers using it to distribute virus software <cit.>. Therefore, the detection of steganographic carriers containing secret messages, namely steganalysis, has emerged as a widely discussed area in contemporary information security. Understanding steganography is the foundation of steganalysis. Thanks to the widespread usage of texts and the extraordinary achievement of AI generation techiques, generative linguistic steganography has become a popular choice for research and practice. Generative linguistic steganography embeds secret messages into automatically generated texts. The advantage lies in the absence of a predefined carrier, resulting in enormous flexibility and a high embedding rate. In the text generation process, a codebook is formed using the conditional probability distribution predicted by a language model. To construct the codebook, researchers have tried various methods including heuristic rules <cit.>, source coding based methods <cit.> and distribution preserving methods <cit.>. After the codebook is determined, the appropriate tokens are selected based on the secret messages. Constructing a secure codebook is critical to the imperceptibility of steganographic texts, especially in the era that large language models can build good enough conditional probability distribution. Among these codebook construction methods, heuristic methods <cit.> expose significant statistical distribution difference between steganographic texts and normal texts, source coding based methods <cit.> can make the deviation small, while distribution preserving methods <cit.> can make the deviation tiny enough to near zero. To distinguish between the steganographic texts (also called stegos) and the natural texts (also called covers), these existing linguistic steganalysis methods depend on statistical features to be aware of the distribution deviation, acting as classifiers. The earlier methods depend heavily on heuristic statistical features <cit.>, unaware of the deep features of texts. Recent approaches focus on assembling existing neural network modules to get different learning-based deep features of texts <cit.>, while several recent works <cit.> choose to use pre-trained models like BERT <cit.> as the base module. These approaches seriously rely on the distributional consistency between training texts and target texts, which is difficult to accurately achieve. For heuristic codebook construction methods, it is easy for existing steganalysis methods to detect steganographic texts. However, when these recent methods meet source coding-based method and distribution preserving methods, especially the latter, they can only find little distribution difference and then show weaker detection performance. As shown in Fig. <ref>, the detection accuracy of these existing methods is below 75%, leaving a room for further improvement. 
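As a concrete, deliberately simplified illustration of the codebook idea described above, the sketch below selects each generated token from the 2^b most probable candidates according to the next b message bits, in the spirit of block or fixed-length coding. It is not the AC, HC, or distribution-preserving schemes cited here, and the next_token_distribution callable is a stand-in for an autoregressive language model rather than any specific library.

import numpy as np

def embed_bits(bits, next_token_distribution, prefix, steps, k=16):
    """Toy block-coding embedder: at each step the k = 2**b most probable
    tokens form the codebook and the next b message bits select one of them."""
    b = int(np.log2(k))
    out, pos = list(prefix), 0
    for _ in range(steps):
        tokens, probs = next_token_distribution(out)   # stand-in LM interface
        codebook = np.argsort(probs)[::-1][:k]         # top-k token indices
        chunk = "".join(str(bit) for bit in bits[pos:pos + b]).ljust(b, "0")
        pos += b
        out.append(tokens[codebook[int(chunk, 2)]])
    return out, pos                                    # generated tokens, bits used

Even this toy version exposes the trade-off discussed below: restricting or reweighting the codebook changes the statistics of the generated text, which is precisely what a steganalyzer tries to exploit.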
There are two key reasons prevent these methods from accurately perceiving the potential distribution differences in realistic world. Firstly, building precise distribution of stegos demands massive annotated text. Secondly, pinpointing the distribution difference demands oracle distribution of normal texts in target scenario. The two excessive demands force these methods relied on distribution difference to only obtain unsatisfactory performance. Actually, there exists the contradiction between statistical distribution differences and text perception rationality for almost all the codebook construction methods <cit.>. This indicates that rationality of text deteriorates as distribution differences decrease, which suggests a clue for steganalyzers. Although evaluating rationality of text is not very difficult for humans, almost all the traditional steganalysis methods trained as statistical classifiers seem to lack this ability. As shown in Fig. <ref>, current steganalysis methods are easily fooled by low-quality stegos and irrational stegos, while these stegos can be distinguished by human. However, LLMs have shown human-like abilities in many aspects, including evaluating fluency and rationality of text <cit.>. For example, ChatGPT has been widely used to annotate high-quality text, replacing human annotations. Based on the above observations, a natural thought emerged: we expect that the human-like capabilities of LLMs can help steganalyzers to achieve more accurate detection, thus overcoming the performance dilemma of current linguistic steganalysis methods. However, exploiting the capabilities of LLMs for linguistic steganalysis poses significant challenges and costs. On the one hand, activating the capability of LLMs is not easily available. Researchers have explored the potential of using ChatGPT for steganalysis tasks <cit.>, which shows mediocre performance due to lack of focused design and training. On the other hand, due to the vast parameters of modern LLMs, direct training requires significant time and financial investment, with the risk of losing its original capabilities. To seek a feasible way to construct practical steganalysis in realistic world, this paper systematically explores the LLMs for linguistic steganalysis. To hold human-like ability of LLMs as possible, we propose to generate readable detection results. To fully activate the domain-specific steganalysis capability of LLMs, we fine-tune LLMs with various instructions, while we equip LLMs with domain-agnostic steganalysis capability by exploring appropriate mix-up of various prevalent stego texts. To achieve acceptable training costs, we adopt the lightweight training strategy of LLMs, which only requires a single GPU. Extensive experimental results show that after lightweight fine-tuning LLMs obtain the state-of-the-art linguistic steganalysis performance, in both domain-specific and domain-agnostic scenarios. We conduct extensive experiments on exploring the LLMs for linguistic steganalysis, and we find: * The ability to detect stegos is just embedded in LLMs and fine-tuning can activate this ability. LLMs with instruction fine-tuning achieve much higher accuracy in detecting steganographic text than using the steganalysis model with BERT, while the trainable parameters are much less than with BERT. * Based on extensive pre-training datasets, our models exhibit exceptional detection capabilities across diverse steganographic algorithms and datasets. 
The proficiency stems from the LLMs' capacity to evaluate the fluency and rationality of the text. * Leveraging the cross-domain capability of LLMs, a general steganalysis model is trained on a blender of datasets. In most scenario, the performance of our general model on unseen dataset is superior to baseline methods trained on this dataset. § RELATED WORK §.§ Linguistic Steganography Linguistic steganography can be classified into three primary categories: retrieval-based, modification-based, and generation-based linguistic steganography. For retrieval-based text information hiding methods, such as <cit.> and <cit.>, the fundamental procedure includes encoding samples from a text corpus and subsequently choosing relevant sentences for transmission, depending on the secret messages to be embedded. Nevertheless, the need for pre-shared codebooks among communicating parties restricts its capacity and reduces its flexibility. The fundamental concept of modification-based linguistic steganography involves making subtle changes to a given text by using secret messages as control signals. Techniques like synonym substitution <cit.> and sentence paraphrasing <cit.> have been explored. In recent years, generative linguistic steganography methods can be classified into the following categories: A. Coding methods based on rules, such as the early approach <cit.>, involve designing predefined rules to directly encode tokens. However, these methods solely rely on rule design and lack theoretical proof for statistical imperceptibility. B. Coding methods that modify the distribution, such as truncation. This method first disrupts the conditional probability distribution and then source coding techniques like fixed-length coding (FLC) <cit.>, Huffman coding (HC) <cit.>, and arithmetic coding (AC) <cit.> are applied to the disrupted distribution. These methods greatly undermine the statistical covertness of the steganographic text as they disturb the distribution. C. Coding methods that preserve the distribution aim to ensure provable imperceptibility under KL divergence by achieving consistency in the distribution between steganographic and natural text. To achieve this goal, researchers have considered distribution shifting <cit.>, entropy coupling <cit.>, grouping <cit.>, and mapping <cit.>. §.§ Linguistic Steganalysis The first-generation steganalysis methods were primarily based on heuristic statistical features. These studies <cit.> focused on modification-based steganography methods, such as synonym substitution techniques that were popular at that time. However, these methods became less effective with the continuous improvement of language models. In recent years, second-generation steganalysis methods <cit.> have focused on generative linguistic steganography. These methods employ deep learning models to differentiate between steganographic and natural texts using statistical features <cit.>. The attention mechanism is utilized in <cit.> to extract the relationship between word embeddings, followed by the use of a classification network with a single activation unit to output the classification results. TS-CSW <cit.> proposes a model based on convolutional sliding windows, which can extract correlation features using multiple window sizes. Additionally, TS-RNN <cit.> observes that the conditional probability distribution of each word in steganographic texts becomes distorted after secret messages are embedded. 
To address this, the method proposed in <cit.> employs recurrent neural networks to extract these distribution differences and subsequently classify texts as non-steganographic or steganographic. Due to the limited classification ability of simple neural networks, subsequent studies adopted increasingly complex deep learning models. A model based on a graph neural network (GNN) was proposed by <cit.>, where texts were transformed into graphs. In these graphs, nodes represented words, and edges captured associations between them. EILGF <cit.> focused on extracting and integrating local semantic features and global long-term dependencies to enhance the quality of text representation. The improved representation facilitated more accurate classification. <cit.> involved fine-tuning a BERT extractor through the hierarchical supervised learning that combines signals from multiple softmax classifiers, rather than relying solely on the final one. <cit.> combined the cross-entropy loss and the supervised contrastive loss to guide model training to obtain better results. With the development of text steganography techniques, differentiating statistical features between steganographic and natural texts has become more challenging. This difficulty arises due to the existing deep learning models' struggle to capture the intricate and overlapping boundaries between the two types of text. Moreover, as LLMs can now generate text that is highly similar to natural text, the effectiveness of current deep learning methods is gradually declining. The potential solution lies in constructing the third-generation steganalysis methods based on LLMs, leveraging their powerful capabilities to develop a specialized model in steganalysis. §.§ Large Language Models Commonly, large language models (LLMs) are referred to as language models with billions of parameters that are trained on massive amounts of text. Examples of such models include GPT-3 <cit.>, PaLM <cit.>, BLOOM <cit.>, LLaMA <cit.> and so on. LLMs are specifically constructed using the Transformers <cit.> architecture, which stacks multiple self-attention layers in a deep network. LLMs possess excellent natural language understanding abilities and can produce high-quality text based on given prompts. They can perform tasks such as generating code, converting formatted data into JSON files, and extracting key information from articles. Due to their powerful functionalities, smaller language models like Alpaca <cit.> and Vicuna <cit.> even utilize API interfaces to directly access data from ChatGPT and GPT-4 <cit.> for training, aiming to attain similar language capabilities. According to a previous study <cit.>, using ChatGPT for steganalysis tasks achieved comparable results to the second-generation steganalysis method without BERT, with just 32 examples in the prompt. Training LLMs on a dataset containing steganographic and natural texts is expected to yield improved steganalysis results. Directly training LLMs can be costly and complex, while fine-tuning LLMs is the most frugal option. The current fine-tuning method, LoRA <cit.>, decomposes the parameter matrix of the model into two low-rank matrices, and creates a parallel LoRA branch of these matrices. The outputs of the LoRA branch are then merged with the hidden layers output of the original model. During training, the parameters of the original model are frozen, and only the two low-rank matrices in the LoRA branch are trained. This significantly reduces the parameter size, often to 1% or lower. 
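A minimal sketch of this low-rank branch, written against the PyTorch API, is shown below. The rank, scaling, and initialization follow common LoRA practice but are our own simplification, not the exact implementation used later in this paper.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer with a trainable low-rank update W x + (B A) x."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():       # original weights stay frozen
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {trainable / total:.4%}")   # well below 1%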
In most cases, the effects of LoRA fine-tuning and full fine-tuning are comparable. Considering resource efficiency and convenience, this paper adopts the LoRA method to fine-tune the LLMs. § METHODOLOGY §.§ Models & Config We employ LoRA <cit.> to fine-tune the LLMs and directly get the detection results from the LLMs. We utilized the Bloomz-7B1 <cit.> and the Llama-7B <cit.>. Both models are openly available in <https://huggingface.co/>. To account for our limited GPU capabilities and the trade-off between LoRA hyperparameters and training cost, we opted for LLMs with training parameters smaller than 7B. Larger models will make our GPUs out of memory. To guarantee reliable and pragmatic experimental findings, we opted for the most sizeable LLMs that our GPUs could support - the Bloomz-7B1 and Llama-7B (or Bloomz/Llama for convenience). The structure of LLMs with LoRA fine-tuning is illustrated in Fig. <ref>. We froze all LLM parameters during the training process and exclusively trained the parameters of the LoRA branch. Thus, the NLL (Negative Log Likelihood) loss of LLM can be utilized to optimize the detection ability. Moreover, since the LLM parameters were not trained, the ability of original LLM would never decrease. This fine-tuning method guaranteed that our models could leverage the full human-like capability of original LLMs to make more precise detection. With the LoRA fine-tune framework, trainable parameters of LLMs can be reduced to less than 0.1%, which is much less than that of BERT. Table <ref> shows the trainable parameters of our models and BERT-base. Fixed hyperparameters used in training is presented in Table <ref>. According to the report from BigScience <cit.>, the learning rate of Bloomz is set to 2e-5 during the multi-task fine-tuning phase. Hence, the learning rate for any further fine-tuning should not surpass 2e-5. The batch size is set to 4 and the lora rank is set to 8, which signifies the optimal configuration for executing the Bloomz-7B1 model on a single RTX3090 GPU. Under these conditions, the model has displayed robust performance, exhibiting the potential of LLMs in steganalysis tasks. The hyperparameter selection process of Llama is similar to that of Bloomz. Further refinement of these hyperparameters may improve results. The previously mentioned parameter settings are intended to minimize training time and demand for GPU capacity. §.§ Datasets We utilize three datasets: Tweet, Movie, and News. We have trained copies of GPT-2 models on each dataset. Steganographic texts are generated using three different embedding methods: AC(Arithmetic Coding) <cit.>, HC(Huffman Coding) <cit.>, and ADG(Adaptive Dynamic Grouping) <cit.>. We randomly sampled 10,000 sentences each from natural text and steganographic-generated text to construct our datasets. These datasets are labeled as Movie-Natural, Movie-AC, Movie-HC, Movie-ADG, Tweet-Natural, Tweet-AC, Tweet-HC, Tweet-ADG, News-Natural, News-AC, News-HC, and News-ADG. The first part of each label represents the data source, and the second part represents the steganographic encoding algorithm. Table <ref> presents the statistical features of each dataset. We modeled the distributions of datasets in terms of sentence length and perplexity. Sentence length is the number of tokens that the text breaks down under the tokenizer of the model. And perplexity (PPL) is a widely employed metric to assess the fluency of text, whereas entropy represents a statistical measure of the information content. 
The formulas for PPL and entropy can be expressed as follows. PPL = exp(-1/N∑_i=1^Nlog p(x_i|x_1:i-1)) where N represents the total number of tokens, and p(x_i) denotes the conditional probability assigned to each token x_i. It is evident that the statistical characteristics of the Tweet dataset differ significantly from those of the Movie and News datasets. The average length of the Tweet is less than half of the average length observed in the Movie and News datasets, whereas the PPL is nearly 3 times higher. This suggests that the texts in Tweet are disorganized and confused. In contrast, the statistical characteristics of Movie and News exhibit relatively similar patterns. The three steganographic encoding algorithms display distinct characteristics. The steganographic texts created by HC possess the lowest PPL, surpassing even natural texts. Conversely, the steganographic texts produced by AC closely emulate the statistical properties of natural texts. In the case of ADG-generated texts, they demonstrate a similar pattern to AC-generated texts but with a higher perplexity. It should be noted that PPL is not considered the definitive criterion for assessing the quality of text generation. Upon manual evaluation, both the steganographic texts produced by AC and ADG exhibit considerable similarity. PPL merely offers a concise overview of the distribution of text in the dataset and is inadequate for discerning between high and low-quality text. To more accurately characterize the distribution of the dataset, we calculated the log probability of sentences, then used log probability as the x-axis and sentence length (counted by tokens) as the y-axis to model the distribution of text. The distribution of log probability and sentence length can be viewed as a simplified 2-dimensional text probability distribution. The log probability can be calculated as Equation <ref>: log probability = -∑_i = 1^Nlog p(x_i|x_1:i-1), where N denotes the length of the sentence. As illustrated in Fig. <ref>, <ref>, <ref>, and <ref>, the distribution of the Tweet dataset is primarily focused on short sentences with a relatively high log probability. Although the distributions of the Movie and News datasets are similar, the main discrepancy lies in the fact that the long-tailed distribution of the Movie dataset constitutes a larger proportion. To elucidate the influence of steganographic algorithms on the distribution of text, we standardized the logarithmic probability of AC/HC/ADG using the mean and standard deviation of natural text. Based on Fig. <ref>,<ref>,<ref>,<ref>, it is clear that the distribution of stegos produced by HC is lower than that of the other distributions. The distribution of edges in HC stegos reveals that HC tends to produce text that significantly deviates from natural human writing patterns. This is due to its peak position of log probability is less than 0, resulting in a noticeable distinguishability of HC-generated stegos. Conversely, AC and ADG stegos display a nearly identical distribution; however, the AC distribution appears to be more focused than that of ADG. §.§ Evaluation Metrics We denote the output results of the model as TS (True Steganographic), FS (False Steganographic), US (Unknown Steganographic), TN (True Non-Steganographic), FN (False Non-Steganographic), and UN (Unknown Non-Steganographic), where US and UN represent samples that do not match the output template. 
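For reference, the PPL and log-probability statistics used above to characterize the datasets can be computed with any causal language model. The sketch below uses GPT-2 through the HuggingFace transformers API as a stand-in for the models in this paper; its internal token shift makes the result only approximately equal to the formulas above.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")              # stand-in model
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def ppl_and_logprob(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss                  # mean token-level NLL
    n = ids.shape[1]
    neg_logprob = loss.item() * (n - 1)                  # -sum log p(x_i | x_<i)
    return float(torch.exp(loss)), neg_logprob, n        # PPL, log probability, length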
The occurrence of US and UN is not unusual in <cit.>, but in our models they only appear when the models are not fine-tuned. We primarily use accuracy and F1 score as evaluation metrics. Since the existence of US and UN samples, redefining the calculation of metrics is necessary. The accuracy is redefined as follows: Acc = TS+TN/TS+FS+US+TN+FN+UN. Due to the equitable sampling of steganographic and natural texts in our dataset, and taking into account the balance between recall and precision, we utilize the F1 score with β = 1 as the principal evaluation metric for steganalysis. The F1 score is obtained as follows: Precision = TS/TS+FS+US, Recall = TS/TS+FN, F1 = (1+β^2) ·Precision · Recall/β^2 · Precision + Recall, β = 1. The expression for F1 can be obtained by substituting the formulas for Precision and Recall, as follows: F1 = 2 · TS/2 · TS + FN + FS + US. §.§ Prompt Design We primarily employ the following templates to construct the instruction fine-tuning dataset: ### Text:{input sentence} ### Question: Is the above text steganographic or non-steganographic? ### Answer: {input label} The effects of the prompts and their impact on steganalysis will be thoroughly discussed in Section <ref>. § EXPERIMENTS AND ANALYSIS To verify the effectiveness and generality of our method, we first tested in domain-specific steganalysis task and compared the results with current advanced steganalysis methods. Then we analyzed the error samples of this task and tried to find some clues for training a general steganalysis model. After that, we tested our methods in a more practical and challenging scenario of domain-agnostic steganalysis. To figure out the critical factors that affected the performance of our method, we investigated the detection accuracy on sentences of various length and different training prompts. Finally, we trained a general steganalysis model that was effective in diverse tasks. The baseline steganalysis methods used for comparison are as follows: TS-FCN <cit.> utilizes a single-layer fully connected network to discern statistical correlation at the word level. TS-RNN <cit.> employs a two-layer bidirectional RNN to decide on temporal connections between words. TS-CSW <cit.> incorporates a multi-scale kernel CNN in its discriminator, enabling the recognition of statistical features of various sizes. RBiLSTMC <cit.> combines the advantages of Bi-LSTM and CNN to enhance detection accuracy. LSTMATT <cit.> utilizes the attention mechanism. BiLSTMDENSE <cit.> improves low-level features through dense connections and feature pyramids. EILGF <cit.> extracts and integrates local and global features. These methods encompass classical and efficient steganalysis models, as well as the latest and state-of-the-art steganalysis models. §.§ Domain-Specific Steganalysis We conduct domain-specific steganalysis experiments within a dataset of steganographic texts and a corresponding dataset of natural texts. These 2 datasets are divided into training, validation, and test sets in a ratio of 3:1:1. This ratio is consistently maintained for all subsequent experiments. The Bloomz/Llama and the baseline models were compared on all datasets, as shown in Table <ref>. The result show that without fine-tuning the LLMs are failed to demonstrate the capability of steganalysis. After fine-tuning the detection accuracy and F1 score of LLMs outperform all baseline methods on all datasets. Fig. 
<ref> shows a comparison of the Bloomz training with 3/10 epochs, Llama training with 3/10 epochs, and the baseline methods training with 10 epochs. Results show that both Bloomz and Llama consistently outperform the baseline methods in all scenarios, with significant improvements in AC and ADG. It is imperative to clarify that the validation set utilized in our experiments was only used for comparison with baseline methods and not incorporated by our models. The results obtained from the Movie-ADG dataset reveal that Bloomz yields a superior accuracy rate of approximately 13% and an F1 score that is around 16% higher than the optimal baseline model. Moreover, the more effective Llama poses an accuracy rate that is roughly 16% higher and an F1 score of approximately 19%, with a detection success rate of nearly 90%. Furthermore, detection accuracy on the three steganographic encoding algorithms achieved at least 90% in the Movie dataset, while not appearing the discrepancy seen in the baseline methods, where detection accuracy on HC-generated texts was notably higher than that of AC and ADG. In Llama's experiments, it was noted that in particular datasets (Movie-AC, Movie-ADG, News-HC, News-ADG) training for 3 epochs led to superior outcomes than training for 10 epochs. This phenomenon could be attributed to the repetition of training data. Our models' results of training for 1-10 epochs on the Movie-AC dataset are illustrated in Fig. <ref>. The figure indicates that Bloomz and Llama outperformed the BERT baseline models with just one epoch of training. Llama attained its highest accuracy in the fifth epoch, whereas it displayed a decreasing trend in accuracy with further training. On the other hand, Bloomz showed a gradual enhancement in accuracy over 10 epochs. Overall, Llama exhibited a detection accuracy that was about 3-5% higher than Bloomz. §.§ Error Analysis of Domain-Specific Steganalysis To analyze the sentences that are non-detected (stegos that are not detected by our models) and incorrectly detected (covers that are wrongly detected as stegos by our models) in the domain-specific tasks, we scatter their distributions in Fig. <ref>. In most cases, the normalized probability of the error samples is similar to the -Natural dataset, since their marginal distribution of normalized probability is concentrated near 0. The lengths of these samples are shorter than the average of -Natural dataset. Precisely, we find that for sentences of the same length, the lower the PPL the more likely it is to be misclassified by the model. From the distributions, it can be observed that a large proportion of texts is concentrated in a small region, and the error samples are located at the edge of this region. This phenomenon reminds us that these short and extremely fluent stegos are challenging samples for our models, which also proves that our models take fluency and rationality of text as standards of judgement. Most of the results show that the incorrectly detected samples are more than non-detected samples, which might be a warning sign of over-fitting. The detection accuracy of our models varies based on lengths of sentences. Fig. <ref> and <ref> show that sentences with less than 10 tokens have a detection rate of less than 80%. However, sentences with over 20 tokens have an accuracy of 90% or more. Additionally, sentences with more than 70 tokens have a detection rate of 100%, suggesting that our models are nearly faultless in those cases. 
There are some short samples presented in Table <ref>. It is almost impossible for humans to differentiate between these fragmented but authentic stegos and covers, and the mix-up of baseline methods is also struggling. The broad range of tweets does help to generate seemingly innocent steganographic texts, and the restricted length of tweets presents a greater obstacle for detection models. §.§ Domain-Agnostic Steganalysis We evaluated the domain-agnostic capability of two LLMs across various steganographic algorithms and datasets. This task consisted of training on one coding algorithm and testing on two other algorithms, or training on one type of dataset and testing on other datasets. Each experiment entailed 3 epochs of training on the dataset. The outcomes are displayed in Fig. <ref>, <ref>, <ref>, <ref>. All of the baseline methods mentioned above were also tested, and Fig. <ref>, <ref> show the highest F1 scores achieved by these models. Our experiment has demonstrated the superiority of our models in terms of its domain-agnostic ability compared with baseline methods, surpassing the performance of baseline models considerably. As a result, we can assume that our models can robustly derive more accurate steganographic characteristics from various sources of text generated by different encoding algorithms. These extracted features can be applied to a wide range of steganographic texts. The distribution of text across various datasets presents a significant challenge to our models' domain-agnostic capability. This is due to certain data being classified as steganographic in the Movie dataset but not necessarily so in the Tweet dataset. Such variations must be taken into account to ensure optimal performance. Furthermore, the pattern of Bloomz in the transfer experiments on different sources deviates significantly from that of the standard models. The Movie dataset has an average of 25.63 tokens per sentence and a perplexity value of 1059.65, whereas the News dataset has an average of 23.1 tokens per sentence and a perplexity value of 958. Despite having similar sentence lengths and perplexity values, the content of these two datasets is quite different. Bloomz performs significantly better when trained on the Movie-AC dataset and tested on the News-AC dataset, in contrast to the baseline models. Comparable results can be observed from the Movie and Tweet datasets, indicating that our model has learned unique steganographic features in comparison to the baseline models. According to Bloomz, the News dataset may be more comparable to the Movie dataset, whereas the Tweet dataset significantly differs from the Movie dataset. Conversely, the Llama views the News and Movie datasets as having minimal differences. Llama exhibits robust transfer performance between the encoding algorithms, with negligible performance loss between the AC and ADG algorithms. Nonetheless, it encounters considerable performance degradation in transfer tasks across datasets, potentially attributable to the restricted diversity of training data implemented for the Llama model. However, the Tweet dataset exhibits notably weaker transfer performance than the majority of datasets. As we mentioned before, part of our models' ability is from the perception of text fluency and rationality. While the sentences in Tweet dataset are often fragmented spoken expressions. We assume that the reason can be attributed to the complexity and non-normative nature of Twitter texts. 
§.§ Effectiveness of Prompts To assess the impact of prompts on models' detection performance, including inputting instruction options (separators, hints and questions) and outputting format options (such as long terms like “steganographic/non-steganographic", short terms like “stega/cover", simple responses like “yes/no", or even numerical answers like “0/1"), we employed the prompts presented in Table <ref>. As an example, we tested the effect of these prompts in Tweet+AC dataset, and the results are shown in Table <ref>. In the prompts containing a question (Prompt 1,2,3), the detection performance for the “Yes/No" labels is better than the performance for the “Steganographic/Non-steganographic" labels. The efficiency of Bloomz is further improved by incorporating the textual category information (“tweet") in the input prompt. However, there are some minor performance changes for Llama. For the prompts with only separators (Prompts 4 and 5), the effectiveness of Prompt 4 for generating long words is similar to that of the full prompt. On the other hand, Prompt 5, which outputs either 0 or 1, completely destroys the steganalysis capability of Bloomz, while Llama is still able to maintain a robust result. Regarding the results from prompts 6, 7, and 8, we can verify that the most suitable output for LLMs is to be linguistically precise, whereas meaning of “cover" is excessively broad, resulting in model outputs that deviate from our expectations. The difference in detection performance between numeric and long words is due to the model's understanding of steganographic/non-steganographic concepts. Owing to the resemblance of “Steganographic" and “Non-steganographic", the model produces lower losses for this output. Nevertheless, the use of these low-loss labels does not result in optimal detection performance. To summarise, we anticipate that all inputs and outputs closely emulate natural language. Additionally, these inputs should be rational and unambiguous for LLMs to comprehend easily. Ideally, the output labels should display considerable variation at the token level, have a unique semantic interpretation and conflicting meanings. §.§ Towards General Steganalysis After the sufficient experiments presented above, we found that there are 2 main challenges for LLMs in the steganalysis task. The first challenge is that short texts are more difficult to detect, which is also quite problematic for humans and baseline methods. The second challenge is that as the degree of text diversity increases, the decline in detection accuracy is difficult to compensate for. With sufficient experimental preparation, now it's time to train a generic steganalysis LLM. First, in terms of model training settings, we choose Llama-7B training for 3 epochs with Prompt 2. Then for the training dataset, we extract 10000 sentences from Movie-AC and 5000 sentences from Tweet-HC as steganographic dataset, along with 10000 sentences from Movie-Natural and 5000 sentences from Tweet-Natural as natural dataset. From the results of domain-agnostic task it can be observed that the Movie dataset contains various texts and the distribution of this dataset is the most spread out, which enables our model to learn a diverse range of textual features. Thus we choose the Movie dataset as the basis. To enhance the models' ability to detect short text, we chose a part of the Tweet dataset as supplementary. Under these settings, our model took 9 hours to train on one RTX3090 GPU. 
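For completeness, the sketch below shows how the instruction template of the methodology section can be assembled, how generated answers map onto the TS/FS/US/TN/FN/UN categories, and how the redefined accuracy and F1 are then computed. The parsing keywords and helper names are our own choices rather than part of the released code.

PROMPT = ("### Text:{text}\n"
          "### Question: Is the above text steganographic or non-steganographic?\n"
          "### Answer: ")

def build_prompt(sentence):
    return PROMPT.format(text=sentence)

def parse_answer(generated):
    """Map a generated answer onto steganographic / non-steganographic / unknown."""
    a = generated.strip().lower()
    if a.startswith("non-steganographic"):
        return "non-steganographic"
    if a.startswith("steganographic"):
        return "steganographic"
    return "unknown"        # counted as US or UN depending on the true label

def accuracy_and_f1(counts):
    """counts: dict with keys TS, FS, US, TN, FN, UN (see the metric definitions)."""
    total = sum(counts.values())
    acc = (counts["TS"] + counts["TN"]) / total
    f1 = 2 * counts["TS"] / (2 * counts["TS"] + counts["FN"] + counts["FS"] + counts["US"])
    return acc, f1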
Besides the Movie and Tweet datasets, we test our model in new datasets (Aclimdb and Commonsense) and a new steganographic algorithm (DISCOP). Since we do not use any data from the News dataset, the results on the News dataset also demonstrate the capability of general detection of our model. The main results of general steganalysis task are shown in Table <ref>. In most cases, the detection rate and F1-score of our model are much higher than the baseline method that was trained and tested in the same dataset, with an increase in detection accuracy by approximately 10%. Compared with the results of domain-agnostic steganalysis that only uses the Movie-AC dataset, our model has significant gains. In this experiment, some different patterns are exposed. In the datasets that do not participate in training, the incorrectly detected samples are explicitly less than non-detected samples, which confirms our conjectures about over-fitting. We observed that in these datasets the incorrectly detected samples are shorter and with higher PPL, compared with the non-detected samples. Therefore, for the generality of the model, constructing a diverse dataset is significant for model training. We will continue to improve this general steganalysis model, aiming at training a practical LLM for steganalysis. § CONCLUSION In this paper, we proposed a novel steganalysis method that different from the current mainstream classification-based methods. To the best of our knowledge, we are the first to construct a generation-based steganalysis method with LLMs. To directly generate the readable detection output without any additional deep learning module, we fine-tuned LLMs with LoRA framework, maintaining the original capability of LLMs. Our method showed superior performance in different scenarios compared to all existing steganalysis methods. Results show that the detection ability of our method is based on the fluency and rationality of the text, only a minor proportion of short stegos with low PPL are likely to escape the detection. Furthermore, our models demonstrated exceptional capability in domain-agnostic tasks, providing robust detection results even when trained and tested on different datasets. This infers that LLMs gained wider-ranging characteristics of steganographic text during instruction fine-tuning, which remain effective despite significant variations in the text. With the above results, we finally fine-tuned a general steganalysis model that can be applied to various tasks. In most cases, our method gains higher detection accuracy when only tested in the dataset than baseline methods trained in the same dataset. We hope that our method will provide new insights for steganalysis in the era of LLMs. Our goal is to further improve the detection capabilities of our general steganalysis model and make it applicable to real-world situations. IEEEtran
http://arxiv.org/abs/2405.09494v1
20240515163911
Symmetry adaptation for self-consistent many-body calculations
[ "Xinyang Dong", "Emanuel Gull" ]
physics.comp-ph
[ "physics.comp-ph" ]
Xinyang Dong (AI for Science Institute, Beijing 100080, China, and Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA) and Emanuel Gull (Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA). The exploitation of space group symmetries in numerical calculations of periodic crystalline solids accelerates calculations and provides physical insight. We present results for a space-group symmetry adaptation of electronic structure calculations within the finite-temperature self-consistent GW method along with an efficient parallelization scheme on accelerators. Our implementation employs the simultaneous diagonalization of the Dirac characters of the orbital representation. Results show that symmetry adaptation in self-consistent many-body codes results in substantial improvements of the runtime, and that block diagonalization on top of a restriction to the irreducible wedge results in additional speedup. § INTRODUCTION Space group symmetries play a critical role in understanding crystalline materials. They enable the interpretation of properties, such as optical or electronic response functions, in terms of irreducible representations, revealing information such as the dominant atomic character of orbitals or the set of allowed optical transitions <cit.>. In numerical calculations, the consideration of these symmetries leads to a substantial acceleration of computer codes. Standard electronic structure codes, such as implementations of the Hartree Fock, Density Functional Theory, or non-self-consistent GW method, are predominantly formulated in terms of single-particle density matrices for which the symmetry adaptation formalism is well established. In contrast, modern many-body technology relies on the repeated and numerically involved calculation of objects beyond the density matrix, such as the self-consistent evaluation of frequency-dependent propagators, vertex functions, and screened interactions. These calculations gain an even stronger advantage from considering all symmetries, including those resulting from non-symmorphic space group elements, and the repeated use of symmetrized objects due to frequency summations justifies the larger initial cost of generating a fully symmetry-adapted basis. In addition, computational architectures such as GPUs and other accelerator platforms are able to efficiently utilize the parallelism exposed by the group structure, resulting in a substantial additional speedup. A rigorous, general, and numerically elegant formulation of symmetry adaptation in periodic solids has been pioneered in a series of groundbreaking papers by Dovesi and collaborators <cit.> in the context of Hartree Fock and DFT theory, and was implemented in the `crystal' code <cit.>. This paper will revisit this formalism and present results for the self-consistent GW method <cit.> on modern GPU architectures, illustrating the advantage of symmetry adaptation by block-diagonalization on top of a restriction to the irreducible wedge. In addition, we present parallelization strategies and scaling results for a set of paradigmatic solids. § MODEL AND METHOD §.§ System and Hamiltonian We consider a generic non-relativistic electronic structure setup <cit.>, Ĥ = Ĥ_0 + Û, consisting of a one-electron part Ĥ_0 and a two-electron Coulomb interaction Û. We assume translation invariance and a finite system with periodic boundary conditions such that crystal momentum is conserved.
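Because the system is periodic with a finite momentum mesh, crystal momentum is conserved only up to a reciprocal lattice vector; the Coulomb matrix elements written out just below are nonzero only for momentum quadruples satisfying this constraint. A small, generic sketch of the corresponding bookkeeping in fractional coordinates (not part of the code described in this paper) is:

# Gamma-centered n_k x n_k x n_k mesh in fractional (reduced) coordinates, and a
# test that k_i + k_k - k_j - k_l is a reciprocal lattice vector (an integer vector).
import itertools
import numpy as np

def kmesh(nk: int) -> np.ndarray:
    pts = np.arange(nk) / nk
    return np.array(list(itertools.product(pts, repeat=3)))

def conserves_momentum(ki, kj, kk, kl, tol=1e-10) -> bool:
    q = np.asarray(ki) + np.asarray(kk) - np.asarray(kj) - np.asarray(kl)
    return np.allclose(q, np.round(q), atol=tol)

mesh = kmesh(4)                      # 64 momentum points
ki, kj, kk = mesh[3], mesh[10], mesh[27]
kl = (ki + kk - kj) % 1.0            # the mesh point fixed by momentum conservation
assert conserves_momentum(ki, kj, kk, kl)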
In second quantization, the Hamiltonian H is H = ∑_∑_ij∑_σσ' (H_0)^_iσ,jσ'c^†_iσc^𝐤_jσ' + 1/2N_k∑_ijkl∑__i_j_k_l∑_σσ'U^_i_j_k_l_ i j k lc^_i†_iσc^_k†_kσ'c^_l_lσ'c^_j_jσ, where _0 is the one-electron Hamiltonian, N_k is the number of momentum points in the Brillouin zone and c_iσ^ (†) annihilate (create) an electron in a single-particle state with momentum , spin σ and basis state (orbital) number i. Momentum conservation of the scattering implies _i + _k - _j - _l = 𝐊, where is a reciprocal lattice vector. §.§ Single-particle basis states Single-particle states are Bloch waves characterized by a momentum label , a spin label σ, and an orbital index i. In our case, they consist of linear superposition of atom-centered wave functions g_i^() = g_(x j)^(), where denotes a unit cell translation vector, x enumerates the atom inside the unit cell, j denotes its atomic orbital index, and i = (xj) is a unit-cell orbital index enumerating all orbitals in the unit cell g^_i(𝐫) = ∑_ g_i^() e^i·𝐑 = ∑_ g_(x j)^() e^i·𝐑 . While there is considerable freedom in the choice of basis functions and the remainder of the work will be valid for any function of this form, our implementation <cit.> employs standard atom-centered linear combination of Gaussian-type orbitals. § SYMMETRIES OF SOLIDS §.§ Space group symmetries A precise theory of symmetries in solid was established already in the 19th century <cit.>, and the mathematical <cit.> and physical <cit.> formalism is well developed. Computational approaches for mean field theories are also well developed <cit.>, in particular the adaptation to the “irreducible wedge”. Rather than giving an extensive description of the theory, we recapitulate use those approach that enable an efficient generalization to many-body theories, following to a large extent the work in <cit.>. We refer the intended reader to <cit.> for introductory material, <cit.> for a mathematical exposition, <cit.> for useful tables and <cit.> for practical implementation in computer codes. We consider the space-group symmetries and time-reversal. Space group symmetries consist of translations, point-group operations such as rotations or reflections, or combinations of the two. Only 230 such groups exist in three dimension. We denote symmetry elements as α̂ = {α|v(α)}, in which α is a point group operation and v(α) is a translation. Groups containing elements for which v(α) is neither zero nor a lattice translation are called non-symmorphic (there are 157 such groups); the remaining groups are called symmorphic <cit.>. §.§ Symmetry operations on orbitals The action of a symmetry operation α̂ on orbital j of atom x at momentum can be written as <cit.> α̂ g^_(xj)(𝐫) = ∑_ e^i·𝐑 [α̂ g_(x j)^()] = exp[-i ·𝐯_x^α] × [𝐎(α) g^_(xj)(𝐫)] , where α = +, with the lattice vector that shifts α back to the first Brillouin zone, 𝐯_x^α the translation vector between rotated and original atom, and (α) is the representation matrix of operation α in the basis spanned by g_(xj)(). To obtain the explicit form of ^α, we consider the angular part of the basis functions. In standard Gaussian-type bases, the angular part of atomic orbitals are real spherical harmonics Y_lm, which are linear combinations of the complex spherical harmonics Y_l^m <cit.>. The rotation of Y_l^m are characterized by the Wigner D-matrix <cit.> α Y_l^m = Y_l^m' D_m'm^l(α) , where α is a proper rotation, i.e. the rotation matrix has positive determinant. 
For an improper rotation α', D_mm'^l(α') = (-1)^l D_mm'^l(α), with α the corresponding proper rotation. In general, these matrices determine how different magnetic quantum number m mix for a given l. Explicit expressions for the Wigner D-matrix are given in textbooks <cit.>. The matrix ^(α̂) forms a representation of the symmetry group in orbital space. §.§ Factor groups and projective representations Symmetry operations can be used to find a symmetry-adapted wave-function basis that block-diagonalizes quantities, which each block belonging to a different irreducible representation of the symmetry group. The case of abelian groups is particularly simple, since all irreducible representations are one-dimensional. In the case of the translation group, the resulting blocks are typically labeled by their momentum index and lead to the usual momentum-space formalism. Space-group operations beyond translations may lead to further block structure at high symmetry points, which can be exploited to efficiently perform calculations. To obtain the unitary transformation matrix ^ for block diagonalization at each k point, we introduce the concepts of the little co-group, the little group, and the projective representation. For a momentum , the little co-group G_ is defined as the point symmetry subgroup of the isogonal of the space group that leaves invariant in the reciprocal space, while the little group G_ is the subgroup of elements whose rotation part α belongs to the little co-group G_ <cit.>. Finding ^ corresponds to finding the projective irreducible representations of G_ at each k point <cit.>. A projective representation of a finite group G is a set of matrices that fulfills the condition <cit.> (α) ·(β) = λ(α, β) (αβ) , α, β∈ G , where λ(α, β) is a complex unitary factor that satisfies λ(α, βγ) λ(β, γ) = λ(αβ, γ) λ(α, β) . For each k point, the projective representations and factors can be computed from the representation matrices <cit.> ^(α) = exp[i· v(α)] ^ (α̂) , λ^(α, β) = exp{i· [v(β) - α v(β)]} . For a symmorphic group, the translation vectors v(α) for all symmetry operations are 0, so that D^(α) = O^(α̂), λ^ = 1. §.§ Symmetry adaptation and block diagonalization The standard formalism of group theory proceeds to decompose a representation into its irreducible components by applying successive projections onto basis functions belonging to a particular representation <cit.>. This formalism, while convenient for analytic manipulation, requires detailed knowledge of the representations beyond what is available in character tables, and is therefore somewhat unwieldy in its application to general problems with arbitrary symmetries. A very elegant alternative to the projection was pioneered in this context by Dovesi and collaborators <cit.> and relies on a simultaneous diagonalization of Dirac characters instead of a projection. This is the formalism we follow here. We first introduce the concepts of conjugacy classes and Dirac characters. The conjugacy classes 𝐶 of a group G are defined as the subsets of elements that satisfy the relation α, γ∈𝐶 ⟺ ∃β∈𝐺,   s.t.  βγ = αβ . The Dirac character Ω_c of a conjugacy class C is defined as Ω_c = n_c/h∑_β∈ G(α) ·(γ) ·(β)^-1 = ∑_α∈ Cλ(β_α, γ) λ(α, β_α)^*(α) , with α, γ∈ C, β_α any group element such that β_αγ = αβ_α. Importantly, all Dirac characters commute with the representation of all group elements and with each other <cit.>. 
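Because the Dirac characters form a commuting family, they admit a common eigenbasis, and finding it is what yields the symmetry-adapted basis discussed next. The following is a minimal NumPy sketch of such a joint diagonalization by successive refinement of degenerate subspaces. It assumes the Ω_C (commuting, normal matrices) have already been assembled and it omits the numerical-stability refinements mentioned below, so it is an illustration rather than the production algorithm:

# Joint diagonalization of a family of commuting normal matrices, e.g. the Dirac
# characters Omega_C of the little co-group at one k point.
import numpy as np

def simultaneous_diagonalize(matrices, tol=1e-8):
    """Return (U, blocks): U is unitary with U^dagger M U diagonal for every M in
    `matrices`; `blocks` lists the column index groups that remain degenerate."""
    n = matrices[0].shape[0]
    U = np.eye(n, dtype=complex)
    # Work with Hermitian and anti-Hermitian components so that eigh can be used.
    herm = []
    for M in matrices:
        herm.append(0.5 * (M + M.conj().T))
        herm.append(0.5j * (M - M.conj().T))
    blocks = [np.arange(n)]
    for H in herm:
        refined = []
        for idx in blocks:
            if len(idx) == 1:
                refined.append(idx)
                continue
            sub = U[:, idx].conj().T @ H @ U[:, idx]  # restriction to the degenerate subspace
            w, v = np.linalg.eigh(sub)
            U[:, idx] = U[:, idx] @ v
            start = 0                                 # split runs of (nearly) equal eigenvalues
            for i in range(1, len(idx) + 1):
                if i == len(idx) or abs(w[i] - w[start]) > tol:
                    refined.append(idx[start:i])
                    start = i
        blocks = refined
    return U, blocks

Grouping the columns of U according to the returned index groups gives a candidate symmetry-adapted basis; in the actual procedure the eigenvalues are additionally matched to the characters of the irreducible representations to identify the block sizes, as described in the remainder of this subsection.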
A simultaneous diagonalization of all Dirac characters results in a unitary transform that decomposes any basis set into subsets associated to each irreducible representation <cit.>. A numerical algorithm proceeds as follows: Given the projected orbital representation, all Dirac characters are formed using Eq. <ref>, and a character is considered as `relevant' if it is non-zero. The set of eigenvectors that simultaneously diagonalize all relevant Dirac characters can be obtained by successively diagonalizing the degenerated eigen subspace of all matrices following the methods presented in Ref. <cit.>. As the eigenvalues of Ω_C are related to the characters of the irreducible representations, the block sizes corresponding to different irreducible representations can be obtained by finding the shortest common constant pieces in all diagonalized Dirac characters. This simple simultaneous diagonalization approach may run into instability issues, and more precise algorithms such as the ones presented in Ref. <cit.> are necessary for large orbital representation matrices. The full procedure of finding the common eigen space of the Dirac characters and identifying the corresponding irreducible representations in a computationally stable way is presented in Ref. <cit.> which makes use of the Hermitian components of the Dirac characters. We further discuss the consequence of including the generalized time-reversal operator κ̅ = ϕκ, in which κ is the complex conjugation operator and ϕ is any point operator that gives the relation ϕ = - + for a given k point <cit.>. The time-reversal symmetry indicates that the matrix representation of any time-reversal independent operator at ϕ should be the complex conjugate of its matrix representation at . When = - +, the matrix representation is real. In the symmetry adaptation procedure, if a given complex vector ψ belongs to an irreducible set of the group, the pair of real vectors ψ^+ = 1/2(ψ + ψ^*) and ψ^- = 1/2i(ψ - ψ^*) belongs to the same irreducible co-representation. Therefore, it is possible to construct a real symmetry adapted basis set from the complex eigen vectors by simultaneous diagonalization <cit.>. The procedure we use to obtain ^(α̂), ^ is summarized in Algorithm <ref>. § APPLICATION TO DIAGRAMMATIC CALCULATIONS §.§ Finite-temperature GW for periodic systems In finite-temperature many-body theory, the electronic properties of a system are characterized by the single-particle Green's function G^_ij, σ(τ) = -⟨ c_iσ^(τ) c_jσ^†(0) ⟩ , where β = 1/k_B T is the inverse temperature, τ∈ [0, β] is the imaginary time, and 𝒯 the time ordering operator <cit.>. The interacting Green's function is related to the non-interacting Green's function via the Dyson equation [^(iω_n)]^-1 = [_0^(iω_n)]^-1 - ^(iω_n) = (iω_n + μ) ^ - ^_0 - ^(iω_n) , in which μ is the chemical potential, ^ is the overlap matrix of the basis functions S^_ij = ∫_Ω d𝐫 g^*_i(𝐫)g^_j(𝐫) , and ^(iω_n) is the Matsubara frequency Green's function ^(iω_n) = ∫_0^β dτ ^(τ) e^iω_n τ , with ω_n = (2n+1)π / β fermionic Matsubara frequencies. The self-energy ^(iω_n) is a function of the full Green's function ≡[], which can be split into static and dynamic parts [](iω_n) = ^(HF)[] + [] (iω_n) . In many-body simulations of realistic systems, the self-energy is usually approximated by low-order diagrams. The self-consistent GW method is a common choice; it approximates the dynamical part of the self-energy as the sum of an infinite series of RPA-like “bubble” diagrams <cit.>. 
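Before turning to the explicit GW expressions, the Dyson solve above, performed independently for every momentum and every Matsubara frequency, can be sketched in a few lines. The shapes and the dense low-frequency grid are chosen for brevity only (the calculations in this paper sample a sparse IR grid), and Σ denotes the full, static plus dynamical, self-energy:

# G^k(iw_n) = [ (iw_n + mu) S^k - H_0^k - Sigma^k(iw_n) ]^{-1}, solved per (n, k).
import numpy as np

def dyson(S, H0, Sigma, mu, beta):
    """S, H0: (nk, nao, nao); Sigma: (nw, nk, nao, nao) full self-energy.
    Returns the Matsubara Green's function with the same shape as Sigma."""
    nw, nk = Sigma.shape[:2]
    iw = 1j * (2 * np.arange(nw) + 1) * np.pi / beta   # fermionic Matsubara frequencies
    G = np.empty_like(Sigma, dtype=complex)
    for n in range(nw):
        for k in range(nk):
            G[n, k] = np.linalg.inv((iw[n] + mu) * S[k] - H0[k] - Sigma[n, k])
    return G

In the self-consistent loop the chemical potential μ is additionally adjusted so that the trace of the resulting density matrix reproduces the fixed electron number.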
The explicit equation for the GW self-energy reads (Σ̃^GW)^_iσ,jσ (τ) = -1/N_k∑_∑_ab∑_QQ' G^-_aσ,bσ(τ)V^,-_ i a(Q) ^_QQ'(τ)V^-,_ b j(Q') . In Eq. <ref>, ^_i _j decomposes the Coulomb interaction as U^_i_j_k_l_ i j k l = ∑_QV^_i_j_ i j(Q)V^_k_l_ k l(Q) , with Q an auxiliary basis index. We use density fitted interactions  <cit.> in our calculations. These tensors can be expressed as V^_i_j_ i j(Q) = ∑_P (𝐉^)^-1/2_QP C^_i_j_ i j(P) , with the relations = _j - _i and ^- = ^ *, where J^ is the two-center integral related to the auxiliary basis. The explicit expressions of the density fitting integrals can be found in Ref. <cit.>. The auxiliary function ^(τ) is defined through the geometric series 𝐏^𝐪(iΩ_n) = [𝐈 - 𝐏^𝐪_0(iΩ_n)]^-1𝐏^𝐪_0(iΩ_n) , P^_QQ(τ) = 1/β∑_nP^𝐪_QQ'(iΩ_n)e^-iΩ_nτ , where Ω_n = 2nπ / β are bosonic Matsubara frequencies. ^_0(τ) is an auxiliary function related to the `bare bubble', which is defined as P^_0,QQ'(τ) = -1/N_k∑_∑_σσ'∑_abcdV^,+_ d a(Q) × G^_cσ',dσ(-τ)G^+_aσ bσ'(τ)V^+,_ b c(Q') . The static self-energy is the Hartree-Fock self-energy obtained with the interacting density matrix, ^(HF) = ^ + ^𝐤 , J^𝐤_iσ,jσ = 1/N_k∑_𝐤'∑_σ_1∑_ab∑_QV^𝐤𝐤_ ij(Q)γ^𝐤'_aσ_1,bσ_1V^𝐤'𝐤'_ b a(Q) , K^𝐤_iσ,jσ' = -1/N_k∑_𝐤'∑_ab∑_QV^𝐤𝐤'_ i a(Q)γ^𝐤'_aσ,bσ'V^𝐤'𝐤_ b j(Q) , where γ^ = ^(τ=0^-) is the density matrix. The self-consistent GW method solves Eqs. <ref>,<ref>,<ref>,<ref> in a self-consistent manner for a system with a fixed number of electrons N_e = 1/N_k∑_Tr[γ^^]. For detailed derivations and a motivation of the formalism used here see Refs. <cit.> and <cit.>. §.§ Rotation to the irreducible Brillouin zone To obtain the solution of the self-consistent many-body equations, it is necessary to compute the Green's function and self-energies at all momentum sampling points within the first Brillouin zone. These evaluations can be simplified with symmetry considerations. Using rotation matrices that rotate orbitals between different k points is a common way of reducing computational complexity. This formalism is implemented in essentially all Hartree Fock and DFT codes, including in Refs. <cit.>. The transformation of the matrix representation of an operator X̂ that is invariant under a given symmetry operation α̂ reads 𝐗^ = 𝐎^(α̂) 𝐗^ 𝐎^†(α̂) , where X_ij^ = ⟨ g_i^ | X̂ | g_j^⟩ and α = +. In the self-consistent GW method outlined in Section <ref>, the transformations of the overlap matrix , bare Hamiltonian _0, Green's function and self-energy follow Eq. <ref> with 𝐎^(α̂) ≡^_ao(α̂) the representation matrix in atomic orbital space. Besides the calculation of matrix-valued functions, the calculation and storage of the tensor-valued decomposed interaction tensor ^_i _j also serves as a potential bottleneck in many-body calculations. The transformation of ^_i _j can be derived from its definition Eq. <ref>. The two-index matrix ^ transforms as Eq. <ref> with 𝐎^(α̂) ≡^_aux(α̂) the representation matrix in auxiliary basis space, and the three-index tensor transforms as ^_i_j = ^(α̂) ^_i(α̂) ^_i_j O^_j † (α̂) . Threrfore, the transformation of the decomposed interaction can be written as ^_i_j = (^)^-1/2^(α̂) (^)^1/2^_i(α̂) ^_i_j O^_j † (α̂) ^_i_j= ^(α̂) ^_i(α̂) ^_i_j O^_j † (α̂) , ^(α̂) = (^)^-1/2^(α̂) (^)^1/2 . Given the transformation of ^_i _j, the transformations of _0^ and ^ can be derived from their definitions in Eqs. <ref>, <ref> since the auxiliary indices contained in these two objects originated from those of ^_i _j. The rotation of _0^ and ^ reads ^ = ^(α̂) ^ ^†(α̂) . 
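As a concrete illustration of these transformations, the following sketch unfolds a two-index quantity stored only on the irreducible wedge to the full Brillouin zone. The bookkeeping containers (the star map and the collection of representation matrices) are hypothetical names chosen for the illustration, not data structures of the code described here:

# Unfold X^k (e.g. the Green's function or self-energy) from the irreducible wedge
# to the full Brillouin zone via X^{alpha k} = O^k(alpha) X^k O^k(alpha)^dagger.
import numpy as np

def unfold_to_full_bz(X_ibz, star_map, O):
    """X_ibz[ik] : (nao, nao) matrix at the ik-th irreducible momentum.
    star_map[kf] : pair (ik, iop) giving the irreducible representative and the
                   symmetry operation that maps it onto the kf-th full-BZ point.
    O[ik][iop]   : orbital (or auxiliary-basis) representation matrix O^k(alpha)."""
    nao = X_ibz[0].shape[0]
    X_full = np.empty((len(star_map), nao, nao), dtype=complex)
    for kf, (ik, iop) in enumerate(star_map):
        R = O[ik][iop]
        X_full[kf] = R @ X_ibz[ik] @ R.conj().T
    return X_full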
In momentum space, sets of k points connected by symmetry operations are called stars <cit.>. From each star, only one representative is chosen; properties at other elements of the star are regenerated by transformation. The set of representatives of the star delineates the irreducible wedge or irreducible Brillouin zone (IBZ). This formulation reduces the calculations from the full Brillouin zone into the irreducible wedge. All two-index matrices (which are dependent on only a single momentum index) can be computed within the IBZ and subsequently transformed to other k points using the equations outlined in Eqs. <ref>, <ref>. In tensors with more than two indices, such as three-index tensors which depend on two momenta, it is feasible to choose one momentum index confined within the IBZ while the remaining indices span the full Brillouin zone; the choice of the index within the IBZ is arbitrary. §.§ Block diagonalization In systems where a large number of momentum points lie on the surface of the IBZ, it is advantageous to further reduce the computational costs of all the tensor contractions and linear solvers by block diagonalization. The matrix at each k point can be transformed into a block diagonal form with the unitary transformation ^_block = ^†^𝐔^ . To obtain the corresponding block form of the decomposed interaction, we consider the three-index tensor and two-index matrix in Eq. <ref> separately. The three-index tensor transforms as ^_i _j _block = ^_i †^† C^_i _j ^_j , and the two-index tensor ^ can be block diagonalized using the transformation in Eq. <ref>. To ensure the block diagonalized structure of _0^ and ^, we perform block diagonalization of ^ first and then decompose each block separately to obtain (^)^-1/2 when constructing the decomposed interaction in Eq. <ref>. The rotation between block diagonalized matrices at different k points can be achieved using the block transformed rotation matrix ^_block(α̂) = ^†^(α̂) ^ . Only non-zero blocks are considered in the evaluation of diagrammatic equations. § RESULTS FOR MANY-BODY CALCULATIONS §.§ Floating point operations The computationally expensive part in the self-consistent GW calculation is the evaluation of the self-energy (Eqs. <ref>, <ref>, <ref>). As an illustrative example, we choose 4 compounds consisting of elements in group III, IV, V to show the reduction of the total number of floating-point operations with symmetry adaptation. See Table <ref> for the detail structure information of all considered compounds. All the space group analysis are performed using the software <cit.>. Table <ref> summarizes the results on a momentum mesh of size n_k × n_k × n_k centered at the Γ point with n_k = 1, 2, 4, 6. All calculations are performed using the basis and the auxiliary basis with 114 imaginary time points and 103 bosonic frequency points on an intermediate representation (IR) grid <cit.>. We use the package <cit.> to generate the integrals in Bloch wave bases and block-diagonalize them as described in section  <ref> before starting the self-consistent GW calculation. When n_k=1, the number of floating-point operations is computed using the expression for type instead of as all matrices and tensors are real in this case. As shown in the table, as the total number of k points increases, the inter-k point rotations significantly reduce the overall number of floating-point operations, while block diagonalization is particularly advantageous in systems with few momentum points. 
This is because with an increasing number of k points, fewer points lie on the surface of the IBZ and the number of operations in the little group G_ of most of the considered k points is low. Therefore, there will not be many zero blocks in most of the matrices, and the computational advantage of doing block diagonalization decreases <cit.>. In general, employing both rotation and block diagonalization results in roughly one order of magnitude fewer floating-point operations across all the examples considered. §.§ GPU acceleration Symmetry adaptation offers the possibility to expose otherwise inaccessible opportunities for parallelism. To facilitate the expensive self-energy evaluations, we design an acceleration scheme leveraging both MPI and CUDA parallelization. Algorithm <ref> shows the pseudo code of a GPU accelerated GW self-energy solver. As shown, the computationally expensive evaluations in Eqs. <ref>, <ref> are performed on GPU, and Eq. <ref> is evaluated on CPU using MPI parallelization. The parallelization design in Ref. <cit.> assigns the calculation of each point to one GPU card. For calculations without symmetry transformations, this design avoid storing the full auxiliary tensor ^(τ) of all points, but also limits the maximum number of GPU card used to the number of points. In contrast, in the scheme introduced here for symmetry-adapted calculations, Eq. <ref> is evaluated for in IBZ and Eq. <ref> is evaluated for in IBZ, with both ^(τ) and ^(τ) in block diagonal form. Besides reducing the flop count, this transformation also reduces the memory required for storing these quantities. We treat and in Eqs. <ref> and <ref> on equal footing by generating (_1, _2) pairs with _1 in IBZ and _2 in full BZ. Multiple asynchronous streams (parallel threads) are created on each GPU card and the calculation associated with each (_1, _2) pair is assigned to these streams. Asynchronous stream handling allows for overlap between dense tensor contractions of all (, ) pairs. For each (, ) pair, the loop over n_τ are performed in a batched way using strided batched cuBlas calls to reach peak performance. See Algorithm <ref> and <ref> for the pseudo code of GPU calculations. As each GPU card usually has less memory than a CPU node, one critical aspect of GPU acceleration is the balance between memory allocation on GPU and memory transfer between GPU and CPU. In our design, the Green's function 𝐆^(τ), self-energy ^(τ), auxiliary functions ^_0(τ), ^(τ), and rotation matrices ^_ao, ^_aux for all required momenta are stored in the shared memory of each GPU card to avoid frequent copy of memory between GPU and CPU. The decomposed interaction ^_1 _2 for all (_1, _2) pairs are usually too large to store on the GPU card, even with symmetry adaptation. Therefore, we store it either on disk or in the shared memory of each CPU node and copy data to the GPU on demand. As all streams created on the GPU cards may run in parallel, each stream stores a local copy of ^,(τ), _0^,(τ) and ^_1 _2 for a single (, ) pair, such that ^(τ) = ∑_^,(τ), _0^(τ) = ∑__0^,(τ). Fig. <ref> shows a profile of Algorithms <ref>, <ref> in our implementation. The profile is obtained on a cluster with each node containing eight V100 NVIDIA GPU cards. We use Si on a momentum mesh of size 4 × 4 × 4 centered at Γ point as the test system. The number of orbitals, auxiliary basis, and irreducible k points can be found in Tables <ref>, <ref>. Sixteen asynchronous streams are created on each GPU card. 
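The work decomposition just described, with (k₁ ∈ IBZ, k₂ ∈ full BZ) pairs spread over MPI ranks and then over asynchronous streams on each GPU card, and the τ loop executed as one strided batched multiplication per pair, can be sketched at the level of the scheduling logic. The sketch below is plain Python with hypothetical names and only illustrates how the pairs might be distributed; it is not the CUDA implementation:

# Schematic round-robin assignment of (k1 in IBZ, k2 in full BZ) momentum pairs
# to MPI ranks and, within each rank, to asynchronous GPU streams.
from itertools import product

def schedule_pairs(n_ibz, n_full_bz, n_ranks, n_streams=16):
    pairs = list(product(range(n_ibz), range(n_full_bz)))
    plan = {(r, s): [] for r in range(n_ranks) for s in range(n_streams)}
    for i, pair in enumerate(pairs):
        rank, stream = i % n_ranks, (i // n_ranks) % n_streams
        plan[(rank, stream)].append(pair)
    return plan

# Each stream loops over its pairs; for one pair, the n_tau imaginary-time slices are
# contracted in a single strided batched matrix-multiplication call, and the partial
# bare-bubble / self-energy accumulators kept per stream are summed afterwards.
plan = schedule_pairs(n_ibz=8, n_full_bz=64, n_ranks=4)
print(len(plan[(0, 0)]), "pairs assigned to rank 0, stream 0")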
As shown in the figure, the GPU kernels for the bare-bubble (𝐏_0) and self-energy contractions show almost ideal speedup. § CONCLUSIONS In conclusion, we have presented results for the implementation, numerical cost, and scaling of the symmetry adaptation of solids within a self-consistent many-body formalism. Our method relied on the simultaneous diagonalization of Dirac characters <cit.> as pioneered in the electronic structure context by <cit.> and employed an efficient parallelization scheme on graphics accelerators. Multiple future directions for exploration are evident. First, while this work focused on standard space group symmetries, generalizations of the framework to magnetic space groups or `Shubnikov' groups <cit.> are straightforward and may be required for the efficient simulation of magnetic or relativistic systems. Second, the knowledge of symmetries and representations offers a straightforward connection to the interpretation of optical response functions, facilitating the interpretation and simulation of such experiments. Finally, more precise simulation methods may employ higher-order terms in diagrammatic perturbation theory, such as `vertices', whose calculation will benefit both from the symmetry analysis and from the reduction in computational effort presented here. § ACKNOWLEDGMENTS EG was funded by the US National Science Foundation via grant NSF OAC 2310582. We thank Sergei Iskakov for useful comments.
http://arxiv.org/abs/2405.10303v1
20240516175616
Asymmetric Warm Dark Matter: from Cosmological Asymmetry to Chirality of Life
[ "Wen Yin", "Shota Nakagawa", "Tamaki Murokoshi", "Makoto Hattori" ]
hep-ph
[ "hep-ph", "astro-ph.CO", "astro-ph.GA", "astro-ph.IM", "physics.bio-ph" ]